
Defining logical qubits: criteria for Resilient Quantum Computation

28 November 2023 at 17:00

What is a logical qubit?

In June 2023, we described how quantum computing must graduate through three implementation levels (Quantum Computing Implementation Levels, QCILs) to achieve utility scale: Level 1 Foundational, Level 2 Resilient, and Level 3 Scale. All quantum computing technologies today are at Level 1, and while NISQ machines are all around us, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient computation across a sea of logical qubits, something that, to the best of our current knowledge, can only be achieved with error correction and fault tolerance. Fault tolerance will be an essential ingredient in any quantum supercomputer and in any practical quantum advantage.

The first step toward reaching practical quantum advantage is to demonstrate resilient computation on a logical qubit. However, just one logical qubit will not be enough; ultimately the goal is to show that quantum error correction helps non-trivial computation instead of hindering it, and an important element of this non-triviality is the interaction between qubits and their entanglement. Demonstrating an error-corrected resilient computation, initially on two logical qubits, that outperforms the same computation on physical qubits will mark the first demonstration of a resilient computation in our field's history.

The race is on to demonstrate a resilient logical qubit, but what is a meaningful demonstration? Before our industry can declare victory on reaching Level 2 for a given quantum computing hardware and claim the demonstration of a resilient logical qubit, it's important to align on what this means.

Criteria of Level 2: resilient quantum computation

How should we define a logical qubit? The most meaningful definition hinges on what one can do with that qubit: demonstrating a qubit that can only remain idle, that is, be preserved in a memory, is not meaningful if one cannot demonstrate non-trivial operations as well. Therefore, it makes sense to define a logical qubit such that it allows some non-trivial, encoded computation to be performed on it.

Distinct hardware comes with distinct native operations, which presents a significant challenge in formally defining a logical qubit; the definition should not favor one hardware over another. To address this, we propose a set of criteria that mark the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a "logical qubit".

Entrance criteria to Level 2

Exiting Level 1 (NISQ computing) and entering Level 2 (Resilient quantum computing) is achieved when fewer errors are observed on the output of a logical circuit using quantum error correction than on the analogous physical circuit without error correction. We argue that a demonstration of the resilient level of quantum computation must satisfy the following criteria:

  • involve at least 2 logical qubits
  • demonstrate a convincingly large separation (ideally 5-10x) between the logical error rate and the physical error rate on a non-trivial logical circuit
  • correct all individual circuit faults (the "fault distance" must be at least 3; see the sketch after this list)
  • implement a non-trivial logical operation that generates entanglement between logical qubits

The justification for these is largely self-evident: being able to correct errors is how resiliency is achieved, and demonstrating an improvement over physical error rates is precisely what we mean by resiliency. Still, it is worth emphasizing the requirement for logical entanglement: our goal is to achieve advantage with a quantum computer, and an important ingredient of advantage is entanglement across at least 2 logical qubits.
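To make the fault-distance criterion concrete, here is a minimal Q# sketch of the smallest code that corrects any single bit-flip fault, the 3-qubit repetition code (an illustration only, with names of our choosing; it protects against bit flips alone and is far simpler than what a genuine Level 2 demonstration would use):

```qsharp
namespace Resilience.Sketch {
    open Microsoft.Quantum.Intrinsic;

    /// Encodes |+> into three physical qubits, injects a single
    /// bit-flip fault, then locates and corrects it using the two
    /// parity-check (syndrome) measurements Z0Z1 and Z1Z2.
    operation CorrectSingleBitFlip() : Unit {
        use qs = Qubit[3];

        // Encode: a|0> + b|1>  ->  a|000> + b|111>.
        H(qs[0]);
        CNOT(qs[0], qs[1]);
        CNOT(qs[0], qs[2]);

        // Inject one single-qubit fault anywhere in the block.
        X(qs[1]);

        // Joint parity measurements reveal which qubit flipped
        // without disturbing the encoded superposition.
        let s01 = Measure([PauliZ, PauliZ], [qs[0], qs[1]]);
        let s12 = Measure([PauliZ, PauliZ], [qs[1], qs[2]]);

        // Decode the syndrome and apply the correction.
        if (s01 == One and s12 == Zero) { X(qs[0]); }
        if (s01 == One and s12 == One)  { X(qs[1]); }
        if (s01 == Zero and s12 == One) { X(qs[2]); }

        ResetAll(qs);
    }
}
```

Because every single fault produces a distinct syndrome, this code has fault distance 3; the entrance criteria above ask for at least the same property in a circuit that also entangles two logical qubits.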

The distinction between the Resilient Level and the Scale Level is also important to emphasize: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, we find it important to allow some forms of post-selection, with the following requirements:

  • post-selection acceptance criteria must be computable in real time (but may be implemented in post-processing for a demonstration)
  • post-selection should be scalable (the rejection rate can be made vanishingly small)
  • if post-selection is not scalable, it must at least correct all low-weight errors in the computation (with the exception of state preparation, since post-selection in state preparation is scalable)

In other words, post-selection must be either fully compatible with scalability, or it must still allow for demonstration of the key ingredients of error correction, not simply error detection.

Measuring progress across Level 2

Once a quantum computing hardware has entered the Resilient Level, it is important to also be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale, as the requirements to reach Scale include achieving upwards of 1,000 logical qubits with logical error rates better than 10^-12, a mega-rQOPS, and more.

Progress toward scale may be measured along four axes: universality, scalability, fidelity, and composability. We offer the following ideas to the community on how to measure progress across these four axes, so that we as a community can benchmark progress in the resilient level of utility-scale quantum computation:

  • Universality: Universality typically splits into two components: Clifford group gates and non-Clifford group gates. Does one have a set of high-fidelity Clifford-complete logical operations? Does one have a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations. Of course, different hardware may employ different strategies.
  • Scalability: At its core, the resource requirement for advantage must be reasonable (i.e., a small fraction of Earth's resources or a person's lifetime). More technically, the quantum resource overhead required should scale polynomially with the target logical error rate of any quantum algorithm. Note also that some systems may achieve very high fidelity but have limited numbers of physical qubits, so that improving the error correction codes in the most obvious way (increasing distance) may be difficult.
  • Fidelity: Logical error rates of all operations improve with code size (sub-threshold). More strictly, one would like to see logical error rates better than physical error rates (sub-pseudothreshold). Progress on this axis can be measured with Quantum Characterization, Verification, and Validation (QCVV) performed at the logical level, or with other operational tasks such as Bell inequality violations and self-testing protocols.
  • Composability: Composable gadgets for all logical operations.

Criteria to advance from Level 2 to Level 3, a quantum supercomputer

The exit from the resilient level of logical computation, and the achievement of the world's first quantum supercomputer, will be marked by large-depth computations on high-fidelity circuits involving upwards of hundreds of logical qubits: for example, a logical circuit on ~100+ logical qubits with a universal set of composable logical operations hitting a fidelity of ~10^-8 or better. Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits with a logical error rate of 10^-12 and a mega-rQOPS. Performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS).

Conclusion

It's no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts us on the path to ultimately achieving practical quantum advantage. If you have thoughts on these criteria for a logical qubit, or on how to measure progress, we'd love to hear from you.



Microsoft and Photonic join forces on the path to quantum at scale

8 November 2023 at 14:00

We are excited to announce a strategic co-innovation collaboration with Photonic Inc., a company focused on building scalable, fault tolerant, and distributed quantum technologies. Our shared mission is to unlock the next stages in quantum networking and empower the quantum computing ecosystem with new capabilities enabled by our unique and complementary approaches to scalable quantum infrastructure.

By combining Photonic's novel spin-photon architecture, which natively supports quantum communication over standard telecom wavelengths, with the global scale and state-of-the-art infrastructure of Azure, we will work together to integrate quantum networking capabilities into everyday operating environments. Together, we aim to deliver new technologies that enable reliable quantum communication over long distances and accelerate scientific research and development with quantum computing devices to be integrated into Azure Quantum Elements.

A video still of three people sitting in chairs on a stage with a blue background and white dots. Seated from left to right are Jason Zander, Executive Vice President at Microsoft, Dr. Stephanie Simmons, founder and Chief Quantum Officer at Photonic, and Ester De Nicolas Benito, Senior Director at Microsoft.
Tune in to learn more about the Microsoft and Photonic collaboration.

Powering the quantum ecosystem with the next stage of quantum networks

"We are thrilled about joining forces with Photonic in improving the world through quantum technologies. There is an opportunity to ignite new capabilities across the quantum ecosystem extending beyond computing, such as networking and sensing, and unlocking applications and scientific discovery at scale across chemistry, materials science, metrology, communications, and many other fields. The capabilities we aim to deliver with Photonic can enable this vision and bring about quantum's impact far more quickly than otherwise possible."Jason Zander, Executive Vice President of Strategic Missions and Technologies, Microsoft.

Realizing this vision requires a fundamental capability: entanglement distribution over long distances. Photonic’s unique architecture is based on highly connected silicon spin qubits with a spin-photon interface. By using a qubit with a photon interface, this novel approach communicates using ultralow-loss standard telecom fibers and wavelengths. When paired with the Microsoft global infrastructure, platforms, and scale of the Azure cloud, this technology will integrate new quantum networking capabilities into everyday operating environments.

An animation titled 'Unlocking a quantum network' which demonstrates information being entangled and transmitted between two quantum computers through a beam splitter and photon detectors.

Together, Microsoft and Photonic will address three stages of quantum networking.

  • At the Stage 1 physical layer, we will aim to deliver entanglement between two separate quantum devices via photons through telecom fiber.
  • To enable the Stage 2 link layer, we will aim to deliver a never-before-demonstrated quantum repeater that can capture, entangle, and hold quantum information reliably for a short time.
  • Finally, at the Stage 3 network layer, our co-innovation collaboration will focus on delivering a reliable quantum repeater, one that is fault tolerant and operational with our Azure cloud. With this technology, we can overcome distance limitations in the network and enable a full-scale, global quantum internet.

A diagram explaining the three stages of quantum networking: stage 1 physical, stage 2 link, and stage 3 network. Diagram includes illustrations depicting point-to-point, many-to-one, and quantum internet connections.

Co-innovating to accelerate scientific discovery

"It will take a global ecosystem to unlock the full promise of quantum computing. No company or country can do it alone. That's why we're incredibly excited to be partnering with Microsoft to bring forth these new quantum capabilities. Their extensive global infrastructure, proven platforms, and the remarkable scale of the Azure cloud make them the ideal partner to unleash the transformative potential of quantum computing and accelerate innovation across the quantum computing ecosystem."Dr. Stephanie Simmons, founder and Chief Quantum Officer of Photonic, and the Co-Chair of Canada's National Quantum Strategy Advisory Board.

It is only through global collaboration and co-innovation that we will be able to empower people to unlock solutions to the biggest challenges facing our industries and our world. Just like the cloud democratized access to supercomputers, once available only to governments, research universities, and the most resourced corporations, we are on a mission to engineer a fault-tolerant quantum supercomputing ecosystem at scale on Azure. Last June, we announced our roadmap to a Level 3 quantum supercomputer along with peer-reviewed research demonstrating that we've achieved our first milestone.

Scientific discovery is crucial to our global future, and we want to empower scientists today with the best available offerings in the ecosystem. That is why, as part of our co-innovation collaboration, we plan to integrate Photonic's unique quantum hardware into our Azure Quantum Elements offering as it becomes available. Our collaboration with Photonic seeks to enable scientific exploration at Level 1, foundational quantum computing, with a firm commitment to reach higher levels of resilience and scale on the path to quantum supercomputing.

A close up image of Photonic's photonically linked silicon spin qubit chip with copper and gold components on a black background.
A novel approach: photonically linked silicon spin qubits.

With Azure Quantum Elements, your quantum solutions will be completely integrated with high-value advancements in high-performance computing (HPC) and AI, so you can transform your research and development processes today with the certainty that you will be ready to adopt quantum supercomputing at scale seamlessly in the future. You can sign up for the Private Preview of Azure Quantum Elements now.

To learn more about how Microsoft and Photonic will be working together to advance the next stages of quantum networking and empower the quantum ecosystem with new capabilities, register for the January episode of the Quantum Innovator Series.

Learn more about Photonic Inc.

Photonic is building a scalable, fault-tolerant and unified quantum computing and networking platform, uniquely based on proven spin qubits in silicon. Photonic's platform offers a native telecom networking interface and the manufacturability of silicon. Headquartered in Vancouver, Canada, Photonic also has offices in the United States and the United Kingdom. To learn more about the company, visit their website.


Quantum networking: A roadmap to a quantum internet

1 November 2023 at 16:00

Quantum computing has the potential to tackle some of our most pressing global issues, from climate change to food security. We're dedicated to building a full-scale, fault-tolerant quantum computer that can help solve these challenges, and I'm frequently asked where quantum computing will have its biggest impact. The answers are coming more clearly into focus, and include, most notably, the simulation of chemical interactions and materials on the quantum level. Reaching practical quantum advantage will require progressing across three quantum computing implementation levels; here, I want to connect these topics to the field of quantum networking, to bring further clarity on how our industry as a whole will progress.

As with quantum computers, quantum networks are not meant to replace their classical counterparts. In fact, classical networking will remain the foundation of this technology. Quantum networking will extend existing networks to enable the exchange of quantum information, whether between quantum computers or classical endpoints. In turn, this means that a quantum network has the potential to unlock new capabilities by connecting remote quantum computers, solving larger-scale problems distributed on quantum clusters, and enabling precision metrology through entangled sensor networks.


A new networking paradigm

At its core, quantum communication concerns the sending and receiving of quantum information. Whereas today's conventional communication systems are based on classical physics, quantum communication employs the principles of quantum mechanics. The key to quantum networking is the sharing of quantum entanglement. For instance, to transmit (or "teleport") a qubit state in a quantum network, the sender and receiver first share a generic resource: two entangled qubits, each party holding one of the two. When the sender is ready to transmit a particular qubit state, they entangle it with their half of the entangled pair and measure both qubits. This produces two bits, which are sent to the receiver over a classical network; the receiver uses these, together with their half of the entangled pair, to reconstruct the self-same state.
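For the curious, here is a minimal sketch of this protocol in Q# (a standard textbook teleportation circuit written for illustration; the operation and namespace names are ours, not from an Azure Quantum sample):

```qsharp
namespace QuantumNetworking.Sketch {
    open Microsoft.Quantum.Intrinsic;

    /// Teleports the state of 'msg' onto 'target' using one
    /// pre-shared entangled pair and two classical bits.
    operation Teleport(msg : Qubit, target : Qubit) : Unit {
        use here = Qubit();

        // Step 1: sender and receiver share an entangled pair;
        // 'here' stays with the sender, 'target' with the receiver.
        H(here);
        CNOT(here, target);

        // Step 2: the sender entangles the message qubit with
        // their half of the pair and measures both qubits.
        CNOT(msg, here);
        H(msg);
        let bit1 = M(msg);
        let bit2 = M(here);

        // Step 3: the two classical bits travel over the classical
        // network; the receiver applies conditional corrections.
        if (bit2 == One) { X(target); }
        if (bit1 == One) { Z(target); }

        Reset(here);  // return the ancilla to |0> before release
    }
}
```

Note that the two measurement outcomes carry no information about the transmitted state on their own, which is why the classical bits can travel over an ordinary network without compromising the qubit.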

With this in mind, we are looking for an intentional approach to defining quantum network challenges that captures interoperability across each layer of a "quantum networking stack." I am currently thinking about this as evolving through three stages.

A blue background with a description of the quantum networking stages: Stage 1, the Physical layer (point-to-point), entanglement delivered between two separate quantum devices; Stage 2, the Link layer (many-to-one), connections between sites; Stage 3, the Network layer (quantum internet), long-distance quantum communication.
  • Stage 1: Any network is built on top of point-to-point connections. I expect that the initial stage of quantum network development will be defined by technology that enables a quantum analog of the Physical layer of the networking stack, where entanglement can be established between two separate quantum devices.
  • Stage 2: As there are limitations to scaling point-to-point connections, I view the next stage of quantum networking as being defined by technology that enables the analog of a Link layer. At this stage, a quantum device can support and manage connections with many sites, delivering entanglement to any two as required.
  • Stage 3: The final stage of development should be characterized by technology that enables a Network layer for reliable long-distance quantum communication through a complex network, which relies on resilient quantum hardware at the sites.

I recognize that the mapping of the technological stages to networking layers is not perfect. Notably, a critical device to overcome distance limitations in a quantum network will be the quantum repeater. Such a device will perform entanglement swapping to reliably extend the distance between which two devices can become entangled, and so belongs to Stage 3 technologies. Yet in the networking stack, it is part of the Physical layer. Nonetheless, I feel that the driving factor in the future development of quantum networks will revolve around network connectivity, enabled by continuously improving quantum hardware.

Enabling a quantum internet

I imagine that "the quantum internet" can mean very different things to different people. Perhaps it is best then to discuss a quantum internet, which simply refers to a large system of distributed quantum computers interconnected with quantum links. This quantum internet is a separate, but co-existent, network alongside a classical network, which, in fact, might be the internet.

Today, there are several approaches for establishing entanglement between nearby noisy quantum machines (NISQ devices) in labs, so currently we are in the first stage of developing a quantum internet. However, to scale to truly large networks, we believe it's important to build upon current technology and use photons at telecom wavelengths.

It's impossible to scale a network where each pair of sites must communicate through a point-to-point connection. Thus, the second stage in this roadmap is to develop quantum devices that can on-demand distribute entanglement to multiple quantum endpoints. For instance, a "quantum hub" would have as its sole purpose distributing entanglement to any two neighboring sites, thereby relieving them of the need to have point-to-point connections with each other. Such a device could then enable a quantum local-area NISQ network.

One might view such a quantum hub as a NISQ repeater, as from the endpoint's perspective their communication has made one "hop" through the network. However, without resilient quantum hardware one cannot expect useful entanglement to survive more than a handful of such hops, restricting the network to a local area.

In this language, a quantum internet may be considered as a wide-area quantum network, which requires establishing entanglement between distant endpoints through multiple hops through the network. In the third stage of quantum networking, we can accomplish these hops through a method called "entanglement swapping" and ensure reliability by employing methods such as entanglement distillation.
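To make the third stage concrete, here is a minimal Q# sketch of entanglement swapping (an illustrative circuit only, with names of our choosing; a real repeater also needs quantum memory and entanglement distillation):

```qsharp
namespace QuantumNetworking.Sketch {
    open Microsoft.Quantum.Intrinsic;

    /// Starts from Bell pairs (a, b) and (c, d). A Bell-basis
    /// measurement on the inner qubits (b, c) at the repeater
    /// leaves the outer qubits (a, d) entangled, even though
    /// they never interacted directly.
    operation SwapEntanglement(a : Qubit, b : Qubit, c : Qubit, d : Qubit) : Unit {
        // Prepare the two Bell pairs held by neighboring links.
        H(a); CNOT(a, b);
        H(c); CNOT(c, d);

        // Bell-basis measurement at the intermediate node.
        CNOT(b, c);
        H(b);
        let m1 = M(b);
        let m2 = M(c);

        // Conditional Pauli corrections put (a, d) into a
        // definite Bell state; one network "hop" is complete.
        if (m2 == One) { X(d); }
        if (m1 == One) { Z(d); }
    }
}
```

Chaining this operation across successive links extends entanglement hop by hop, which is exactly the role of the quantum repeaters described above.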

Quantum key distribution

The envisioned stages of quantum networking are described from the perspective of transmitting quantum information between remote quantum devices. However, these stages may also be appropriate for quantum networks between classical sites. This is the case for quantum key distribution (QKD), where the only requirement at the endpoints is to create or detect photons. Current QKD hardware, based on the "BB84" protocol, may be considered stage one, as it relies on a point-to-point connection between two QKD devices. In the QKD protocol "E91", a central device distributes entangled pairs to the end users, so a QKD system that uses this protocol could be considered stage two. Device-independent QKD additionally performs self-testing to ensure the correct behavior of the system; while an imperfect analogy, this could be considered akin to reliability and so forms stage three of development.
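As an aside on why BB84 sits at stage one, here is a minimal Q# sketch of a single BB84 round over a direct point-to-point channel (an illustration with hypothetical parameter names; the random bit and basis choices are passed in as classical values):

```qsharp
namespace QuantumNetworking.Sketch {
    open Microsoft.Quantum.Intrinsic;

    /// One BB84 round. The result contributes a shared key bit
    /// only when senderBasis == receiverBasis; otherwise the
    /// round is discarded during sifting.
    operation BB84Round(senderBit : Bool, senderBasis : Bool, receiverBasis : Bool) : Result {
        use q = Qubit();
        if (senderBit) { X(q); }      // encode the classical bit
        if (senderBasis) { H(q); }    // optionally use the diagonal basis
        // ... the photon crosses the point-to-point fiber here ...
        if (receiverBasis) { H(q); }  // receiver's independent basis choice
        let outcome = M(q);           // matches senderBit when bases agree
        Reset(q);
        return outcome;
    }
}
```

The single qubit passes directly from one endpoint to the other, with no entanglement distributed by a third party, which is what distinguishes BB84 from the stage-two E91 setup described above.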

Today, QKD is considered part of the quantum-safe effort to provide security systems that are not vulnerable to quantum cryptanalysis. Although it does provide a different approach to some cryptographic tasks, it has fundamental technical limitations and therefore cannot be viewed as a complete solution. At Microsoft, our Quantum Safe migration effort is focused on post-quantum cryptographic algorithms, as recommended by cybersecurity agencies globally. Read more about our efforts in the space.

Application of quantum networks

As per the title of this blog, I have focused on quantum networking from the view of creating a quantum internet. Having separated endpoints is foundational for many quantum communication applications. For example, the security of some quantum protocols for anonymous voting relies on the voters being separated. In quantum metrology one uses the phenomenon that measuring half of an entangled system instantaneously affects the other half, regardless of the distance between them, to enable precise timing and position verification. In distributed quantum computing, blind computing protocols allow one party to delegate the computation of a quantum algorithm to another without revealing the input, output, or even the algorithm that was run.

Nonetheless, one should not think the only value of quantum networking is linking distant quantum computers. Modern supercomputers are built from many networked computing nodes that can operate as a single system. Perhaps future quantum computers will follow a similar design; the stages of development above would apply equally to switching and routing of quantum information in such a quantum cluster.

Next steps with Azure Quantum

There is still lots of work ahead, and as an industry we must continue to separate signal from noise when evaluating technological progress. However, as I continue to engage with both customers and our Azure Quantum team, my excitement for the possibilities ahead of us only grows. I believe that the collective genius and input from the community are important for refining the framing of quantum networking. We invite your comments and perspectives so that we can make progress together toward its future. For more information, you can visit the following resources:


Azure Quantum learning resources enable getting ready for a quantum supercomputer 

18 September 2023 at 17:00

Each day, the excitement and innovation around quantum computing grows. At the same time, it will require much more to reach the scale that's required of a quantum supercomputer to truly accelerate scientific discovery in new ways and solve some of our world's most challenging problems. To go from today's foundational-level quantum machines to tomorrow's scaled quantum supercomputers requires activating and compounding our collective genius. And it also requires, as an industry, acknowledging that there is still a lot of necessary work and invention to be done to achieve practical and advantageous applications of quantum computers.

We recently shared our roadmap to a quantum supercomputer, and announced that we've achieved the first milestone on that roadmap: creating and controlling Majorana zero modes. With this breakthrough, we demonstrated the physics necessary to create a new type of qubit that is small, fast, and digitally controllable, all of which are required to advance to a fault-tolerant, scaled machine and, critically, to unlock the path to a quantum supercomputer.

Reaching practical quantum advantage will require progressing across three quantum computing implementation levels. Today, all quantum computers are at the first level, Foundational, with machines made of noisy physical qubits (referred to as "NISQ" devices). As quantum computers progress, we'll move to the second level, Resilient, with machines made of 10s to 100s of reliable qubits (called "logical" qubits, each consisting of many physical qubits), and ultimately to the third level, Scale, with programmable quantum supercomputers capable of truly demonstrating useful quantum advantage. Our recent physics breakthrough is the first step towards advancing to the next level.

A green and blue background with a description of the Quantum Computing Implementation Levels: Level 1 "Foundational" (the physical qubit), Level 2 "Resilient" (the logical qubit), and Level 3 "Scale" (the quantum supercomputer).

Understanding what it takes to progress through these levels is crucial not just for measuring industry progress, but also for developing a robust strategy to build a quantum-ready community. After all, it will not be due to scientific and engineering innovations alone that we achieve scale; ultimately it will be thanks to the many people globally who make it happen. The road to scale will be galvanized by more diverse minds coming together around the table to accelerate graduating from one quantum computing implementation level to the next.

In pursuit of empowering more people with quantum knowledge, at IEEE Quantum Week and Quantum World Congress, we're excited to announce the availability of new learning resources, including the Azure Quantum katas: free, AI-assisted interactive tutorials to accelerate quantum computing learning and exploration. These resources build on the tools and platforms we've been developing for years in the Azure Quantum team, and enable learning not only for foundational quantum hardware available today, but also for the scaled quantum supercomputers of tomorrow.

A blue and purple background with the main benefits of the Azure Quantum katas: tutorials with theory and interactive hands-on exercises, with self-paced, AI-assisted learning focused on quantum computing and programming.

Becoming a quantum mechanic: new resources for skilling and exploration

So, what are Azure Quantum katas and why try them?

Several years ago, I taught a quantum algorithms and programming course at the University of Washington with Mariia Mykhailova, Principal Software Engineer at Microsoft Quantum. We were eager to introduce students to quantum computing and empower them with the knowledge of how to write quantum programs. Students learned how Q# programs could express complex quantum algorithm designs and were asked to explore quantum algorithms with quantum advantage and write their own programs that might run on fault-tolerant quantum supercomputers. We wanted students to really understand how to programmatically express quantum algorithms at scale, and that as an industry we'd have to move beyond NISQ devices to truly unlock the power of quantum computing.

But learning to program a quantum computer requires developing quantum fitnessstarting small, gaining strength in the concepts, and eventually commanding the techniques. There's also value in having a coach right alongside. And so, with this in mind, we built the course curriculum around an open-source project of exercises, called katas, which we released the year before and expanded to support the course. Students could solve the exercises, implement their solutions as Q# code and get immediate feedback, in turn allowing them to learn through practice, and subsequently develop their own more-complex quantum programs. Excitingly, some of those students liked it so much they joined us for internships, and one became a member of our quantum team. Several students also contributed additional katas to the project. Our collaboration with the University of Washington continues and through mentored projects, other students went on to develop Q# programs to understand just how many resources a quantum algorithm may need.

Witnessing the impact of katas firsthand led us to ask how we could bring these learning tools to a larger audience. This is why we're excited to bring these exercises to even more people globally, directly in the browser, to start on or continue their quantum learning path.

The Azure Quantum "katas" are free, self-paced programming exercises that teach the elements of quantum computing and the Q# programming language (the Japanese word for "form", a "kata" is a pattern for practicing and learning new skills). Each kata begins by explaining theory and concepts related to a quantum computing topic. These are followed by short, interactive coding exercises to help test your knowledge. The exercises are fully contained within the browser, no Azure subscription is required. These tutorials can help expand your knowledge of quantum computing and programming, starting with fundamentals such as qubit manipulation, and progressing to more advanced topics such as quantum algorithm development. Perhaps best of all, the new tutorials are integrated with Copilot in Azure Quantum, a natural language chat interface to help you learn quantum concepts and programming faster than ever before.

These kata exercises build on a continuum of tools already within Azure Quantum to empower people across all levels of expertise. For developers already familiar with quantum coding, Azure Quantum's Resource Estimator is another tool that allows you to create and refine quantum solutions to run on future, scaled quantum machines by modeling how many qubits will be needed to run an application, how long it will take to run, and which qubit technologies will be better suited to solving a specific problem.

Get started with Azure Quantum today

  • Whether you're starting your own learning journey, exploring quantum hardware, or developing quantum algorithms for the future, Azure Quantum offers a platform for your quantum exploration and innovation. You can also read the peer-reviewed research demonstrating that we've achieved the first milestone of our quantum roadmap.
  • For enterprises interested in accelerating scientific discovery today, you can learn more about the recently announced Azure Quantum Elements, Microsoft's system for computational chemistry and materials science combining the latest breakthroughs in HPC, AI, and quantum computing.

Learn more at IEEE sessions and workshops

We are excited to connect with you during IEEE Quantum Week 2023, to answer your questions and explore the possibilities for advancing your quantum research and development with Azure Quantum.

Please join us live or online at the following panels, workshops and tutorials:

Connect with the Azure Quantum team at Quantum World Congress 2023

If you are interested in connecting with us during Quantum World Congress 2023, join us live or on-demand online for our session:

  • Wednesday, September 27, 2023: 1:30 PM EST

Session: How our collective genius can unlock growth and progress with Quantum, with Dr. Krysta Svore in the Main Theatre at Capital One Hall in Tysons, VA.


Announcing season 2 of the Microsoft Quantum Innovator Series

Hero image for Innovator Series Season 2, including the title "The path to quantum at scale" and an image of a globe.

Announcing a new season of Microsoft Quantum Innovator Series Events

Get the inside, first-hand account of the Microsoft strategy to scaled quantum computing. In this series, you will hear directly from the Microsoft Azure Quantum scientists and leaders about the path to quantum at scale and how you can get involved today.

Why attend the next event?

  • Be among the first to learn about recent advancements.
  • Get inspired to drive quantum innovation in your organization.
  • Discover how quantum will transform various industries in the coming years.

Hero image for episode 1 of Innovator Series Season 2 with the title "how can we prepare for the future of the quantum workforce"

Episode 1 | October 10: How can you prepare for the future quantum workforce?

It will take the world's collective genius to realize the full promise of quantum computing. With increasing private, government, and academic investment in quantum research, now is the perfect time for innovators and developers to get ahead of the curve and cultivate their quantum computing knowledge and skills. Join this webinar to learn how Microsoft can help you become quantum-ready with world-class programming tutorials and a broad variety of learning materials and tools available through Azure Quantum.  

Speakers:

Dr. Wim van Dam, Principal Researcher, Advanced Quantum Development, Microsoft

Wim van Dam is a Principal Researcher in the Advanced Quantum Development group at Microsoft. His research focuses on quantum computation and quantum communication and his main interest is the development of new quantum algorithms that deliver a significant acceleration when compared with traditional, classical algorithms. Before joining Microsoft, Dr. van Dam was Head of Quantum Algorithms at QC Ware and a professor in the Departments of Computer Science and Physics at University of California, Santa Barbara.

Mariia Mykhailova, Principal Quantum Software Engineer, Advanced Quantum Development, Microsoft

Mariia Mykhailova is a Principal Software Engineer in the Advanced Quantum Development group at Microsoft. She works on developing software for fault-tolerant quantum computation. Mariia is also a part-time lecturer at Northeastern University, where she has taught Introduction to Quantum Computing since 2020, and the author of the O'Reilly book "Q# Pocket Guide".

Register for the episode.

Hero image for episode 2 of Innovator Series Season 2 with the title "How can Azure Quantum Elements accelerate scientific discovery today and in the future?"

Episode 2 | Nov 28: How can Azure Quantum Elements accelerate scientific discovery today and in the future?

Catalyzed by a new generation of AI, the world’s most advanced AI models are powering breakthroughs in chemistry and helping to usher in a new era of scientific discovery that will transform society. Even bigger breakthroughs will come with quantum supercomputing.  Join this webinar to learn how Microsoft is accelerating chemistry and materials science with Azure Quantum Elements and how industry innovators are transforming their research and development with quantum computing today. 

About the speaker:

Dr. Nathan Baker, Head of Partnerships for Chemistry and Materials, Azure Quantum, Microsoft

Nathan Baker is the Head of Partnerships for Chemistry and Materials, Azure Quantum at Microsoft. Previously, Nathan was a Laboratory Fellow in the Physical and Computational Sciences Directorate at Pacific Northwest National Laboratory (PNNL) and a faculty member at Washington University in St. Louis with roles that included Associate Professor (tenured) of Biochemistry and Molecular Biophysics and Director of the Biophysics PhD program. His research interests include the development of new algorithms in applied mathematics and data science to support applications in chemistry, biology, and other domains. Dr. Baker is a member of the Washington State Academy of Sciences, Fellow of the American Association for the Advancement of Science (AAAS), and a former Alfred P. Sloan Research Fellow. 

Register now.

Watch season one on-demand

Season one of the Quantum Innovator series kicked off with our first event, "Have you started developing for practical quantum advantage?" with Dr. Krysta Svore, distinguished engineer and VP of Quantum Software, Microsoft. During this webinar, you can:

  • Learn what's required for scalable quantum computing and what can be done now to get ready for it.
  • See the new Azure Quantum Resource Estimator, the first end-to-end toolset that provides estimates for the number of logical and physical qubits, as well as the runtime, required to execute quantum applications on post-NISQ, fault-tolerant quantum computers.
  • Understand the number of qubits required for a quantum solution and the differences between qubit technologies.
  • Explore how Microsoft is empowering innovators today by co-designing tools to optimize quantum solutions and to run small instances of algorithms on today's diverse and maturing quantum systems and prepare for tomorrow's scaled quantum computers.
  • Participate in a live Q&A chat with the Azure Quantum team and be one of the first to hear about recent advancements.

About the speaker:

Krysta Svore | Distinguished Engineer and Vice President of Advanced Quantum Development, Quantum at Microsoft

Dr. Svore has published over 70 refereed articles and filed over 30 patents. She is a Fellow of the American Association for the Advancement of Science. She won the 2010 Yahoo! Learning to Rank Challenge with a team of colleagues, received an ACM Best of 2013 Notable Article award, and was recognized as one of Business Insider's Most Powerful Female Engineers of 2018. A Kavli Fellow of the National Academy of Sciences, she also serves as an advisor to the National Quantum Initiative, the Advanced Scientific Computing Advisory Committee of the Department of Energy, and the ISAT Committee of DARPA, in addition to numerous other quantum centers and initiatives globally.

Microsoft Quantum Innovator Series: Why and what is the future of the topological qubit?

In our second episode from the first season, we focused on why Microsoft decided to design its quantum machine with topological qubits, an approach that is both more challenging and more promising than others, and what's next for Microsoft's hardware ambitions. This episode shares more about Microsoft's quantum hardware journey, specifically touching on Microsoft's physics breakthrough outlined in Dr. Nayak's paper, and also focuses on the physics behind the topological qubit. Join our speaker Chetan Nayak, Technical Fellow and VP of Quantum Hardware and Systems Engineering, Microsoft, to:

  • Learn about topological phases in physics and how they are applied to quantum computing.
  • Explore how topological properties create a level of protection that can, in principle, help a qubit retain quantum information despite what's happening in the environment around it.
  • Understand the role of the topological gap and the recently discovered Majorana zero modes, and how together they impact a topological qubit's stability, size, and speed.
  • Learn how to examine the raw data and analysis from Microsoft's hardware research on Azure Quantum.
  • Use interactive Jupyter notebooks and explore what's next in engineering the world's first topological qubit.
  • Participate in a live Q&A chat with the Azure Quantum team and be one of the first to hear about recent advancements.

About the speaker:

Chetan Nayak | Technical Fellow and VP of Quantum Hardware and Systems Engineering, Microsoft

Dr. Nayak is a pioneer of the study of quantum matter, including topological and non-equilibrium phases. He holds a bachelor's degree from Harvard and a PhD in physics from Princeton. He was an assistant, associate, and full professor at UCLA, a visiting professor at Nihon University in Tokyo, and is a professor of physics at UCSB. Chetan was a trustee of the Aspen Center for Physics and an editor of Annals of Physics. He is a Fellow of the American Physical Society and a recipient of an Alfred P. Sloan Foundation Fellowship and a National Science Foundation CAREER award. He has published more than 150 refereed articles with more than 20,000 citations and has been granted more than 20 patents.

Microsoft Quantum Innovator Series: What kind of problems can we solve today with quantum simulation?

Our third episode from season one featured Matthias Troyer, Microsoft Technical Fellow, discussing what kinds of problems we can solve today with quantum simulation. Learn how years of Microsoft research reveal that the discovery of new chemicals, materials, and drugs that will ultimately help solve the world's most challenging problems will greatly benefit from quantum computing. Dr. Troyer explains what is happening today and how chemical and materials science innovators can get started on their quantum journey:

  • Learn how real progress can be made today by combining high performance computing (HPC), state-of-the-art machine learning, and quantum knowledge to fundamentally transform our ability to model and predict the outcome of chemical processes.
  • Get real-world insights from co-innovation projects happening right now with leading chemical and materials science companies around the world.
  • Find out how researchers in chemical and materials fields can get started on their quantum journey today.
  • Participate in a live Q&A chat with the Azure Quantum team and be one of the first to hear about recent advancements.

About the speaker:

Matthias Troyer | Technical Fellow and Corporate Vice President, Microsoft

Matthias Troyer is a Technical Fellow and Corporate Vice President at Microsoft, working on the system architecture of quantum computers and their applications. After receiving his PhD in 1994 from ETH Zurich in Switzerland and spending time as a postdoc at the University of Tokyo, he was professor of Computational Physics at ETH Zurich until joining Microsoft in 2017. Matthias is a Fellow of the American Physical Society and President of the Aspen Center for Physics. He is a recipient of the Hamburg Prize for Theoretical Physics and the Rahman Prize for Computational Physics of the American Physical Society "for pioneering numerical work in many seemingly intractable areas of quantum physics and for providing efficient sophisticated computer codes to the community."

Learn more about Azure Quantum


Accelerating materials discovery with AI and Azure Quantum Elements

As part of the recent announcement of the private preview of Azure Quantum Elements, the Azure Quantum team combined new material property prediction AI models with high-performance computing (HPC) calculations to digitally screen candidates for improved battery materials. By incorporating fast AI models into the screening workflow, researchers were able to expand the initial search space from thousands of material candidates to tens of millions in roughly the same time. This acceleration highlights a paradigm shift enabled by the scale and speed of Azure Quantum Elements.

Solving societal challenges requires breakthroughs in chemistry and materials sciences 

More than ever, scientists need new technologies to help solve many of the most pressing issues facing society, like reversing climate change, addressing food insecurity, and developing lifesaving therapeutics. Fundamentally, these problems are chemistry and materials science challenges, and some will require the transformational power of a scaled quantum computer. While we are on a path to engineer a quantum supercomputer, we are also making investments in HPC and AI to empower researchers to accelerate scientific discovery and make rapid progress toward impactful solutions for our most pressing problems today.

That is why we recently announced the private preview of Azure Quantum Elements, a comprehensive system to empower R&D teams in chemistry and materials science with scale, speed, and accuracy by integrating the latest breakthroughs in HPC, AI, and quantum computing. Researchers and product developers can screen candidates, study mechanisms, and design both molecules and materials through state-of-the-art computing capabilities and enterprise-grade services. Industry innovators, including BASF, AkzoNobel, AspenTech, Johnson Matthey, SCGC, and 1910Genetics have already adopted Azure Quantum Elements to transform their research and development. 

Scaling molecular simulations with Azure HPC 

In a recent post, we highlighted how we're scaling the applications of molecular dynamics (MD) simulations with HPC capabilities in Azure Quantum Elements. Such workloads play an important role in life sciences by simulating the structure and dynamics of proteins, the ligands bound to them, and their associated affinities. This structural exploration can accelerate the innovation of better pharmaceuticals by modeling drug molecules and their relevant protein binding sites.  

In addition to applications in life sciences, MD simulations also play valuable roles in materials discovery by explaining relationships between material composition, structure, and dynamic properties. MD-calculated properties, such as thermal conductivity, ionic conductivity, and more, are often important filters in materials discovery pipelines. These MD-based filters can help researchers winnow a pool of materials candidates to a select few based on desired properties, which can then be tested in experimental settings.  

With traditional HPC-based computational material discovery, density functional theory (DFT) is typically used as the engine for computing forces in MD simulations. DFT-based calculation workflows have allowed researchers to explore and evaluate thousands of materials candidates. However, these calculations come at a significant computational cost. A single static DFT calculation, for instance, can require several minutes of CPU time. Geometric optimization can demand tens to hundreds of such calculations, while MD simulations can require millions or more.

Combining HPC with AI acceleration for materials discovery 

To accelerate computational materials discovery, we combined HPC calculations with three new AI models that relate material structure to (i) energy, force, and stress; (ii) electronic band gap; and (iii) bulk and shear moduli. The models were trained on millions of materials simulation data points to bypass HPC calculations by quickly predicting materials properties. Those capabilities allow researchers to filter material candidates based on properties like stability, reactivity, ionic conductivity, and more. When used as a force field, the AI materials models provide a 1,500-fold speedup over DFT calculations for geometric optimization of small systems with fewer than 100 atoms [1]. This speedup will be even greater for larger systems, due to the linear scaling of the AI model's execution time with system size and the much less favorable scaling of most DFT models. This result exemplifies the power of AI to perform thousands of calculations in the time required for a single HPC simulation.
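As a quick sanity check, the footnoted numbers at the end of this post are consistent with that figure: roughly 78 CPU hours (4,680 CPU minutes) per DFT-based structural relaxation versus a little more than 3 CPU minutes with the AI force field gives

$$\frac{4680\ \text{CPU min}}{3\ \text{CPU min}} = 1560 \approx 1500\text{-fold}.$$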

To demonstrate these acceleration capabilities, we developed a pipeline of AI- and HPC-based screening calculations allowing us to analyze tens of millions of initial candidates and narrow them down to a small sample set that best suits a particular manufacturing application. By combining both AI and HPC methods, we achieved remarkable acceleration in certain computational steps.   

The AI models used for this discovery process improve upon a graph neural network (GNN)-based universal interatomic potential, trained on a massive database of structural calculations performed by the Materials Project over the past decade [2]. That original model achieved top accuracy in a benchmark for thermodynamic materials stability predictions, with the lowest overall prediction mean absolute error [3], in turn emerging as a leader for AI-guided materials discovery.

Example application to rapid materials screening

Diagram showing an example screening process for filtering millions of materials candidates with HPC and AI
Figure 1 – Accelerating discovery with new AI models for material property prediction

To achieve these results, we started with approximately 30 million candidate materials, generated by replacing elements in known crystal structures with a sampling of elements across a subset of the periodic table, as shown in Figure 1. We then screened this pool of candidates with a workflow that combined our AI materials models with traditional HPC-based simulations.

The first phase of screening relied on fast AI model inference calls. The AI models were used to evaluate materials stability; this step narrowed our search space from about 30 million to approximately 500,000 candidates, avoiding materials that may decompose spontaneously. The AI models were also used to screen materials for important functional properties such as redox potential and electronic band gap, reducing the search space to about 800 candidates.

The second phase of screening relied on physics-based simulations accelerated with our AI models. The power of Azure HPC was used for DFT calculations to verify the properties predicted through fast AI screening in the first phase. Fast AI models have a non-zero error rate, so DFT validation re-computes the properties that the AI models predicted, serving as a higher-accuracy filter. This verification step was followed by MD simulations to model structural fluctuations in the material. Next, we used AI-accelerated MD simulations to evaluate the dynamic properties of the materials, such as atomic diffusivity. These AI-accelerated simulations used fast AI model inference calls for forces at each MD time step, rather than the much slower traditional approach of DFT-based force calculations. This second phase of screening narrowed the field to approximately 150 candidates. From here, we assessed certain practical considerations, such as novelty, mechanical properties, and materials availability, to identify a final set of approximately 20 candidate materials worth pursuing in a lab.

This case study highlights both the scale and speed of HPC-plus-AI solutions: we were able to screen 30 million candidates in approximately one week, demonstrating the research acceleration that Azure Quantum Elements provides. While we optimized this workflow for a specific manufacturing scenario, the materials AI models and associated HPC simulations have broad applications across diverse chemistry and materials science scenarios and demonstrate the overall feasibility of AI-accelerated materials discovery.

Azure Quantum Elements brings together years of Quantum, AI, and HPC research 

At Microsoft, we see great potential to accelerate chemistry and materials advances by integrating Azure's scaled HPC solutions with AI models tuned for scientific research. We also know that scaled quantum computing will deliver breakthrough accuracy in modeling the forces and energies of highly complex chemical systems, allowing insights into spaces that are currently intractable for classical computing. While we continue to achieve breakthrough milestones on the path to a quantum supercomputer, Azure Quantum Elements includes workflows and tools to prepare for a quantum future, providing solutions to determine which problems can be solved classically and which require a quantum computer, and to estimate the number of qubits and runtimes required for various quantum chemistry calculations. Furthermore, customers can start experimenting with existing quantum hardware and get priority access to Microsoft's future quantum supercomputer once available.

Learn more about Azure Quantum

We are excited to see how the power of the Azure cloud will help you. For more information, please visit the following resources:


1. Traditional approaches require approximately 78 CPU hours, or 4,680 CPU minutes, per structural relaxation. In this internal study, our AI models required a little more than 3 CPU minutes per structural relaxation, an over 1,500-fold speedup.

2. A universal graph deep learning interatomic potential for the periodic table, Nature Computational Science, 2022.

3. Matbench Discovery: Can machine learning identify stable crystals?, ICLR, 2023.


Microsoft achieves first milestone towards a quantum supercomputer

21 June 2023 at 15:00

In the virtual event, Azure Quantum: Accelerating Scientific Discovery, our Chairman and Chief Executive Officer Satya Nadella said it best, "Our goal is to compress the next 250 years of chemistry and materials science progress into the next 25."

In keeping with that goal, we are making three important announcements today.

  • Azure Quantum Elements accelerates scientific discovery so that organizations can bring innovative products to market more quickly and responsibly. This system empowers researchers to make advances in chemistry and materials science with scale, speed, and accuracy by integrating the latest breakthroughs in high-performance computing (HPC), AI, and quantum computing. The private preview launches in a few weeks, and you can sign up today to learn more.
  • Copilot in Azure Quantum helps scientists use natural language to reason through complex chemistry and materials science problems. With Copilot in Azure Quantum a scientist can accomplish complex tasks like generating the underlying calculations and simulations, querying and visualizing data, and getting guided answers to complicated concepts. Copilot also helps people learn about quantum and write code for today's quantum computers. It's a fully integrated browser-based experience available to try for free that has a built-in code editor, quantum simulator, and seamless code compilation.
  • Roadmap to Microsoft’s quantum supercomputer is now published along with peer-reviewed research demonstrating that we've achieved the first milestone.

Quantum Computing Implementation Levels

The path to quantum supercomputing is not unlike the path to today's classical supercomputers. The pioneers of early computing machines had to advance the underlying technology to improve their performance before they could scale up to large architectures. That's what motivated the change from vacuum tubes to transistors and then to integrated circuits. Fundamental changes to the underlying technology will also precipitate the development of a quantum supercomputer.

As the industry progresses, quantum hardware will fall into one of three categories of Quantum Computing Implementation Levels:

Level 1 Foundational: Quantum systems that run on noisy physical qubits, which includes all of today's Noisy Intermediate Scale Quantum (NISQ) computers.

Microsoft has brought these quantum machines, the world's best with the highest quantum volumes in the industry, to the cloud with Azure Quantum, including IonQ, Pasqal, Quantinuum, QCI, and Rigetti. These quantum computers are great for experimentation as an on-ramp to scaled quantum computing. At the Foundational Level, the industry measures progress by counting qubits and quantum volume.

Level 2 Resilient: Quantum systems that operate on reliable logical qubits.

Reaching the Resilient Level requires a transition from noisy physical qubits to reliable logical qubits. This is critical because noisy physical qubits cannot run scaled applications directly. The errors that inevitably occur will spoil the computation. Hence, they must be corrected. To do this adequately and preserve quantum information, hundreds to thousands of physical qubits will be combined into a logical qubit, which builds in redundancy. However, this only works if the physical qubits' error rates are below a threshold value; otherwise, attempts at error correction will be futile. Once this stability threshold is achieved, it is possible to make reliable logical qubits. However, even logical qubits will eventually suffer from errors. The key is that they must remain error-free for the duration of the computation powering the application. The longer the logical qubit is stable, the more complex an application it can run. To make a logical qubit more stable (or, in other words, to reduce the logical error rate), we must either increase the number of physical qubits per logical qubit, make the physical qubits more stable, or both. Therefore, there is significant gain to be made from more stable physical qubits, as they enable more reliable logical qubits, which in turn can run increasingly sophisticated applications. That's why the performance of quantum systems in the Resilient Level will be measured by their reliability, as quantified by logical qubit error rates.

Level 3 Scale: Quantum supercomputers that can solve impactful problems which even the most powerful classical supercomputers cannot.

This level will be reached when it becomes possible to engineer a scaled, programmable quantum supercomputer that will be able to solve problems that are intractable on a classical computer. Such a machine can be scaled up to solve the most complex problems facing our society. As we look ahead, we need to define a good figure of merit that captures what a quantum supercomputer can do. This measure of a supercomputer's performance should help us understand how capable the system is of solving impactful problems. We offer such a figure of merit: reliable Quantum Operations Per Second (rQOPS), which measures how many reliable operations can be executed in a second. A quantum supercomputer will need at least one million rQOPS.

Infographic: Quantum Computing Implementation Levels. Level 1: Foundational, noisy physical qubits; Level 2: Resilient, reliable logical qubits; Level 3: Scale, quantum supercomputers.

Measuring a quantum supercomputer

The rQOPS metric counts operations that remain reliable for the duration of a practical quantum algorithm so that there is an assurance that it will run correctly. As we shall see below, this metric encapsulates the full system performance (as opposed to solely the physical qubit performance) and combines three key factors that are critical for scaling up to execute valuable quantum applications: scale, reliability, and speed. 

A nonzero rQOPS first appears at Level 2, but it becomes meaningful at Level 3. To solve valuable scientific problems, the first quantum supercomputer will need to deliver at least one million rQOPS, with an error rate of at most 10⁻¹², or one error for every trillion operations. At one million rQOPS, a quantum supercomputer could simulate simple models of correlated materials, aiding in the creation of better superconductors, for example. In order to solve the most challenging commercial chemistry and materials science problems, a supercomputer will need to continue to scale to one billion rQOPS and beyond, with an error rate of at most 10⁻¹⁸, or one error for every quintillion operations. At one billion rQOPS, chemistry and materials science research will be accelerated by modeling new configurations and interactions of molecules.

Graph: applications for a quantum supercomputer, plotting rQOPS (y axis) against problem complexity (x axis).

Our industry as a whole has yet to achieve this goal, which can only happen once we transition from the NISQ era to achieving a reliable qubit. While today's quantum computers are all performing at an rQOPS value of zero, this metric quantifies where tomorrow's quantum computers need to be to deliver value.

Calculating rQOPS

A rQOPS is given by the number Q of logical qubits in the quantum system multiplied by the hardware's logical clock speed f:

rQOPS = Q × f.

It is expressed with a corresponding logical error rate p_L, which indicates the maximum tolerable error rate of the operations on the logical qubits.

The rQOPS accounts for the three key factors of scale, speed, and reliability: scale through the number of reliable qubits; speed through the dependence on the clock speed; and reliability through the encoding of physical qubits into logical qubits and the corresponding logical error rate p_L.

To facilitate calculating how many rQOPS an algorithm will require, we've updated the Azure Quantum Resource Estimator to output the rQOPS and pL for the user's choice of quantum algorithm and quantum hardware architecture. This tool enables quantum innovators to develop and refine algorithms to run on tomorrow's scaled quantum computers by revealing the rQOPS and run time required to run applications on different hardware architectures.
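
To make the arithmetic concrete, here is a minimal sketch of the rQOPS formula in Python. This is not the Resource Estimator itself, and the qubit count and clock speed below are illustrative assumptions, not figures for any specific machine:

```python
# rQOPS = Q * f: logical qubit count times logical clock speed.
# The numbers below are illustrative assumptions, not hardware figures.

def rqops(logical_qubits: int, logical_clock_hz: float) -> float:
    """Reliable quantum operations per second for a given machine."""
    return logical_qubits * logical_clock_hz

# A hypothetical machine with 1,000 logical qubits at a 1 kHz logical
# clock reaches the one-million-rQOPS mark discussed above:
print(f"{rqops(1_000, 1_000.0):.0e} rQOPS")  # 1e+06
```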

In the plots shown below, we illustrate the requirements (numbers of physical qubits and physical clock speed) needed for one million rQOPS with p_L = 10⁻¹² and for one billion rQOPS with p_L = 10⁻¹⁸. We plot these requirements for the two cases in which the underlying physical qubits have error rates of either 10⁻³ or 10⁻⁶.

Figure 1: Requirements to achieve 1M rQOPS, with a 10⁻¹² logical error rate and at least 1,000 reliable logical qubits. The physical hardware trade-offs between clock speed and qubits are shown for devices with physical error rates of 1/1,000 and 1/1,000,000.

Figure 2: Requirements to achieve 1G rQOPS, with a 10⁻¹⁸ logical error rate. The physical hardware trade-offs between clock speed and qubits are shown for devices with physical error rates of 1/1,000 and 1/1,000,000.

The first milestone towards a quantum supercomputer

A quantum supercomputer must be powered by reliable logical qubits, each of which is formed from many physical qubits. The more stable the physical qubit is, the easier it is to scale up because you need fewer of them. Over the years, Microsoft researchers have fabricated a variety of qubits used in many of today's NISQ computers, including spin, transmon, and gatemon qubits. However, we concluded that none of these qubits is perfectly suited to scale up. 

That's why we set out to engineer a brand-new qubit with inherent stability at the hardware level. It has been an arduous development path because it required a physics breakthrough that had eluded researchers for decades. Overcoming many challenges, we're thrilled to share that a peer-reviewed paper, published in Physical Review B, a journal of the American Physical Society, establishes that Microsoft has achieved the first milestone towards creating a reliable and practical quantum supercomputer.

In this paper we describe how we engineered a device in which we can controllably induce a topological phase of matter characterized by Majorana Zero Modes (MZMs).

The topological phase can enable highly stable qubits with small footprints, fast gate times, and digital control. However, disorder can destroy the topological phase and obscure its detection. Our paper reports on devices with low enough disorder to pass the topological gap protocol (TGP), thereby demonstrating this phase of matter and paving the way for a new stable qubit. The published version of the paper shows data from additional devices measured after initial presentations of this breakthrough. We have added extensive tests of the TGP with simulations that further validate it. Moreover, we have developed a new measurement of the disorder level in our devices which demonstrates how we were able to accomplish this milestone and has seeded further improvements.

To learn more about this accomplishment, you can read the paper, analyze the data yourself in our interactive Jupyter notebooks, and watch this summary video.

The Microsoft roadmap to a quantum supercomputer

1. Create and control Majoranas: Achieved. 

2. Hardware-protected qubit: The hardware-protected qubit (historically referred to as a topological qubit) will have built-in error protection. This unique qubit will scale to support reliable logical qubits, and will enable engineering of a quantum supercomputer because it will be: 

  • Small: Each of our hardware-protected qubits will be less than 10 microns on a side, so one million can fit in the area of the smart chip on a credit card, enabling a single-module machine of practical size. 
  • Fast: Each qubit operation will take less than one microsecond. This means problems can be solved in weeks rather than decades or centuries. 
  • Controllable: Our qubits will be controlled by digital voltage pulses to ensure that a machine with millions of them doesn't have an excessive error rate or require unattainable input/output bandwidth.  

3. High quality hardware-protected qubits: Hardware-protected qubits that can be entangled and operated through braiding, reducing error rates with a series of quality advances. 

4. Multi-qubit system: A variety of quantum algorithms can be executed when multiple qubits operate together as a programmable Quantum Processing Unit (QPU) in a full stack quantum machine. 

5. Resilient quantum system: A quantum machine operating on reliable logical qubits, that demonstrates higher quality operations than the underlying physical qubits. This breakthrough enables the first rQOPS.

6. Quantum supercomputer: A quantum system capable of solving impactful problems that even the most powerful classical supercomputers cannot, with at least one million rQOPS and an error rate of at most 10⁻¹² (one error in a trillion operations).

We will reach Level 2, Resilient, of the Quantum Computing Implementation Levels at our fifth milestone and will achieve Level 3, Scale, with the sixth.

Join the journey

Today marks an important moment on our path to engineering a quantum supercomputer and ultimately empowering scientists to solve many of the hardest problems facing our planet. To learn more about how we're accelerating scientific discovery with Azure Quantum, check out the virtual event with Satya Nadella, Microsoft Chairman and Chief Executive Officer, Jason Zander, Executive Vice President of Strategic Missions and Technologies, and Brad Smith, Vice Chair and President. To follow our journey and get the latest insider news on our hardware progress, register here.

The post Microsoft achieves first milestone towards a quantum supercomputer appeared first on Microsoft Azure Quantum Blog.

Microsoft Quantum researchers make algorithmic advances to tackle intractable problems in physics and materials science

In a paper recently published in PRX Quantum, Microsoft Azure Quantum researchers Guang Hao Low and Yuan Su, with collaborators Yu Tong and Minh Tran, have developed faster algorithms for quantum simulation. One of the most promising applications of quantum computers is to simulate systems governed by the laws of quantum mechanics. Efficient quantum simulations have the potential to revolutionize many fields, including materials science and chemistry, where problems with high industrial relevance can be intractable using today's supercomputers. Realizing this promise will require not only experimental progress, but also algorithmic advances that reduce the required quantum hardware resources. Doing so helps prepare our future scaled quantum computers to tackle challenging computational problems in the real world.

In their paper, Complexity of Implementing Trotter Steps, the authors improve upon pre-existing algorithms that rely on the so-called product formula methods, which date back to the 1990s when the first quantum simulation algorithm was proposed. The underlying idea is quite straightforward: we can simulate a general Hamiltonian system by simulating its component terms one at a time. In most situations, this only leads to an approximate quantum simulation, but the overall accuracy can be made arbitrarily high by repeating such Trotter steps sufficiently frequently.

Overcoming the complexity barrier

So, what are the resources needed to run this algorithm on a quantum computer? The algorithm repeats an elementary Trotter step multiple times, hence the total complexity is given by the number of repetitions multiplied by the cost per step, the latter of which is further determined by the number of terms in the Hamiltonian. Unfortunately, this is not very attractive for long-range quantum systems, as the number of terms involved can be too big to be practical. Consider, for instance, a system with all-to-all interactions. If the size of the system is N, then the number of terms is N², which also quantifies the asymptotic cost of Trotter steps. As a result, we are basically paying a quadratically higher cost to solve a simulation problem of just linear size. This issue becomes even worse for more general systems with many-body interactions. The question to ask, then, is: is there a better implementation whose cost does not scale with the total number of Hamiltonian terms, overcoming this complexity barrier?

The answer to this question, as the paper shows, is twofold. If terms in the Hamiltonian are combined with arbitrary coefficients, then this high degree of freedom must be captured by any accurate quantum simulation, implying a cost proportional to the total term number. However, when the target Hamiltonian is structured with a lower degree of freedom, the paper provides a host of recursive techniques to lower the complexity of quantum simulation. In particular, this leads to an efficient quantum algorithm to simulate the electronic structure Hamiltonian, which models various important systems in materials science and quantum chemistry.

Recursive techniques have played an essential role in speeding up classical algorithms, such as those for sorting, searching, large integer and matrix multiplication, modular exponentiation, and Fourier transformations. Specifically, given a problem of size N, we do not aim to solve it directly; instead, we divide the target problem into M subproblems, each of which can be seen as an instance of the original one with size N/M and can be solved recursively using the same approach. This implies that the overall complexity C(N) satisfies the relation C(N) = M × C(N/M) + f(N), with f(N) denoting the additional cost to combine solutions of the subproblems. Mathematical analysis yields that, under certain realistic assumptions, the overall complexity C(N) has the same scaling as the combination cost f(N) up to a logarithmic factor, a powerful result sometimes known as "the master theorem." However, combining solutions can be much easier to handle than solving the full problem, so recursions essentially allow us to simplify the target problem almost for free!
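
As a toy illustration of that divide-and-conquer accounting, the recurrence can be evaluated numerically. This is a generic sketch of the recurrence itself, not the paper's Trotter-step construction:

```python
# Toy evaluation of the recurrence C(N) = M * C(N/M) + f(N); a generic
# divide-and-conquer sketch, not the paper's algorithm.

def cost(n: int, m: int, f, base: int = 1) -> float:
    """Total cost of a size-n problem split recursively into m subproblems."""
    if n <= base:
        return 1.0
    return m * cost(n // m, m, f) + f(n)

# With M = 2 and linear combination cost f(N) = N, the total grows as
# N log N -- the same regime as mergesort or the FFT.
for n in (2**10, 2**16, 2**20):
    print(n, cost(n, 2, lambda k: float(k)))
```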

Given the ubiquitous nature of recursions in classical computing, it is somewhat surprising that there were not many recursive quantum algorithms available. The paper from Low, Su, and collaborators develops recursive Trotter steps with a much lower implementation cost, suggesting the use of recursion as a promising new way to reduce the complexity of simulating many-body Hamiltonians.

Quantum solutions

The paper's result applies to a variety of long-range interacting Hamiltonians, including the Coulomb interaction between charged particles and the dipole-dipole interaction between molecules, both of which are ubiquitous in materials science and quantum chemistry, a primary target application of quantum computers. In physics, impressive control in recent experiments with trapped ions, Rydberg atoms, and ultracold atoms and polar molecules has made it possible to study new phases of matter, which contributes to a growing interest in simulating such systems.

This research is part of the larger quantum computing effort at Microsoft. Microsoft has long been at the forefront of the quantum industry, serving as a pioneering force in the development of quantum algorithms tailored for simulating materials science and chemistry. This includes earlier efforts using quantum computers to elucidate reaction mechanisms in complex chemical systems targeting the open problem of biological nitrogen fixation in nitrogenase, as well as more recent quantum solutions to a carbon dioxide fixation catalyst with more than one order of magnitude savings in the computational cost.

The new results from the current work represent Microsoft's continuing progress to develop solutions for classically intractable problems on a future quantum machine with Azure Quantum.

The post Microsoft Quantum researchers make algorithmic advances to tackle intractable problems in physics and materials science appeared first on Microsoft Azure Quantum Blog.

Unlocking the power of Azure for Molecular Dynamics

More than ever, scientists need new technologies to help solve many of the most pressing issues facing society, like reversing climate change, addressing food insecurity, and developing lifesaving therapeutics. Fundamentally, these problems are chemistry and materials science challenges, and some will require the transformational power of a scaled quantum computer. However, given the advancements of classical computing services in the cloud, scientists can start to make rapid progress towards solving these problems today.

Molecular dynamics (MD), the simulation of molecular interactions, is one computational problem that pushes the boundaries of what is possible with today's high-performance computing platforms. More powerful platforms for molecular dynamics simulations could unlock the development of new materials, new drugs, and more efficient batteries, so a team of Azure Quantum scientists recently set out to ask a fundamental question: what are Azure's capabilities for these types of simulations? Here's what we learned and how any scientist can use Azure to drive similar results today.

Visual representation of a molecular dynamics simulation (in this example, Satellite Tobacco Mosaic Virus, PDB ID 7M3T).
Images generated using the Mol* 3D viewer on RCSB PDB 3D View.

Azure provides the latest in high-performance computing

Molecular dynamics calculations pose unique high-speed communication challenges which require state-of-the-art computing capabilities. The Microsoft Azure cloud architecture helps researchers overcome these hurdles by allowing them to take advantage of the latest software and hardware developments required for chemistry and materials science research. By simplifying the provisioning of the necessary high-performance computing (HPC) resources, Azure helps scientists rapidly deploy and execute complex simulations of the structure and dynamics of macromolecules. This significantly accelerates chemical and materials innovation, for example enabling the creation of better pharmaceuticals by modeling biomolecules and their relevant properties at a faster pace.

Azure high-performance computing engineers have made significant progress in advancing the scale and networking speeds of the cloud platform. The latest Azure virtual machines use InfiniBand for low-latency communication across distributed nodes for differentiated scalability and performance gains. Our team has demonstrated excellent parallel efficiency and an increase of over 200 percent in benchmark simulations compared to previous virtual machines, particularly for larger simulations.

Customers are already taking advantage of Azure cloud HPC capabilities today. You can read more here.

"With Azure HPC, we've seen about a 50 percent speedup on some of our chemistry calculations that we run, which is critical for R&D because every second counts, not just for getting the results quickly, but also in terms of cost and throughput," said Glenn Jones, Research Manager at Johnson Matthey Technology Center.

Simulating complex molecular dynamics in Azure enables R&D acceleration

Molecular dynamics simulations translate atomic-scale forces and energies into molecular motion and are an important tool for both life sciences and materials science research. In life sciences, for example, molecular dynamics simulations are used to understand proteins, ligands, and their associated properties, which can be used to accelerate the discovery of better pharmaceuticals by modeling drug molecules and their relevant protein binding sites.

Molecular dynamics workloads stand to greatly benefit from specialized HPC systems whose architectures use both graphics processing units (GPUs) and central processing units (CPUs) with high-speed interconnects between them. Optimized MD simulations pose unique computational challenges related to time scale, sampling, and analysis, requiring powerful computing nodes with low-latency communications, a task for which Azure is uniquely suited.

Time scale and sampling

Biologically relevant events occur across a wide range of timeframes, from fractions of a second to decades. Consider an example event that occurs within 1/1,000th of a second. While this may seem short in real time, it represents a massive computational workload. To capture relevant chemical properties in these simulations, the system needs to compute the position and momentum of all the atoms very frequently. Often, these properties are calculated every 10⁻¹² seconds (every picosecond) for all atoms in the system. This means one would have to carry out at least a billion calculation steps to perform simulations on the same timescales as the event of interest.
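
A quick back-of-the-envelope check of that step count, as a minimal sketch using the timescales quoted above:

```python
# Back-of-the-envelope MD step count, using the timescales quoted above.
event_duration_s = 1e-3   # biologically relevant event: 1/1,000th of a second
step_interval_s = 1e-12   # properties recomputed every picosecond

steps = event_duration_s / step_interval_s
print(f"{steps:.0e} steps")  # 1e+09 -> a billion calculation steps
```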

Powerful computing resources in Azure make this computational problem more tractable. In addition to requiring state-of-the-art processors, including both CPUs and GPUs, to accurately evaluate energies and forces, scalable molecular dynamics simulations need high-performance communication networks, because each calculation step requires messages to be passed between processors to communicate force and energy information. As the number of processors used in the simulation increases, so too does the need for faster communication, since simulation performance is extremely sensitive to the speed at which information can be passed between distributed nodes.

Analysis

The analysis of molecular dynamics simulation trajectories also requires high-performance computing methods. To understand the simulation results, scientists must apply compute-intensive methods to analyze large volumes of trajectory data. This analysis requires advanced statistical methods, high-performance computing platforms, and chemists' expert knowledge to interpret results.

InfiniBand acceleration is critical to molecular dynamics simulations

Microsoft is committed to making the most advanced computing resources available through Azure cloud services. The Azure cloud HPC platform allows researchers to take advantage of InfiniBand on HB Series virtual machines, which enables low-latency communication across distributed nodes for differentiated scalability and performance boosts.

We've seen strong results using Azure for molecular dynamics:

  • When simulating a benchmark model for Satellite Tobacco Mosaic Virus (STMV) with 20 million atoms, our latest high-bandwidth (HB) VMs (v3) outperformed previous versions of HB VMs by 218 percent to 251 percent, while also reducing the cost per nanosecond by a third (32 percent to 36 percent).
  • And while these simulations scale well on CPUs, they also scale on GPUs, a configuration also supported by Azure. As we continue to bring new hardware and software to Azure HPC, further benchmark details will be published.
  • Overall, these benchmarks illustrate Azure's continued capability to reduce the time and cost of complex simulation workloads by utilizing state-of-the-art configurations, such as VMs with InfiniBand technology.

Azure’s unique capabilities extend far beyond the life sciences and can be applied to many other high-performance and computing-intensive workflows, including those in materials science and chemical physics.

The importance of AI and future impact of scaled quantum computing

Increasingly, we see great potential to accelerate chemistry and materials advances by integrating Azure's scaled HPC solutions with the speed of groundbreaking AI models tuned for scientific research. At Microsoft, we have been exploring a full breadth of AI capabilities for decades with our internal research teams. With the broad range of AI tools in Azure, innovators can design workflows which harness AI models to sort through massive data sets and subsequently use HPC-based simulation insights to narrow those results. These scenarios are only possible with the deep integration of AI and HPC in Azure today, which will also include the power of quantum at scale to help researchers improve model accuracy in the future.

Many of the world’s most pressing problems require advanced computing and the ability to simulate complex systems, because many physical interactions and natural processes are too difficult to study with classical computation at sufficient levels of accuracy. For this reason, scaled quantum computers must be part of the architecture of the future. Since quantum mechanics explains the behavior of matter and energy on the smallest possible scale, the scale of atoms and subatomic particles, quantum computers are inherently capable of understanding and predicting the complexities of nature, like those in chemistry and materials science.

Scaled quantum computing will deliver breakthrough accuracy in modeling the forces and energies of such systems, allowing insights into spaces that are currently intractable to explore. Microsoft is focused on engineering a scaled quantum machine with these capabilities right now. However, with the world’s future possibly in the balance, we constantly ask the question: “How can we empower scientists to accelerate progress today?”

Start accelerating innovation today with Azure

At Microsoft, we continue to invest more in our cloud HPC infrastructure so that innovators can accelerate the pace of research and discovery, both within chemical and materials applications and beyond.

We're excited to see how the power of the Azure cloud will help you.

The post Unlocking the power of Azure for Molecular Dynamics appeared first on Microsoft Azure Quantum Blog.

Quantum Advantage: Hope and Hype

Efforts to research and develop quantum computing have advanced far enough in the past decade to allow us to separate hype from reality and truly understand where quantum computing will have the biggest impact. At Microsoft, we believe that it will take the world's collective genius to realize the full potential of quantum computing. Our goal is to both empower and help focus that genius in areas that show the greatest opportunity. To that end, Microsoft collaborated with Torsten Höfler, a leader in high-performance computing, on new guidance for quantum application development that was just published: Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage.

Graph: time needed to solve a problem on a classical computer versus a quantum computer; the classical curve rises far more steeply (years to solve) than the quantum curve (weeks to solve).

The promise of quantum computing at scale is real. It will solve some of the hardest challenges facing humanity. However, it will not solve every challenge. There is an ever-growing list of applications being explored for quantum computing today ranging from logistics, cosmology and financial market prediction to carbon capture, big data analysis, biochemistry, and many more. It's clear that business, academic, and government leaders are turning to our industry with great hope. However, such optimism needs to be measured. The areas where quantum will have its biggest impact are coming more clearly into focus. The fundamentals of quantum physics govern which problems can benefit from the capabilities of quantum systems. Specifically, finding algorithms that present a significant advantage for quantum systems to justify their additional cost is the first hurdle. In addition, problems cannot rely on massive quantities of data, as getting data into and out of quantum systems presents a major bottleneck.

Quantum computers have different characteristics than classical systems, and so are useful for a different set of problems. Opportunities for practical quantum advantage, or quantum practicality, are relatively rare because of the specific characteristics that make a quantum computer effective.

A framework for quantum practicality

How can we determine which problems will benefit most from quantum computers?

To find out, we created a general model of quantum computer capabilities and shortcomings. We assume that researchers will be able to create a scaled quantum computer, and we compare its hypothetical performance to that of one of today's classical computers with a single state-of-the-art GPU. We assume the scaled quantum computer will have 10,000 fast, error-corrected logical qubits, or about one million physical qubits (a machine like the one Microsoft is engineering right now).

A major benefit of quantum systems is that they use the quantum foundations of nature, which results in much greater efficiency, and thus fewer operations, when applied to a specific subset of problems compared to a classical computer. However, as quantum computers take a longer time for each operation, a quantum algorithm must offer significant speedups to overcome the larger complexity of each operation. But what does that mean concretely?

For our analysis, we set a break-even point of two weeks, meaning a quantum computer should be able to perform better than a classical computer on problems that would take the quantum computer no more than two weeks to solve. Comparing the hypothetical future quantum computer to a single classical GPU available today, one finds that more than quadratic speedup, and ideally super-polynomial speedup, is needed. This is a significant finding, since many proposed applications of quantum computing rely on the quadratic speedup of specific algorithms, such as Grover's algorithm.
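
One rough way to see why, sketched below under purely illustrative assumptions (the operation rates and the per-iteration oracle cost are ours, not figures from the paper): compare N classical operations on a fast, parallel GPU against √N much slower quantum iterations, and ask how long a computation must run before the quantum machine pulls ahead.

```python
# Rough crossover estimate for a quadratic (Grover-like) speedup.
# Every number below is an illustrative assumption, not a figure from
# the paper.
classical_ops_per_s = 1e13   # assumed throughput of one modern GPU
quantum_ops_per_s = 1e6      # assumed logical-operation rate at scale
oracle_ops = 1e4             # assumed logical ops per Grover iteration
two_weeks_s = 14 * 24 * 3600

# Classical time N / R_c equals quantum time sqrt(N) * k / R_q when
# sqrt(N) = R_c * k / R_q; below that problem size the GPU simply wins.
sqrt_n = classical_ops_per_s * oracle_ops / quantum_ops_per_s
crossover_runtime_s = sqrt_n**2 / classical_ops_per_s

print(f"crossover runtime: {crossover_runtime_s:.0e} s, "
      f"about {crossover_runtime_s / two_weeks_s:.0f} two-week budgets")
```

With these assumed parameters the crossover arrives only after decades of runtime, far past the two-week budget, which is why a merely quadratic speedup falls short.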

In addition, again due to the larger complexity of each operation, quantum computers have limited bandwidth to run operations. In fact, even a scaled quantum computer will only be able to handle 1/10,000th of the bandwidth of specialized computer processors, such as the graphics processing units often used for machine learning. Because of the limited data sizes that quantum computers can process, this "quantum speedup" must happen with relatively simple forms of the problem of interest.

This combination of specific algorithms that exhibit a great enough quantum speedup and problems that can be expressed with a limited amount of data determines the boundaries of what current quantum systems are capable of and what scaled systems might achieve in the future. Applying these optimistic assumptions for quantum computers to different application domains gives us a way to determine where quantum computers will have the greatest impact in the future.

Computational chemistry and materials science will benefit tremendously from scaled quantum computing

One of the ideal applications for quantum computing is in simulating chemical interactions and materials on the quantum level.

Many problems facing the world today boil down to chemistry and materials science problems. Better and more efficient electric vehicles rely on finding better battery chemistries. More effective and targeted cancer drugs rely on computational biochemistry. And materials that can last long enough to be useful, but then biodegrade quickly afterwards, rely on discoveries in these fields.

If quantum computers only benefited chemistry and materials science, that would be enough. There is a reason that the major eras of innovation (Stone Age, Bronze Age, Iron Age, Silicon Age) are named for materials. Innovations in chemistry and materials science are estimated to have an impact on 96 percent of all manufactured goods, which impact 100 percent of humanity.

Our framework for evaluating quantum practicality shows that chemistry and material science problems benefit from quantum speedup because activities like simulating the interactions of a single chemical can be represented by a limited number of interaction strengths between electrons in their orbitals. While many approximate calculations of their properties are routinely performed, the operations are exponentially complex on classical computers, but efficient on quantum computers, falling within our stated guidelines.

While scaled quantum computing is required to solve the hardest, most complex chemistry and materials science problems, progress can be made today with Azure high-performance computing. For example, Johnson Matthey and Microsoft Azure Quantum chemists have accelerated some quantum chemistry calculations in their search for hydrogen fuel cell catalysts by combining high-performance computing and specific quantum functions to reduce the turnaround time for their scaled workloads from six months to a week.

Where to focus innovation

Our research revealed that applications relying on large datasets are better served by classical computing, because quantum systems' bandwidth is too low for tasks such as searching databases or training machine-learning models. These include drug design approaches and protein folding that rely on Grover’s algorithm, as well as other simulations, such as weather and climate prediction, that rely on large systems of equations.

The limits of getting data into and out of quantum computers will limit the applicability of the systems to more compact problems. Anybody focused on building applications like these will most likely come to find the best results with Azure HPC and AI services. For quantum innovators who are focused on researching new quantum applications beyond chemical and materials science, we are eager to learn what you discover. To support your exploration, we provide the tools, like our Resource Estimator, as well as the skills and state-of-the-art hardware with Azure Quantum.

I'm motivated to see the benefits of quantum realized in my lifetime; it is a gift I want to give my kids and their kids. That desire animates my life's work. To make real progress, though, as I said earlier, it will take our collective genius. The more focus scientists can apply to the areas that show the most promise, the sooner we'll be able to reap the benefits of quantum at scale. Therefore, I'm calling on this community to double down on learning about and researching applications related to computational chemistry and materials science.

Accelerate scientific discovery with Azure Quantum

We invite anyone interested in accelerating their chemistry and materials science R&D to join us. To learn more, please check out the latest edition of the Microsoft Quantum Innovator Series: How can we solve quantum problems today and in the future?

If you are interested in meeting with our chemists and quantum architects to learn how our platform can accelerate your R&D, please reach out to the Azure Quantum team at QuantumInnovation@microsoft.com.

The post Quantum Advantage: Hope and Hype appeared first on Microsoft Azure Quantum Blog.

Microsoft and Johnson Matthey join forces to speed up hydrogen fuel cell innovation with Azure Quantum

Johnson Matthey and Microsoft Azure Quantum chemists have teamed up to drive new discoveries in sustainable energy. So far, Johnson Matthey has seen a two-fold acceleration in quantum chemistry calculations, and we're just getting started. Both companies recognize that the discoveries needed to create a zero-carbon future will require significant breakthroughs in chemical and materials science, and are enthusiastic about the difference we can make in the world together.

Many of the hardest problems facing society, like reversing climate change and addressing food insecurity, are chemistry and materials science problems. Progress in this space will impact 96 percent of products and 100 percent of humanity. With continuous increases in classical computing capabilities, more quantum chemistry problems can be solved today using state-of-the-art cloud high-performance computing (HPC) services. However, the hardest chemistry and materials science problems will require a scaled quantum machine like the one Microsoft is engineering right now. Ultimately, the impactful applications of the future will be a hybrid of HPC, AI, and quantum. Our goal is to empower scientists to compress the next 250 years of chemistry discovery into the next 25. To start, we are learning and accelerating innovation alongside the pioneers in this space who are eager to make an impact today and prepare for quantum computing tomorrow, pioneers like Johnson Matthey.

Johnson Matthey is a global leader in sustainable technologies, with over 200 years of commitment to innovation and technological breakthroughs in transport, energy, and chemical processing. For example, one in three cars on the road use a Johnson Matthey catalyst in the exhaust system, helping to reduce harmful emissions. Johnson Matthey is collaborating with Azure Quantum's chemists to develop new predictive modeling tools with the supercomputing capabilities of Azure HPC and refined workflows to accelerate chemical simulations, explore the potential of AI, and get quantum ready. The team has been able to accelerate certain quantum chemistry calculations and reduce the turnaround time for their scaled workloads from six months to a week. These capabilities are transforming the pace of Johnson Matthey's computational chemistry and materials science research and development (R&D).

The search for better hydrogen fuel cell catalysts

One key area of Johnson Matthey's sustainable technologies R&D is finding better catalysts for hydrogen fuel cells that power trucks and buses. Electrocatalysts are materials that facilitate electrochemical reactions in fuel cells, helping to use hydrogen fuel to produce electricity. The most effective catalyst for hydrogen fuel cells today is platinum, which is rare and very expensive. The company has expanded its groundbreaking digital research in electrocatalysts to develop alternative alloy catalysts that use less platinum, to drive down the cost of fuel cell technology. This research requires significant computational resources to simulate complex atomic interactions within materials.

Accelerating quantum chemistry discoveries

Johnson Matthey turned to Azure Quantum's chemists to help them explore new predictive modeling tools, native to Azure, that could accelerate nanoparticle simulations for discovering new catalysts. Using these tools with the supercomputing capabilities of Azure HPC, the team has substantially increased the throughput of the calculations needed to understand and design new electrocatalyst materials.

"With Azure HPC, we've seen about a 50 percent speedup on some of our chemistry calculations that we run, which is critical for R&D because every second counts, not just for getting the results quickly, but also in terms of cost and throughput," said Glenn Jones, Research Manager, Johnson Matthey Technology Center.

Azure provides supercomputing scale and acceleration

These breakthrough acceleration capabilities are made possible through state-of-the-art Azure HPC hardware that is uniquely suited to chemistry and materials science workloads. The scale of the Azure HPC cloud allows massively parallel calculations for chemical and materials science discovery while InfiniBand-connected CPU/GPU architectures accelerate tightly coupled molecular simulation workloads. Workflow tools like AiiDA can help users harness the power of the cloud while managing its complexity and the scale of data from massively parallel calculations. Additionally, Azure deployment tools can be used to quickly stand up and manage computational chemistry and materials science environments in the cloud.

Preparing for future scaled quantum systems

Azure is not only providing supercomputing scale and simulation acceleration; it is also enabling Johnson Matthey to prepare for a quantum future, with hybrid workflows and codes to tackle even more ambitious problems and innovate faster when scaled quantum systems become available.

"Since we're using Azure, we are quantum-ready and on the path to tap into the power of quantum computers once available in the cloud through Azure Quantum. This will revolutionize how we conduct chemistry simulations," said Glenn Jones, Research Manager, Johnson Matthey Technology Center.

Learn more about chemical and materials science innovation

This collaboration with Johnson Matthey is part of Microsoft's dedicated efforts to accelerate scientific discovery by empowering chemists and other scientists with state-of-the-art cloud services tailored specifically to solve quantum problems. We welcome Johnson Matthey to our community of quantum thought leaders and invite other enterprises who want to explore the potential for Azure cloud services and quantum computing to innovate and dramatically improve their chemistry and materials development to join our next Microsoft Quantum Innovator series event.

If you are interested in meeting with our chemists and quantum architects to learn how our platform can accelerate your R&D, please reach out to the Azure Quantum team at QuantumInnovation@microsoft.com.

The post Microsoft and Johnson Matthey join forces to speed up hydrogen fuel cell innovation with Azure Quantum appeared first on Microsoft Azure Quantum Blog.

Microsoft is harnessing the power of the cloud to make the promise of quantum at scale a reality

8 March 2023 at 17:00

Today, Microsoft announced a significant quantum advancement and made our new Integrated Hybrid feature in Azure Quantum available to the public. This new functionality enables quantum and classical compute to integrate seamlessly together in the cloud, a first for our industry and an important step forward on our path to quantum at scale. Now, researchers can begin developing hybrid quantum applications, with a mix of classical and quantum code together, that run on one of today's quantum machines, Quantinuum, in Azure Quantum.

Infographic: a three-part Venn diagram with circles labeled "Artificial Intelligence," "Quantum Computing," and "HPC Automation."

Classical computing has come a long way over the past century to be extraordinarily versatile and has transformed every industry. Even though it will continue to advance, there are certain problems it will never be able to solve. For computational problems that require closely modeling the phenomena of quantum physics, quantum computers will complement classical computers, creating a hybrid architecture that leverages the best characteristics of each design.

The quantum industry has long understood that quantum computing will always be a hybrid of classical and quantum compute. In fact, it was a key discussion point during this week's annual American Physical Society (APS) March Meeting in Las Vegas. However, our industry is just starting to grapple with, and design for, the future of hybrid classical and quantum compute at scale in the public cloud. At Microsoft, we are architecting a public cloud with Azure that enables scaled quantum computing to become a reality and then seamlessly delivers the profound benefits of it to our customers. In essence, AI, high-performance computing, and quantum are being co-designed as part of Azure, and this integration will have an impact in three important and surprising ways in the future.

1. The power of the cloud will unlock scaled quantum computing

Quantum at scale is required for scientists to help solve the hardest, most intractable problems our society faces, like reversing climate change and addressing food insecurity. Based on what we know today, largely through our resource estimation work, a machine capable of solving such problems will require at least one million stable and controllable qubits. Microsoft is making progress on a machine capable of this scale every day.

A fundamental part of our plan to reach scale is to integrate our quantum machine alongside supercomputing classical machines in the cloud. A driving force of this design is the reality that the power of the cloud is required to run a fault-tolerant quantum machine. Achieving fault tolerance requires advanced error correction techniques, which basically means making logical qubits from physical qubits. While our unique topological qubit design will greatly enhance our machine's fault tolerance, advanced software and tremendous compute power will still be required to keep the machine stable.

In fact, to achieve fault tolerance, our quantum machine will be integrated with peta-scale classical compute in Azure and will need to handle bandwidths between quantum and classical compute of 10 to 100 terabits per second. At every logical clock cycle of the quantum computer, we need this back and forth with classical computers to keep the quantum computer “alive” and yielding a reliable output solution. You may be surprised by this throughput requirement, but what fault tolerance means for quantum computing at scale is that a machine has to be able to perform a quintillion operations while making at most one error.

To put this number into perspective, imagine each operation was a grain of sand. For the machine to be fault tolerant, only a few grains out of all the sand on Earth could be faulty. Clearly, this type of scale is only enabled by the cloud, making Azure both a key enabler and differentiator of Microsoft's strategy to bring quantum at scale to the world.
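
In numbers, that budget is easy to state; a trivial check using the figures above:

```python
# The fault-tolerance budget stated above: a quintillion (1e18)
# operations with at most one error.
operations = 1e18
max_errors = 1
print(f"required logical error rate <= {max_errors / operations:.0e}")  # 1e-18
```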

2. The rise of classical compute capabilities in the cloud can help scientists solve quantum mechanical problems today

An incredible benefit of the rise of classical public cloud services is that scientists are able to achieve more at lower costs right now through the power of the cloud. For example, scientists from Microsoft, ETH Zurich, and the Pacific Northwest National Laboratory have recently presented a new automated workflow to leverage the scale of Azure to transform R&D processes in quantum chemistry and materials science. By optimizing the classical simulation code and re-factoring it to be cloud-native, the team achieved a 10-fold cost reduction for the simulation of a catalytic chemical reaction. These benefits will continue to grow as classical compute capabilities across the cloud advance even further.

Increasingly, we see great potential for high-performance computing and AI to accelerate advancements in chemistry and materials science. Near term, Azure will empower R&D teams with scale and speed. And long term, when we bring our scaled quantum machine to Azure, it will enable greater accuracy in modeling new pharmaceuticals, chemicals, and materials. The opportunity to unlock progress and growth is tremendous when you consider that chemistry and materials science impact 96 percent of manufactured goods and 100 percent of humanity. The key is to move to Azure now to both accelerate progress and future-proof your investments, as Azure is the home of Microsoft's incredible AI and high-performance computing capabilities today, and for our scaled quantum machine in the future.

3. A hyperscale cloud with AI, HPC, and quantum will create unprecedented opportunities for innovators

It is only when a quantum machine is designed alongside of, and integrated with, the AI supercomputers and scale of Azure, that we will be able to realize the greatest impacts from computing. With Azure, innovators will be able to design and execute a new class of impactful cloud applications that seamlessly bring together AI, HPC, and quantum at scale. For example, imagine the impactful applications in the future that will enable researchers with the scale of AI to sort through massive data sets, the insights from HPC to narrow down options, and the power of quantum at scale to improve model accuracy. These scenarios will only be possible in one application because of the seamless integration of HPC, AI, and quantum in Azure. Realizing this unprecedented opportunity requires advancing this deep integration in Azure today. As we bring HPC and AI together for advanced capabilities, we are also expanding the classical and quantum integration available right now.

Today, Microsoft took a significant step forward towards this vision by making our new Integrated Hybrid feature in Azure Quantum available to the public.

The ability to develop hybrid quantum applications with a mix of classical and quantum code together will empower today's quantum innovators to create a new class of algorithms. For example, developers can now build algorithms with adaptive phase estimation that take advantage of classical computation performed while physical qubits are still coherent, iterating and adapting mid-execution. Students can start learning algorithms without drawing circuits, using high-level programming constructs such as branching based on qubit measurements (if statements), loops (for), calculations, and function calls. Additionally, scientists can now more easily explore ways to advance quantum error correction at the physical level on real hardware. Taken together, a new generation of quantum algorithms and protocols that could previously only be described in scientific papers can now run elegantly on quantum hardware in the cloud. A major milestone on the journey to scaled quantum computing has been achieved. 
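
For a flavor of what mixing classical control flow with quantum operations can look like, here is a minimal sketch using the azure-quantum Python package's Qiskit interface. The workspace identifiers are placeholders, and support for the `if_test` dynamic-circuit syntax depends on your Qiskit version and chosen target, so treat this as an illustration rather than a recipe:

```python
# Minimal sketch of a hybrid circuit: measure mid-circuit, then branch
# classically on the outcome while the remaining qubit is still coherent.
# Workspace identifiers are placeholders; dynamic-circuit support and the
# exact syntax depend on your Qiskit version and the chosen target.
from azure.quantum.qiskit import AzureQuantumProvider
from qiskit import QuantumCircuit

provider = AzureQuantumProvider(
    resource_id="<your-workspace-resource-id>",  # placeholder
    location="<your-region>",                    # placeholder
)
backend = provider.get_backend("quantinuum.sim.h1-1e")  # Quantinuum emulator

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.measure(0, 0)                     # mid-circuit measurement
with qc.if_test((qc.clbits[0], 1)):  # classical branch on the outcome...
    qc.x(1)                          # ...applied while qubit 1 is coherent
qc.measure(1, 1)

job = backend.run(qc, shots=100)
print(job.result().get_counts())
```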

Learn more about Azure Quantum

Azure is the place where all of this innovation comes together, ensuring your investments are future-proof. It's the place to be quantum-ready and quantum-safe, and as the cloud scales, so will your opportunity for impact. Please join Mark Russinovich, Chief Technology Officer and Technical Fellow of Azure at Microsoft, and me as we explore the future of the cloud in an upcoming Microsoft Quantum Innovation Series event.

Image: headshots of Dr. Krysta Svore and Mark Russinovich above the question “How will a hybrid of classical and quantum compute in the cloud enable a new class of applications?”, promoting the Microsoft Quantum Innovation Series event on May 16, 2023.

The post Microsoft is harnessing the power of the cloud to make the promise of quantum at scale a reality appeared first on Microsoft Azure Quantum Blog.

Quantum information science momentum accelerates in the Pacific Northwest

14 February 2023 at 18:00

Nearly 275 quantum computing enthusiasts convened in January for the Northwest Quantum Nexus Summit around a shared mission to accelerate quantum information science (QIS) research, co-innovation, and workforce development in the Pacific Northwest. The Northwest Quantum Nexus (NQN) coalition brought together 50 speakers and panelists from over 20 organizations for the two-day summit at the University of Washington (UW).

“The time is right for regions across the U.S. to ignite around quantum innovation. In the past year alone, the number of U.S. quantum professionals on LinkedIn grew 36% with 3 of the top 5 employers of quantum professionals in the U.S. posting double-digit growth of their talent pools*. The Pacific Northwest is ripe with candidates from the fields of study that most quantum professionals on LinkedIn have in their backgrounds: physics, computer science, computational science, math, and electrical and electronics engineering.”*

Nick DePorter, Senior Lead Manager, U.S. Public Policy and Economic Graph at LinkedIn.
*Source: LinkedIn Talent Insights

Microsoft and fellow NQN founding members Pacific Northwest National Laboratory and the University of Washington are committed to building connections and synergy across the Pacific Northwest to help the quantum community accelerate technical innovation, application development, and a quantum-ready workforce. The end game is to nurture a vibrant regional quantum economy with national and global impact, and to advance QIS technologies to their full potential toward solving some of society's most pressing issues. The summit was kicked off by UW Provost Mark Richards and Charles Tahan, Assistant Director for QIS and Director of the Quantum Coordination Office at the Executive Office of the President's Office of Science and Technology Policy.

New members across industry and academia

NQN welcomed five new members from industry and academia to take the stage at the summit: AWS, Boeing, IonQ, the University of Oregon, and Washington State University. The addition of AWS and Boeing brings two of the Pacific Northwest's largest tech leaders into the coalition. AWS discussed the customer verticals interested in quantum computing technologies: logistics, agriculture, machine learning, finance, energy, and pharma. Boeing shared their Disruptive Computing and Networks team's investments in quantum sensing, computing, and networks. IonQ generated excitement around their plans and vision for a 65,000-square-foot Seattle-area campus.

The University of Oregon described the scientific programs of the more than 50 members of the Oregon Center for Optical, Molecular & Quantum Science, including the Oregon Ions program led by Assistant Professor David Allcock and Nobel Prize winner Professor David Wineland. Washington State University Associate Professor Michael Forbes provided insight into key research areas such as NMR and imaging, analog quantum computing, quantum chaos, atom interferometry, and cryoelectronics.

Driving QIS accessibility and impact

The summit featured a Workforce Development session that brought the Scientific and Business audience tracks together for showcases and discussions on workforce development, upskilling, and curriculum collaborations. Dr. Matthias Troyer, Technical Fellow and Corporate Vice President (CVP) at Azure Quantum, participated in an onstage conversation with Ewin Tang, a Ph.D. student in the UW theoretical computer science group. They discussed the most promising applications for scaled quantum computing, the hardware resources required to achieve practical quantum advantage, and what the market and policymakers can do to drive quantum impact and equity.

The post-event survey results underscored these themes, with attendees wanting more local and state government participation; more frequent summits, scientific forums, and facilitated networking opportunities; member expansion to cover additional organizations and adjacent geographies; and programs to attract entrepreneurs and investors.

In the years since the first Summit, the NQN founding members highlighted progress and impact, including Azure Quantum's demonstration of the formerly elusive physics needed to build scalable topological qubits; UW's attraction of $45 million in QISE funding to its ECE, Chemistry, and Physics departments; and PNNL's new superconducting qubit testbed, along with its HiSVSIM and Ensembled Quantum Computing (EQC) advancements.

A successful first NQN Hackathon 

This year's summit also featured a new NQN Hackathon, hosted by Azure Quantum and IonQ, in which 75 students rolled up their sleeves to tackle hands-on problems. During the hackathon, teams self-organized to build solutions to challenges on IonQ's premium trapped-ion system Aria, which offers 23 algorithmic qubits (#AQ). The hybrid event included virtual workshops ahead of the in-person hackathon, which was staffed by Azure Quantum and IonQ teams on the UW campus. Three winning teams were selected by an NQN Member judging panel and celebrated at the summit's close. Read what EyeQ had to say about their experience and check out Team I-Tummy's project.
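For readers who want to try the same workflow the hackathon teams used, the following is a minimal sketch of submitting a circuit to an IonQ system through the azure-quantum Python SDK's Qiskit provider. The workspace identifiers are placeholders, and the target name "ionq.qpu.aria-1" is an assumption based on Azure Quantum's IonQ naming at the time; check the target list in your own workspace before running.

    # Minimal sketch: run a Bell-state circuit on IonQ hardware via Azure Quantum.
    # The resource_id and location below are placeholders for your workspace,
    # and the target name is an assumption; it may differ in your subscription.
    from azure.quantum.qiskit import AzureQuantumProvider
    from qiskit import QuantumCircuit

    provider = AzureQuantumProvider(
        resource_id="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Quantum/Workspaces/<workspace>",
        location="westus",
    )

    # Use "ionq.simulator" for free testing before queueing on the Aria QPU.
    backend = provider.get_backend("ionq.qpu.aria-1")

    # Prepare and measure a two-qubit entangled (Bell) state.
    circuit = QuantumCircuit(2, 2)
    circuit.h(0)
    circuit.cx(0, 1)
    circuit.measure([0, 1], [0, 1])

    job = backend.run(circuit, shots=500)
    print(job.result().get_counts())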

[Image: Five college-age students sit at a table with their laptops, smiling at the camera as they participate in the NQN Hackathon.]

Unlocking the collective genius of the Pacific Northwest

The summit reinforced the message from keynote speaker Krysta Svore, Vice President for Advanced Quantum Development at Microsoft: "The promise of quantum will only be realized by unlocking the collective genius, not by one company or institution alone." To connect with the Azure Quantum team, we invite you to join the conversation by registering for the Microsoft Quantum Innovator Series; the next webinar is on February 28, 2023, at 9:00 AM PT.

Last but not least, the summit's organizing committee extends its heartfelt thanks to all NQN session chairs, speakers, panelists, attendees, administrative and operational contributors, and the UW HUB venue staff. With 94 percent of post-event survey respondents indicating their intent to attend the next summit, we look forward to the momentum continuing.


*LinkedIn Talent Insights data is derived by aggregating profile data voluntarily submitted by LinkedIn members. As such, LinkedIn cannot guarantee the accuracy of LinkedIn Talent Insights data.

The post Quantum information science momentum accelerates in the Pacific Northwest appeared first on Microsoft Azure Quantum Blog.

Azure Quantum and Classiq collaborate to offer researchers and educators accelerated quantum algorithm design

8 February 2023 at 17:00

Today, Classiq and Microsoft launched a quantum research and education program that offers educational institutions access to Classiq's state-of-the-art quantum software platform coupled with Azure Quantum cloud access to diverse quantum hardware.

This educational collaboration is an important building block in Microsoft's strategy to bring quantum at scale to the world. By empowering educators, researchers, and students with leading quantum technologies on the Azure Quantum platform, we are helping accelerate the collective innovation needed to ultimately achieve impact with quantum at scale.

Classiq, which provides a leading platform for designing, analyzing, and executing quantum circuits, selected Azure Quantum to be its launch partner for its global academic program. Through integration with Azure Quantum, Classiq enables university professors, students, and researchers to speed up algorithm design on quantum computers, bypassing quantum assembly-level language so that users can focus on designing applications instead of gate-level code.

[Image: Two people sitting at computers; the screens show Classiq's synthesis engine software.]

In addition to designing state-of-the-art circuits for near-term quantum devices, Classiq's synthesis engine allows researchers to easily explore large, complex quantum circuits. The circuits, generated as QIR (Quantum Intermediate Representation) code, can then be sent to Azure Quantum's resource estimation service, giving practitioners and classrooms critical insight into designing quantum applications for the fault-tolerant quantum computers of tomorrow.
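As an illustration of that last step, here is a minimal sketch of sending a QIR program to the Azure Quantum Resource Estimator from Python. The workspace identifiers and the QIR file name are placeholders, and the MicrosoftEstimator usage is an assumption based on the azure-quantum SDK at the time of writing, not Classiq's own interface.

    # Minimal sketch: estimate fault-tolerant resources for a QIR program.
    # Workspace identifiers and the QIR file name below are placeholders;
    # the MicrosoftEstimator usage follows the azure-quantum Python SDK.
    from azure.quantum import Workspace
    from azure.quantum.target.microsoft import MicrosoftEstimator

    workspace = Workspace(
        resource_id="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Quantum/Workspaces/<workspace>",
        location="westus",
    )

    estimator = MicrosoftEstimator(workspace)

    # QIR bitcode exported from a circuit-design tool such as Classiq's platform.
    with open("my_circuit.qir", "rb") as f:
        qir_program = f.read()

    job = estimator.submit(qir_program, name="resource-estimation-demo")
    result = job.get_results()  # blocks until the estimation job completes

    # Headline estimates, e.g. required physical qubit count and runtime.
    print(result["physicalCounts"])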

Azure Quantum and Classiq collaborate to empower educators worldwide

Azure Quantum was the natural choice for Classiq to collaborate with as its academic program launch partner.

"The combined offering seamlessly pairs Classiq's easy-to-use software design platform with Azure Quantum's robust portfolio of NISQ hardware, resource estimator, and hybrid quantum computing features. The pairing enables quantum researchers and educators to focus on application development uninhibited by low-level code. This program reflects Classiq and Azure Quantum's deep and shared commitment to invest in global workforce development."

Nir Minerbi, Co-Founder and CEO of Classiq

The combined offering will empower professors worldwide, including those already in Classiq and Azure Quantum's networks across leading institutions, to teach courses and conduct research in all aspects of quantum computing.

"In order to make quantum computing a success, we need a strong interplay between hardware and software. Designing quantum software at the functional level and executing it on multiple QPUs will advance both quantum research and education. The collaboration between Classiq and Microsoft aims at exactly that and will pave the way towards a quantum computing ecosystem capable of solving some of the future's most important challenges."

Dr. Robert Wille, Technical University of Munich Professor and Chair for Design Automation of the Bavarian State Ministry for Science and Arts

The joint offering accelerates quantum software education by providing an advanced platform for automated quantum software design, with seamless execution on quantum hardware.

"The Classiq platform's ability to simplify complex quantum circuits through visualization and automation, in fact, mirrors Classiq's integration approach with Azure Quantum. Users access the best of Classiq's quantum circuit design software and Azure Quantum's cloud-based endpoints and capabilities through a single, simple-to-use Classiq interface and workspace."

Fabrice Frachon, Azure Quantum Principal Program Manager

Azure Quantum is ready to meet learners and practitioners wherever they are in their quantum journey. This new academic program supports application-development-focused teams, requiring only nominal quantum software programming experience. Because of its functional, descriptive approach, Classiq makes it easy to upskill domain experts with little quantum experience and integrate them into high-performing quantum teams.

This global program expands the research and learning applications of Azure Quantum as well as the Azure Quantum for Educators portfolio of approaches, resources, and tools to facilitate the critical objective of skilling up a quantum-ready workforce.

Explore Azure Quantum and Classiq

Learn more about Classiq Academia.

The post Azure Quantum and Classiq collaborate to offer researchers and educators accelerated quantum algorithm design appeared first on Microsoft Azure Quantum Blog.
