What is a logical qubit?
In June 2023, we proposed that quantum computing must graduate through three implementation levels (Quantum Computing Implementation Levels, QCILs) to achieve utility scale: Level 1 Foundational, Level 2 Resilient, Level 3 Scale. All quantum computing technologies today are at Level 1. And while NISQ machines are all around us, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient computation across a sea of logical qubits, something that, to the best of our current knowledge, can only be achieved with error correction and fault tolerance. Fault tolerance will be a necessary ingredient in any quantum supercomputer, and in any practical quantum advantage.

The first step toward the goal of reaching practical quantum advantage is to demonstrate resilient computation on a logical qubit. However, just one logical qubit will not be enough; ultimately the goal is to show that quantum error correction helps non-trivial computation instead of hindering it, and an important element of this non-triviality is the interaction between qubits and their entanglement. Demonstrating an error-corrected resilient computation, initially on two logical qubits, that outperforms the same computation on physical qubits will mark the first demonstration of a resilient computation in our field's history.

The race is on to demonstrate a resilient logical qubit, but what is a meaningful demonstration? Before our industry can declare victory on reaching Level 2 for a given quantum computing hardware, and claim the demonstration of a resilient logical qubit, it's important to align on what this means.

Criteria of Level 2: resilient quantum computation
How should we define a logical qubit? The most meaningful definition of a logical qubit hinges on what one can do with that qubit. Demonstrating a qubit that can only remain idle, that is, be preserved in a memory, is not meaningful if one cannot demonstrate non-trivial operations as well. Therefore, it makes sense to define a logical qubit such that it allows some non-trivial, encoded computation to be performed on it.

Distinct hardware comes with distinct native operations. This presents a significant challenge in formally defining a logical qubit; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that mark the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a "logical qubit".

Entrance criteria to Level 2

Exiting Level 1 NISQ computing and entering Level 2 Resilient quantum computing is achieved when fewer errors are observed on the output of a logical circuit using quantum error correction than on the analogous physical circuit without error correction.

We argue that a demonstration of the resilient level of quantum computation must satisfy the following criteria:
- involve at least 2 logical qubits;
- demonstrate a convincingly large separation (ideally 5-10x) between the logical error rate and the physical error rate on a non-trivial logical circuit;
- correct all individual circuit faults (the "fault distance" must be at least 3);
- implement a non-trivial logical operation that generates entanglement between logical qubits.

The justification for most of these is self-evident: being able to correct errors is how resiliency is achieved, and demonstrating an improvement over physical error rates is precisely what we mean by resiliency. But we feel it is worth emphasizing the requirement for logical entanglement. Our goal is to achieve advantage with a quantum computer, and an important ingredient of advantage is entanglement across at least 2 logical qubits.

The distinction between the Resilient Level and the Scale Level is also important to emphasize: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, we find it important to allow some forms of post-selection, with the following requirements:
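To see why a fault distance of at least 3 matters: a distance-d code corrects any combination of up to (d - 1) // 2 faults, so d = 3 is the smallest distance that corrects every single fault. The sketch below uses the standard sub-threshold scaling heuristic; the threshold, prefactor, and physical error rate are illustrative assumptions, not measured values for any hardware.

```python
# Heuristic sub-threshold scaling: a distance-d code corrects up to
# (d - 1) // 2 faults, so the leading-order logical failure comes from
# (d + 1) // 2 simultaneous faults:
#   p_logical ~ prefactor * (p_physical / p_threshold) ** ((d + 1) // 2)
# The threshold and prefactor below are assumed values for illustration only.

def faults_corrected(distance: int) -> int:
    """Maximum number of arbitrary faults a distance-d code corrects."""
    return (distance - 1) // 2

def logical_error_rate(p_physical: float, distance: int,
                       p_threshold: float = 1e-2, prefactor: float = 0.1) -> float:
    """Heuristic logical error rate for a code of the given fault distance."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) // 2)

p_phys = 3e-4  # assumed physical error rate, below the assumed threshold
for d in (3, 5, 7):
    print(f"d={d}: corrects {faults_corrected(d)} fault(s), "
          f"p_logical ~ {logical_error_rate(p_phys, d):.1e}")
```

Below the assumed threshold, each increase in distance multiplies the suppression, which is what makes the logical-versus-physical separation grow.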
- post-selection acceptance criteria must be computable in real time (but may be implemented in post-processing for a demonstration);
- post-selection must be scalable (the rejection rate can be made vanishingly small);
- if post-selection is not scalable, it must at least correct all low-weight errors in the computation (with the exception of state preparation, since post-selection in state preparation is scalable).

In other words, post-selection must either be fully compatible with scalability, or it must still allow for a demonstration of the key ingredients of error correction, not simply error detection.

Measuring progress across Level 2

Once a quantum computing hardware has entered the Resilient Level, it is important to also be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale, as the requirements to reach Scale include achieving upwards of 1,000 logical qubits with logical error rates better than 10^-12, a mega-rQOPS, and more.

Progress toward scale may be measured along four axes: universality, scalability, fidelity, and composability. We offer the following ideas to the community on how to measure progress along these four axes, so that we as a community can benchmark progress in the resilient level of utility-scale quantum computation:
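As a toy illustration of the gap between error detection with post-selection and genuine error correction, consider a classical 3-bit repetition code under independent bit flips. Everything here (the code, the noise model, the error rate) is an assumption chosen for illustration, not a model of any particular hardware.

```python
import random

# Toy Monte Carlo contrast between post-selected error *detection* and
# error *correction*, using a classical 3-bit repetition code under
# i.i.d. bit flips. Illustrative assumptions only.

def run(p: float, shots: int = 100_000, seed: int = 1) -> tuple[float, float]:
    """Return (rejection rate under detection, logical failure rate under correction)."""
    rng = random.Random(seed)
    rejected = correct_fail = 0
    for _ in range(shots):
        bits = [1 if rng.random() < p else 0 for _ in range(3)]  # encode 0 as 000
        s1, s2 = bits[0] ^ bits[1], bits[1] ^ bits[2]            # parity checks
        if s1 or s2:
            rejected += 1          # detection: discard any run with a syndrome
        if sum(bits) >= 2:
            correct_fail += 1      # correction: majority vote flips the logical bit
    return rejected / shots, correct_fail / shots

rej, fail = run(p=0.05)
print(f"detection rejects {rej:.1%} of runs")
print(f"correction keeps every run, failing in {fail:.2%}")
```

Detection achieves a low conditional error rate only by discarding a constant fraction of runs, which is why post-selection that is not scalable must still correct low-weight errors rather than merely flag them.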
- Universality: Universality typically splits into two components: Clifford group gates and non-Clifford group gates. Does one have a set of high-fidelity Clifford-complete logical operations? Does one have a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations. Of course, different hardware may employ different strategies.
- Scalability: At its core, the resource requirement for advantage must be reasonable (i.e., a small fraction of Earth's resources or a person's lifetime). More technically, the quantum resource overhead required should scale polynomially with the target logical error rate of any quantum algorithm. Note also that some systems may achieve very high fidelity but have limited numbers of physical qubits, so that improving the error correction code in the most obvious way (increasing its distance) may be difficult.
- Fidelity: Logical error rates of all operations improve with code size (sub-threshold). More strictly, one would like to see that the logical error rate is better than the physical error rate (sub-pseudothreshold). Progress on this axis can be measured with Quantum Characterization, Verification & Validation (QCVV) performed at the logical level, or with other operational tasks such as Bell inequality violations and self-testing protocols.
- Composability: Composable gadgets exist for all logical operations.

Criteria to advance from Level 2 to Level 3, a Quantum Supercomputer

The exit from the resilient level of logical computation, and the achievement of the world's first quantum supercomputer, will be marked by large-depth computations on high-fidelity circuits involving upwards of hundreds of logical qubits: for example, a logical circuit on ~100+ logical qubits with a universal set of composable logical operations reaching an error rate of ~10^-8 or better.
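As a rough illustration of these targets, the sketch below estimates the code distance needed to reach a given logical error rate under the same sub-threshold heuristic as above, and computes the rQOPS figure of merit, treated here as (number of logical qubits) x (logical clock rate). The physical error rate, threshold, prefactor, overhead formula, and 1 kHz logical clock are all illustrative assumptions, not hardware numbers.

```python
# Heuristic sub-threshold scaling for a distance-d code:
#   p_logical ~ PREFACTOR * (P_PHYS / P_THRESHOLD) ** ((d + 1) // 2).
# All constants are illustrative assumptions, not hardware measurements.
P_PHYS, P_THRESHOLD, PREFACTOR = 1e-3, 1e-2, 0.1

def distance_for_target(target: float) -> int:
    """Smallest odd code distance whose heuristic logical error rate meets target."""
    d = 3
    while PREFACTOR * (P_PHYS / P_THRESHOLD) ** ((d + 1) // 2) > target:
        d += 2
    return d

def rqops(logical_qubits: int, logical_clock_hz: float) -> float:
    """Reliable quantum operations per second: qubit count times logical clock rate."""
    return logical_qubits * logical_clock_hz

# Level 2 exit target (~10^-8) vs. the quantum-supercomputer target (10^-12);
# ~2*d*d is a rough surface-code-like physical-qubit overhead (assumption).
for target in (1e-8, 1e-12):
    d = distance_for_target(target)
    print(f"target {target:.0e}: distance ~{d}, ~{2 * d * d} physical qubits per logical qubit")

# 1,000 logical qubits at an assumed 1 kHz logical clock gives one mega-rQOPS.
print(f"{rqops(1_000, 1_000.0):.0e} rQOPS")
```

The key point of the sketch: tightening the target error rate by four orders of magnitude raises the required distance only modestly, which is what makes the overhead scaling tractable.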
Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits with a logical error rate of 10^-12 and a mega-rQOPS. The performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS).

Conclusion
It's no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on a path to ultimately achieving practical quantum advantage. If you have thoughts on these criteria for a logical qubit, or on how to measure progress, we'd love to hear from you.
The post Defining logical qubits: criteria for Resilient Quantum Computation appeared first on Microsoft Azure Quantum Blog.