By Adrian Cockcroft, Partner & Analyst, OrionX
Last year the UCIe chiplet standard had just been launched; since then, roughly 130 companies have joined it. The idea is that to build a complete CPU or GPU, you don’t have to put everything on the same chip from the same vendor. Instead, you can use the UCIe standard to innovate in your own chiplet, then surround it with chiplets for memory and I/O from a range of vendors.
One frustration for users of quantum computers on the cloud is long job queues: the delay between submitting a job and finally getting the results back can certainly hurt productivity. To help solve this issue, [...]
PASQAL, a quantum computing company, has announced a $90 million initiative over five years in Sherbrooke, Quebec. The project aims to manufacture and commercialize quantum computers, and conduct research and development in collaboration with academic and industrial partners within DistriQ, a quantum innovation zone. The goal is to establish Sherbrooke as a globally recognized quantum [...]
The practice of keeping time hinges on stable oscillations. In a grandfather clock, the length of a second is marked by a single swing of the pendulum. In a digital watch, the vibrations of a quartz crystal mark much smaller fractions of time. And in atomic clocks, the world’s state-of-the-art timekeepers, the oscillations of a laser beam stimulate atoms to vibrate 9.2 billion times per second. These smallest, most stable divisions of time set the timing for today’s satellite communications, GPS systems, and financial markets.
A clock’s stability depends on the noise in its environment. A slight wind can throw a pendulum’s swing out of sync. And heat can disrupt the oscillations of atoms in an atomic clock. Eliminating such environmental effects can improve a clock’s precision. But only by so much.
A new MIT study finds that even if all noise from the outside world is eliminated, the stability of clocks, laser beams, and other oscillators would still be vulnerable to quantum mechanical effects. The precision of oscillators would ultimately be limited by quantum noise.
But in theory, there’s a way to push past this quantum limit. In their study, the researchers also show that by manipulating, or “squeezing,” the states that contribute to quantum noise, the stability of an oscillator could be improved, even past its quantum limit.
“What we’ve shown is, there’s actually a limit to how stable oscillators like lasers and clocks can be, that’s set not just by their environment, but by the fact that quantum mechanics forces them to shake around a little bit,” says Vivishek Sudhir, assistant professor of mechanical engineering at MIT. “Then, we’ve shown that there are ways you can even get around this quantum mechanical shaking. But you have to be more clever than just isolating the thing from its environment. You have to play with the quantum states themselves.”
The team is working on an experimental test of their theory. If they can demonstrate that they can manipulate the quantum states in an oscillating system, the researchers envision that clocks, lasers, and other oscillators could be tuned to super-quantum precision. These systems could then be used to track infinitesimally small differences in time, such as the fluctuations of a single qubit in a quantum computer or the presence of a dark matter particle flitting between detectors.
“We plan to demonstrate several instances of lasers with quantum-enhanced timekeeping ability over the next several years,” says Hudson Loughlin, a graduate student in MIT’s Department of Physics. “We hope that our recent theoretical developments and upcoming experiments will advance our fundamental ability to keep time accurately, and enable new revolutionary technologies.”
In studying the stability of oscillators, the researchers looked first to the laser — an optical oscillator that produces a wave-like beam of highly synchronized photons. The invention of the laser is largely credited to physicists Arthur Schawlow and Charles Townes, who coined the name from its descriptive acronym: light amplification by stimulated emission of radiation.
A laser’s design centers on a “lasing medium” — a collection of atoms, usually embedded in glass or crystals. In the earliest lasers, a flash tube surrounding the lasing medium would stimulate electrons in the atoms to jump up in energy. When the electrons relax back to lower energy, they give off some radiation in the form of a photon. Two mirrors, on either end of the lasing medium, reflect the emitted photon back into the atoms to stimulate more electrons, and produce more photons. One mirror, together with the lasing medium, acts as an “amplifier” to boost the production of photons, while the second mirror is partially transmissive and acts as a “coupler” to extract some photons out as a concentrated beam of laser light.
At the time of the laser’s invention, Schawlow and Townes put forth a hypothesis that a laser’s stability would be limited by quantum noise. Others have since tested their hypothesis by modeling the microscopic features of a laser. Through very specific calculations, they showed that, indeed, imperceptible quantum interactions among the laser’s photons and atoms could limit the stability of their oscillations.
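For context (this expression is not quoted in the study summary above, and prefactor conventions vary between treatments), the resulting quantum limit is usually written as the Schawlow-Townes linewidth,

\[
\Delta\nu_{\mathrm{laser}} \;\approx\; \frac{\pi\, h\, \nu\, (\Delta\nu_c)^2}{P_{\mathrm{out}}},
\]

where $\nu$ is the laser frequency, $\Delta\nu_c$ the cold-cavity linewidth, and $P_{\mathrm{out}}$ the output power: the frequency-noise floor shrinks as the output power grows, but never reaches zero.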
“But this work had to do with extremely detailed, delicate calculations, such that the limit was understood, but only for a specific kind of laser,” Sudhir notes. “We wanted to enormously simplify this, to understand lasers and a wide range of oscillators.”
Putting the “squeeze” on
Rather than focus on a laser’s physical intricacies, the team looked to simplify the problem.
“When an electrical engineer thinks of making an oscillator, they take an amplifier, and they feed the output of the amplifier into its own input,” Sudhir explains. “It’s like a snake eating its own tail. It’s an extremely liberating way of thinking. You don’t need to know the nitty gritty of a laser. Instead, you have an abstract picture, not just of a laser, but of all oscillators.”
In their study, the team drew up a simplified representation of a laser-like oscillator. Their model consists of an amplifier (such as a laser’s atoms), a delay line (for instance, the time it takes light to travel between a laser’s mirrors), and a coupler (such as a partially reflective mirror).
The team then wrote down the equations of physics that describe the system’s behavior, and carried out calculations to see where in the system quantum noise would arise.
“By abstracting this problem to a simple oscillator, we can pinpoint where quantum fluctuations come into the system, and they come in in two places: the amplifier and the coupler that allows us to get a signal out of the oscillator,” Loughlin says. “If we know those two things, we know what the quantum limit on that oscillator’s stability is.”
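As a purely illustrative aside, here is a toy numerical sketch of the abstracted oscillator described above (our own construction, not the paper’s calculation): a saturated amplifier feeding a loop, with an output coupler, and small random perturbations injected only at the two places the researchers identify, the amplifier and the coupler. All noise amplitudes are arbitrary assumptions; the point is only that the output phase then drifts in a random walk, which is the kind of timing instability the quantum limit constrains.

```python
import numpy as np

# Toy model of the abstract oscillator: amplifier -> delay line -> coupler,
# with small random perturbations injected only at the amplifier and the
# coupler. All noise amplitudes are arbitrary, assumed values.
rng = np.random.default_rng(0)

n_round_trips = 20_000
coupling = 0.05        # fraction of intracavity power extracted per round trip
amp_noise = 1e-3       # perturbation added by the amplifier each round trip
coupler_noise = 1e-3   # vacuum-like noise entering through the coupler port

field = 1.0 + 0.0j                      # intracavity complex field amplitude
output_phase = np.empty(n_round_trips)  # phase of the extracted output field

for k in range(n_round_trips):
    # Amplifier: restore the field to unit magnitude (crude gain saturation),
    # then add a small random complex perturbation.
    field = field / abs(field)
    field += amp_noise * (rng.standard_normal() + 1j * rng.standard_normal())

    # Coupler: a fraction of the field leaves as the output beam, and noise
    # leaks in through the same partially reflective port.
    out = (np.sqrt(coupling) * field
           + coupler_noise * (rng.standard_normal() + 1j * rng.standard_normal()))
    field = np.sqrt(1.0 - coupling) * field
    output_phase[k] = np.angle(out)

drift = np.unwrap(output_phase)

def mean_square_phase_change(lag):
    """Average squared phase change over `lag` round trips."""
    d = drift[lag:] - drift[:-lag]
    return np.mean(d ** 2)

# For a noise-driven oscillator the phase random-walks, so the mean-square
# phase change keeps growing with the time separation.
print("over 10 round trips  :", mean_square_phase_change(10))
print("over 1000 round trips:", mean_square_phase_change(1000))
```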
Sudhir says scientists can use the equations they lay out in their study to calculate the quantum limit in their own oscillators.
What’s more, the team showed that this quantum limit might be overcome, if quantum noise in one of the two sources could be “squeezed.” Quantum squeezing is the idea of minimizing quantum fluctuations in one aspect of a system at the expense of proportionally increasing fluctuations in another aspect. The effect is similar to squeezing air from one part of a balloon into another.
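In standard textbook notation (ours, not quoted from the study), this trade-off is usually expressed through two quadratures $X_1$ and $X_2$ of a field mode:

\[
\Delta X_1\,\Delta X_2 \;\ge\; \tfrac{1}{4},
\qquad
\Delta X_1 = \tfrac{1}{2}e^{-r}, \quad \Delta X_2 = \tfrac{1}{2}e^{r} \ \ \text{(squeezed vacuum)}.
\]

A squeezed state with squeezing parameter $r$ pushes $\Delta X_1$ below the vacuum value of $\tfrac{1}{2}$ only by inflating $\Delta X_2$, leaving the uncertainty product at its minimum.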
In the case of a laser, the team found that if quantum fluctuations in the coupler were squeezed, it could improve the precision, or the timing of oscillations, in the outgoing laser beam, even as noise in the laser’s power would increase as a result.
“When you find some quantum mechanical limit, there’s always some question of how malleable is that limit?” Sudhir says. “Is it really a hard stop, or is there still some juice you can extract by manipulating some quantum mechanics? In this case, we find that there is, which is a result that is applicable to a huge class of oscillators.”
This research is supported, in part, by the National Science Foundation.
Clocks, lasers, and other oscillators could be tuned to super-quantum precision, allowing researchers to track infinitesimally small differences in time, according to a new MIT study.
Rice University’s Pengcheng Dai, Randall Hulet, Douglas Natelson, Han Pu, Ming Yi and Boris Yakobson attended the Vannevar Bush Faculty Fellowship (VBFF) Symposium on Extreme Quantum Materials. The event showcased research on the physics of strongly correlated quantum materials and marked the start of a five-year research programme by Qimiao Si, who won a 2023 […]
[Diagram showing how Q-CTRL Embedded software is integrated with IBM Quantum]
Q-CTRL and IBM have announced that Q-CTRL’s error suppression technology, named Q-CTRL Embedded, has been integrated into the IBM Qiskit Runtime system. This feature is currently only available for users on the Pay-As-You-Go plan. There is no additional cost for those users who utilize [...]
Scientists have developed a new tool for the measurement of entanglement in many-body systems and demonstrated it in experiments. The method enables the study of previously inaccessible physical phenomena and could contribute to a better understanding of quantum materials.
By Dr Chris Mansell, Senior Scientific Writer at Terra Quantum
Shown below are summaries of a few interesting research papers in quantum technology that we have seen over the past month.
Hardware
Title: Electron charge qubit with 0.1 millisecond coherence time
Organizations: Argonne National Laboratory; University of Chicago; Lawrence Berkeley National Laboratory; The NSF AI Institute [...]
Matt Versaggi, a senior leader in the AI space, is interviewed by Yuval Boger. Matt discusses his grassroots approach to integrating quantum technology in healthcare, emphasizing the need for educational programs and patenting initiatives. He also shares insights into the role of quantum business strategists, who can offer cost-effective, timely advice to organizations. Matt also [...]
Quantum machine learning has the potential for a transformative impact across industry sectors and in particular in finance. In our work we look at the problem of hedging where deep reinforcement learning offers a powerful framework for real markets. We develop quantum reinforcement learning methods based on policy-search and distributional actor-critic algorithms that use quantum neural network architectures with orthogonal and compound layers for the policy and value functions. We prove that the quantum neural networks we use are trainable, and we perform extensive simulations that show that quantum models can reduce the number of trainable parameters while achieving comparable performance and that the distributional approach obtains better performance than other standard approaches, both classical and quantum. We successfully implement the proposed models on a trapped-ion quantum processor, utilizing circuits with up to 16 qubits, and observe performance that agrees well with noiseless simulation. Our quantum techniques are general and can be applied to other reinforcement learning problems beyond hedging.
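As a hedged illustration of one ingredient mentioned in the abstract, the sketch below builds a classical analogue of an “orthogonal layer”: a norm-preserving linear map parametrized by a small number of rotation angles (the paper realizes such layers with quantum circuits; this numpy version only shows the norm-preservation property and the reduced parameter count). The function names and sizes are ours.

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n rotation acting in the (i, j) plane; any product of these is orthogonal."""
    g = np.eye(n)
    g[i, i] = g[j, j] = np.cos(theta)
    g[i, j] = -np.sin(theta)
    g[j, i] = np.sin(theta)
    return g

def orthogonal_layer(x, thetas):
    """Apply a trainable orthogonal transform built from a sequence of plane rotations."""
    n = len(x)
    w = np.eye(n)
    k = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            w = givens(n, i, j, thetas[k]) @ w
            k += 1
    return w @ x

x = np.array([1.0, 0.5, -0.3, 0.2])
thetas = np.random.default_rng(1).uniform(-np.pi, np.pi, size=6)  # n(n-1)/2 angles for n = 4
y = orthogonal_layer(x, thetas)
# The layer preserves the norm of its input: the two printed values match.
print(np.linalg.norm(x), np.linalg.norm(y))
```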
The topological quantum frequency comb is a burgeoning topic that combines topological phases with quantum systems and has inspired many intriguing developments in topological quantum optics. Producing quantum frequency combs in valley photonic crystal topological resonators can introduce robustness to the quantum states in integrated photonic devices.
Abstract
Recent advances in manipulating topological phases in quantum systems have promised integrated quantum devices with conspicuous functionalities, such as robustness against fabrication defects. At the same time, the introduction of quantum frequency combs enables an extreme expansion of quantum resources. Here, the generation of high-dimensional entangled quantum frequency combs via four-wave mixing processes in valley-Hall topological resonators is theoretically proposed. Specifically, two irregular photonic crystal resonators supporting whispering-gallery resonator modes are demonstrated, leading to coherent quantum frequency combs at telecommunication wavelengths. By using the Schmidt decomposition, it is shown that the quantum frequency combs are frequency entangled, and it is concluded that the effective dimensions of the quantum frequency combs in these two resonators are at least seven and six, respectively. Moreover, these quantum frequency combs inherit the topological protection of valley kink states, showing robustness against defects in the resonators. Topological quantum frequency combs thus show intriguing potential for the generation and control of topologically protected high-dimensional quantum states in integrated photonic crystal platforms.
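To make the Schmidt-decomposition step concrete, here is a small, hedged numpy sketch (a generic toy joint spectral amplitude of our own choosing, not the resonator spectra from the paper): the singular value decomposition of the discretized two-photon spectrum yields the Schmidt coefficients, from which the effective entanglement dimension quoted in such works is computed.

```python
import numpy as np

# Toy joint spectral amplitude (JSA) of a photon pair over discrete frequency
# bins: tight anti-correlation in the sum frequency, broad in the difference.
# This is an illustrative Gaussian, not the resonator spectra from the paper.
n = 32
w = np.linspace(-1.0, 1.0, n)
jsa = (np.exp(-((w[:, None] + w[None, :]) ** 2) / 0.02)
       * np.exp(-((w[:, None] - w[None, :]) ** 2) / 2.0))

# The Schmidt decomposition of the discretized JSA is its singular value
# decomposition; the squared, normalized singular values are the Schmidt
# coefficients lambda_i.
s = np.linalg.svd(jsa, compute_uv=False)
lam = s ** 2 / np.sum(s ** 2)

# Effective entanglement dimension (Schmidt number) K = 1 / sum(lambda_i^2).
K = 1.0 / np.sum(lam ** 2)
print(f"effective dimension K = {K:.2f}")
```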
In a groundbreaking achievement for quantum optics and communication, the research unveils a high-performance telecom-wavelength biphoton source based on a hot 87Rb atomic vapor cell. With its compatibility with existing telecom networks, suitability for long-distance communication, exceptional efficiency, and minimal noise, the work paves the way for the realization of optical-fiber-based quantum communications and networks.
Abstract
Telecom-band quantum light sources are critical to the development of long-distance quantum communication technologies. A high-performance telecom-wavelength biphoton source based on a hot 87Rb atomic vapor cell is reported. Time-correlated biphotons are generated from the cascade-type 5S1/2–5P3/2–4D5/2 transition of 87Rb via a spontaneous four-wave mixing process. A maximum biphoton cross-correlation $g_{\mathrm{SI}}^{(2)}(\tau)$ of 44(3) is achieved under the condition of a high optical depth of 112(3), including two-photon absorption, with a spectral width of approximately 300 MHz. The biphoton coincidence count rate is estimated to be on the order of 38,000 cps mW⁻¹. It is believed that the telecom-wavelength biphoton source from an atomic vapor cell can be applied in long-distance quantum networks and practical quantum repeaters based on atom–photon interactions.
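For reference (standard definition, not restated in the abstract), the reported cross-correlation is the normalized second-order correlation between the signal (S) and idler (I) fields,

\[
g_{\mathrm{SI}}^{(2)}(\tau)
= \frac{\langle \hat{E}_S^{\dagger}(t)\, \hat{E}_I^{\dagger}(t+\tau)\, \hat{E}_I(t+\tau)\, \hat{E}_S(t) \rangle}
       {\langle \hat{E}_S^{\dagger}(t)\hat{E}_S(t) \rangle \, \langle \hat{E}_I^{\dagger}(t+\tau)\hat{E}_I(t+\tau) \rangle},
\]

where values far above 2 indicate strongly non-classical photon-pair correlations.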
A capacitively-coupled coplanar waveguide microwave resonator is fabricated and characterized, revealing an unconventional reduction of loss with decreasing temperature below 50 mK at low photon numbers. This anomalous behavior is attributed to the response bandwidth of a single two-level system (TLS) dropping below the TLS-resonance detuning at low temperatures, reducing the intrinsic loss of the resonator.
Abstract
Superconducting resonators are widely used in many applications such as qubit readout for quantum computing and kinetic inductance detectors. These resonators are susceptible to numerous loss and noise mechanisms, especially dissipation due to two-level systems (TLS), which becomes the dominant source of loss in the few-photon, low-temperature regime. In this study, capacitively coupled aluminum half-wavelength coplanar waveguide resonators are investigated. Surprisingly, the loss of the resonators is observed to decrease with decreasing temperature at low excitation powers and at temperatures below TLS saturation. This behavior is attributed to the reduction of the TLS resonant response bandwidth with decreasing temperature and power to below the detuning between the TLS and the resonant photon frequency in a discrete ensemble of TLS. When the response bandwidths of the TLS are smaller than their detunings from resonance, the resonant response, and thus the loss, is reduced. At higher excitation powers, the loss follows a logarithmic power dependence, consistent with predictions of the generalized tunneling model (GTM). A model combining the discrete TLS ensemble with the GTM is proposed and matches the temperature and power dependence of the measured internal loss of the resonator with reasonable parameters.
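For context, the baseline such measurements are usually compared against is the standard tunneling-model expression for TLS loss (quoted here from the general literature, not from this paper; the paper's generalized model modifies the power dependence):

\[
\frac{1}{Q_{\mathrm{TLS}}}
\;=\; F\,\delta^{0}_{\mathrm{TLS}}\;
\frac{\tanh\!\big(\hbar\omega / 2 k_B T\big)}{\sqrt{1 + \langle n \rangle / n_c}},
\]

where $F$ is the filling factor, $\delta^{0}_{\mathrm{TLS}}$ the intrinsic TLS loss tangent, $\langle n \rangle$ the mean photon number, and $n_c$ a critical photon number. In this picture the loss saturates at high power and grows as the temperature falls, which is exactly the trend the measured resonators deviate from.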
In this interview at the SC23 conference, we caught up with Milind Pandit, AI Technical Evangelist at Intel, who discussed the company’s “Bringing AI Everywhere” mission and its significance for the HPC community. AI was everywhere at the Supercomputing event in Denver, and in his comments, Pandit takes up the theme of how large language […]
Today at AWS re:Invent, Amazon Web Services announced the next generation of two AWS-designed chip families — AWS Graviton4 and AWS Trainium2 — that the company said deliver advancements in price performance and energy efficiency for such workloads as machine learning (ML) training and generative artificial intelligence (AI) applications.
HPC industry analyst firm Hyperion Research has announced the recipients of the 19th round of HPC Innovation Excellence Awards. The 2023 winners are: FedData Technology Solutions, Boston University, McMaster University and HPE, and HLRS and WIKKI GmbH.
In June 2023, we described how quantum computing must graduate through three implementation levels (Quantum Computing Implementation Levels, QCILs) to achieve utility scale: Level 1 Foundational, Level 2 Resilient, Level 3 Scale. All quantum computing technologies today are at Level 1, and while NISQ machines are all around us, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient computation across a sea of logical qubits, something that, to the best of our current knowledge, can only be achieved with error correction and fault tolerance. Fault tolerance will be a necessary and essential ingredient in any quantum supercomputer and in any practical quantum advantage.

The first step toward practical quantum advantage is to demonstrate resilient computation on a logical qubit. However, just one logical qubit will not be enough; ultimately the goal is to show that quantum error correction helps non-trivial computation rather than hindering it, and an important element of this non-triviality is the interaction between qubits and their entanglement. Demonstrating an error-corrected resilient computation, initially on two logical qubits, that outperforms the same computation on physical qubits will mark the first demonstration of a resilient computation in our field's history.

The race is on to demonstrate a resilient logical qubit, but what is a meaningful demonstration? Before our industry can declare victory on reaching Level 2 for a given quantum computing hardware and claim the demonstration of a resilient logical qubit, it is important to align on what this means.

Criteria of Level 2: resilient quantum computation
How should we define a logical qubit? The most meaningful definition hinges on what one can do with that qubit: demonstrating a qubit that can only remain idle, that is, be preserved in a memory, is not meaningful if one cannot demonstrate non-trivial operations as well. Therefore, it makes sense to define a logical qubit such that it allows some non-trivial, encoded computation to be performed on it.

Distinct hardware comes with distinct native operations. This presents a significant challenge in formally defining a logical qubit; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that mark the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a "logical qubit".

Entrance criteria to Level 2

Exiting Level 1 NISQ computing and entering Level 2 Resilient quantum computing is achieved when fewer errors are observed on the output of a logical circuit using quantum error correction than on the analogous physical circuit without error correction. We argue that a demonstration of the resilient level of quantum computation must satisfy the following criteria:
1. Involve at least 2 logical qubits.
2. Demonstrate a convincingly large separation (ideally 5-10x) between logical and physical error rates, with the logical error rate below the physical error rate on a non-trivial logical circuit.
3. Correct all individual circuit faults (the "fault distance" must be at least 3).
4. Implement a non-trivial logical operation that generates entanglement between logical qubits.

The justification for these is self-evident: being able to correct errors is how resiliency is achieved, and demonstrating an improvement over physical error rates is precisely what we mean by resiliency. Still, we feel it is worth emphasizing the requirement for logical entanglement: our goal is to achieve advantage with a quantum computer, and an important ingredient of advantage is entanglement across at least 2 logical qubits.

The distinction between the Resilient Level and the Scale Level is also important to emphasize: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, we find it important to allow some forms of post-selection, with the following requirements:
1. Post-selection acceptance criteria must be computable in real time (but may be implemented in post-processing for a demonstration).
2. Post-selection should be scalable (the rejection rate can be made vanishingly small).
3. If post-selection is not scalable, it must at least correct all low-weight errors in the computations (with the exception of state preparation, since post-selection in state preparation is scalable).

In other words, post-selection must be either fully compatible with scalability, or it must still allow for demonstration of the key ingredients of error correction, not simply error detection.

Measuring progress across Level 2

Once a quantum computing hardware has entered the Resilient Level, it is important to also be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale, as the requirements to reach Scale include achieving upwards of 1,000 logical qubits with logical error rates better than 10^-12, a mega-rQOPS, and more.

Progress toward scale may be measured along four axes: universality, scalability, fidelity, and composability. We offer the following ideas to the community on how to measure progress across these four axes, so that we as a community can benchmark progress in the resilient level of utility-scale quantum computation:
Universality: Universality typically splits into two components: Clifford-group gates and non-Clifford-group gates. Does one have a set of high-fidelity Clifford-complete logical operations? Does one have a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations. Of course, different hardware may employ different strategies.

Scalability: At its core, the resource requirement for advantage must be reasonable (i.e., a small fraction of Earth's resources or of a person's lifetime). More technically, the quantum resource overhead required should scale polynomially with the target logical error rate of any quantum algorithm. Note also that some systems may achieve very high fidelity but have limited numbers of physical qubits, so that improving the error correction code in the most obvious way (increasing the distance) may be difficult.

Fidelity: Logical error rates of all operations should improve with code size (sub-threshold). More strictly, one would like to see logical error rates better than physical error rates (sub-pseudothreshold). Progress on this axis can be measured with Quantum Characterization, Verification & Validation (QCVV) performed at the logical level, or with other operational tasks such as Bell inequality violations and self-testing protocols.

Composability: Composable gadgets for all logical operations.

Criteria to advance from Level 2 to Level 3, a Quantum Supercomputer

The exit from the resilient level of logical computation, and the achievement of the world's first quantum supercomputer, will be marked by large-depth computations on high-fidelity circuits involving upwards of hundreds of logical qubits: for example, a logical circuit on ~100+ logical qubits with a universal set of composable logical operations hitting a fidelity of ~10^-8 or better. Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits with a logical error rate of 10^-12 and a mega-rQOPS. Performance of a quantum supercomputer can then be measured in reliable quantum operations per second (rQOPS).
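To put the quoted error-rate target in perspective, here is a rough, hedged sketch (ours, not from the post, using the common surface-code scaling heuristic with assumed numbers) of the code distance a 10^-12 logical error rate might require:

```python
# Back-of-the-envelope sketch (ours, not from the post) of what a 10^-12 logical
# error rate target implies, using the standard surface-code scaling heuristic
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2).
# Every number below is an assumption chosen purely for illustration.
p_physical = 1e-3    # assumed physical error rate
p_threshold = 1e-2   # assumed error-correction threshold
A = 0.3              # assumed prefactor
target = 1e-12       # Level 3 logical error rate quoted above

d = 3
while A * (p_physical / p_threshold) ** ((d + 1) / 2) > target:
    d += 2           # surface-code distances are odd
print("estimated code distance:", d)
print("physical qubits per logical qubit (~2*d^2 for a rotated surface code):", 2 * d * d)
```

With these assumed numbers the estimate lands around distance 23, roughly a thousand physical qubits per logical qubit, which is why Level 3 machines are expected to need very large physical-qubit counts.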
Conclusion

It's no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on the path to ultimately achieving practical quantum advantage. If you have thoughts on these criteria for a logical qubit, or on how to measure progress, we'd love to hear from you.
Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.
The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.
Key highlights of the expanded collaboration include:
Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
Hosting NVIDIA DGX Cloud on AWS:
Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
Project Ceiba supercomputer:
Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and a processing capability of 65 exaflops.
Introduction of new Amazon EC2 instances:
AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
Software innovations:
NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.
This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.
Internally, Amazon robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments first before real-world deployment.
The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.