Quantum machine learning has the potential for a transformative impact across industry sectors, and in particular in finance. In our work, we look at the problem of hedging, where deep reinforcement learning offers a powerful framework for real markets. We develop quantum reinforcement learning methods based on policy-search and distributional actor-critic algorithms that use quantum neural network architectures with orthogonal and compound layers for the policy and value functions. We prove that the quantum neural networks we use are trainable, and we perform extensive simulations showing that quantum models can reduce the number of trainable parameters while achieving comparable performance, and that the distributional approach obtains better performance than other standard approaches, both classical and quantum. We successfully implement the proposed models on a trapped-ion quantum processor, utilizing circuits with up to $16$ qubits, and observe performance that agrees well with noiseless simulation. Our quantum techniques are general and can be applied to other reinforcement learning problems beyond hedging.

The topological quantum frequency comb is a burgeoning topic that combines topological phases with quantum systems and has inspired many intriguing developments in topological quantum optics. Producing quantum frequency combs in valley photonic crystal topological resonators can impart robustness to quantum states in integrated photonic devices.

Abstract

Recent advances in manipulating topological phases in quantum systems have promised integrated quantum devices with conspicuous functionalities, such as robustness against fabrication defects. At the same time, the introduction of quantum frequency combs enables an extreme expansion of quantum resources. Here, the generation of high-dimensional entangled quantum frequency combs via four-wave mixing processes in valley-Hall topological resonators is theoretically proposed. Specifically, two irregular photonic crystal resonators are demonstrated that support whispering-gallery resonator modes, leading to coherent quantum frequency combs at telecommunication wavelengths. Using the Schmidt decomposition, the quantum frequency combs are shown to be frequency entangled, and the effective dimensions of the quantum frequency combs in these two resonators are found to be at least seven and six, respectively. Moreover, these quantum frequency combs inherit the topological protection of valley kink states, showing robustness against defects in the resonators. Topological quantum frequency combs thus show intriguing potential for the generation and control of topologically protected high-dimensional quantum states in integrated photonic crystal platforms.
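The effective dimension reported above comes from a Schmidt decomposition of the biphoton state. As a minimal sketch (a toy joint-amplitude matrix, not the paper's simulated comb), the Schmidt coefficients of a bipartite state written as an amplitude matrix are its singular values, and the effective dimension is the inverse participation ratio of the Schmidt probabilities:

```python
import numpy as np

# Toy sketch (not the paper's simulation): a bipartite frequency-entangled
# state |psi> = sum_{jk} C[j,k] |omega_j>_s |omega_k>_i is described by a
# joint amplitude matrix C. The Schmidt coefficients are the singular values
# of C, and the effective (Schmidt) dimension is K = 1 / sum_n p_n**2, where
# p_n are the normalized squared singular values.
def effective_dimension(C):
    C = C / np.linalg.norm(C)             # normalize the joint amplitude
    s = np.linalg.svd(C, compute_uv=False)
    p = s**2                              # Schmidt probabilities, sum to 1
    return 1.0 / np.sum(p**2)

# Example: a comb of 7 equally weighted, perfectly correlated frequency
# modes has Schmidt rank (and effective dimension) 7.
print(effective_dimension(np.eye(7)))     # ≈ 7
```

A separable state (rank-one amplitude matrix) gives effective dimension 1, so values above one certify frequency entanglement.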

In a notable advance for quantum optics and communication, this research unveils a high-performance telecom-wavelength biphoton source from a hot ^{87}Rb atomic vapor cell. With remarkable advantages, including compatibility with existing telecom networks, seamless long-distance communication, exceptional efficiency, and minimal noise, the work paves the way for the realization of optical-fiber-based quantum communications and networks.

Abstract

Telecom-band quantum light sources are critical to the development of long-distance quantum communication technologies. A high-performance telecom-wavelength biphoton source from a hot ^{87}Rb atomic vapor cell is reported. Time-correlated biphotons are generated from the cascade-type 5S_{1/2}–5P_{3/2}–4D_{5/2} transition of ^{87}Rb via a spontaneous four-wave mixing process. A maximum biphoton cross-correlation value of $g_{\mathrm{SI}}^{(2)}(\tau) = 44(3)$ is achieved under the condition of a high optical depth of 112(3), including two-photon absorption, with a spectral width of approximately 300 MHz. The biphoton coincidence count rate is estimated to be on the order of 38 000 cps mW^{−1}. It is believed that the telecom-wavelength biphoton source from an atomic vapor cell can be applied in long-distance quantum networks and practical quantum repeaters based on atom–photon interactions.
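The normalized cross-correlation quoted above can be estimated from measured count rates. A back-of-envelope sketch (all numbers below are illustrative assumptions, not the paper's data): the accidental coincidence rate of uncorrelated detections is the product of the singles rates and the coincidence window, and $g^{(2)}_{\mathrm{SI}}$ is the measured coincidence rate normalized by it:

```python
# Back-of-envelope sketch with hypothetical rates (not the paper's data):
# the normalized signal-idler cross-correlation can be estimated as
#   g2 = R_si / (R_s * R_i * dt),
# where R_si is the coincidence rate within a window dt, and R_s, R_i are
# the singles rates. g2 >> 2 indicates strongly nonclassical correlations.
def g2_from_rates(r_coinc, r_signal, r_idler, window):
    accidental_rate = r_signal * r_idler * window  # uncorrelated coincidences
    return r_coinc / accidental_rate

# Hypothetical numbers: 38_000 coincidences/s, 1e6 singles/s per arm, 1 ns window
print(g2_from_rates(38_000, 1e6, 1e6, 1e-9))  # ≈ 38
```

The classical (thermal-light) bound is $g^{(2)} \le 2$, so values far above 2, as in the reported 44(3), certify nonclassical time correlations.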

A capacitively-coupled coplanar waveguide microwave resonator is fabricated and characterized, revealing an unconventional reduction of loss with decreasing temperature below 50 mK at low photon numbers. This anomalous behavior is attributed to the response bandwidth of a single two-level system (TLS) dropping below the TLS-resonance detuning at low temperatures, reducing the intrinsic loss of the resonator.

Abstract

Superconducting resonators are widely used in many applications, such as qubit readout for quantum computing and kinetic inductance detectors. These resonators are susceptible to numerous loss and noise mechanisms, especially dissipation due to two-level systems (TLS), which becomes the dominant source of loss in the few-photon, low-temperature regime. In this study, capacitively coupled aluminum half-wavelength coplanar waveguide resonators are investigated. Surprisingly, the loss of the resonators is observed to decrease with decreasing temperature at low excitation powers and at temperatures below TLS saturation. This behavior is attributed to the reduction of the TLS resonant response bandwidth, with decreasing temperature and power, to below the detuning between the TLS and the resonant photon frequency in a discrete ensemble of TLS. When the response bandwidths of the TLS are smaller than their detunings from the resonance, the resonant response, and thus the loss, is reduced. At higher excitation powers, the loss follows a logarithmic power dependence, consistent with predictions from the generalized tunneling model (GTM). A model combining the discrete TLS ensemble with the GTM is proposed and matches the temperature and power dependence of the measured internal loss of the resonator with reasonable parameters.
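For contrast with the anomalous behavior reported here, a minimal sketch of the *conventional* standard tunneling model (STM) prediction, with illustrative parameter values (not fitted parameters from this work). In the STM, the TLS loss tangent rises as temperature drops (TLS unsaturate) and falls with drive power, whereas the paper observes loss falling with decreasing temperature:

```python
import numpy as np

# Standard tunneling model (STM) sketch of TLS-induced loss:
#   delta(P, T) = delta0 * tanh(hbar*w / (2*kB*T)) / sqrt(1 + P/Pc).
# In the STM, loss *increases* as temperature drops (TLS unsaturate),
# the opposite of the low-temperature reduction observed in this work.
# delta0, Pc, and omega below are illustrative assumptions.
hbar, kB = 1.054571817e-34, 1.380649e-23

def stm_loss(P, T, delta0=1e-5, Pc=1.0, omega=2*np.pi*5e9):
    return delta0 * np.tanh(hbar*omega / (2*kB*T)) / np.sqrt(1 + P/Pc)

T = np.array([0.02, 0.05, 0.1, 0.2])            # kelvin
assert np.all(np.diff(stm_loss(1e-3, T)) < 0)   # STM: loss falls as T rises

P = np.logspace(-3, 3, 7)                       # drive power sweep
assert np.all(np.diff(stm_loss(P, 0.02)) < 0)   # loss falls as power rises
```

The paper's GTM-based analysis replaces the square-root saturation with a logarithmic power dependence at high drive; the sketch above is only the textbook baseline the measurement deviates from.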

We study the generation of two-qudit entangling quantum logic gates using two techniques in quantum optimal control. We take advantage of both continuous, Lie algebraic control and digital, Lie group control. In both cases, the key is access to a time-dependent Hamiltonian, which can generate an arbitrary unitary matrix in the group SU(${d}^{2}$). We find efficient protocols for creating high-fidelity entangling gates. As a test of our theory, we study the case of qudits robustly encoded in nuclear spins of alkaline-earth atoms and manipulated with magnetic and optical fields, with entangling interactions arising from the well-known Rydberg blockade. We apply this in a case study based on a $d=10$ dimensional qudit encoded in the $I=9/2$ nuclear spin in ${}^{87}\mathrm{Sr}$, controlled through a combination of nuclear spin resonance, a tensor ac-Stark shift, and Rydberg dressing, which allows us to generate an arbitrary symmetric entangling two-qudit gate, such as CPhase. Our techniques can be used to implement qudit entangling gates for any $2\le d\le 10$ encoded in the nuclear spin. We also study how decoherence due to the finite lifetime of the Rydberg states affects the creation of the CPhase gate and find, through numerical optimization, fidelities of $0.9985$, $0.9980$, $0.9942$, and $0.9800$ for $d=2$, $d=3$, $d=5$, and $d=7$, respectively. This provides a powerful platform to explore the various applications of quantum information processing of qudits, including metrological enhancement with qudits, quantum simulation, universal quantum computation, and quantum error correction.
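A minimal sketch of the target gate itself (not the optimal-control protocol that synthesizes it): the two-qudit CPhase acts diagonally as $\mathrm{CPhase}\,|j,k\rangle = e^{2\pi i jk/d}\,|j,k\rangle$, a symmetric entangling gate that reduces to CZ for $d=2$:

```python
import numpy as np

# Sketch of the target two-qudit gate (not the paper's control protocol):
# the qudit CPhase acts on |j>|k> as
#   CPhase |j,k> = exp(2*pi*1j * j*k / d) |j,k>,
# a diagonal, symmetric entangling gate generalizing CZ (the d = 2 case).
def cphase(d):
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    phases = np.exp(2j * np.pi * j * k / d).ravel()
    return np.diag(phases)

U = cphase(10)                                       # d = 10, as for I = 9/2
assert np.allclose(U.conj().T @ U, np.eye(100))      # unitary
assert np.allclose(cphase(2), np.diag([1, 1, 1, -1]))  # reduces to CZ
```

The gate is symmetric under exchange of the two qudits (the phase depends only on the product $jk$), matching the symmetric entangling interaction produced by Rydberg dressing.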

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Projective measurement, a basic operation in quantum mechanics, can induce seemingly nonlocal effects. In this work, we analyze such effects in many-body systems by studying the nonequilibrium dynamics of weakly monitored quantum circuits, focusing on entanglement generation and information spreading. We find that, due to measurements, the entanglement dynamics in monitored circuits is indeed “faster” than that of unitary ones in several ways. Specifically, we find that a pair of well-separated regions can become entangled in a time scale ${\ell}^{2/3}$, sublinear in their distance $\ell $. For the case of Clifford monitored circuits, this originates from superballistically growing stabilizer generators of the evolving state. In addition, we find initially local information can spread superlinearly as ${t}^{3/2}$. Furthermore, by viewing the dynamics as a dynamical encoding process, we show that the superlinear growing length scale relates to an encoding time that is sublinear in system size. To quantify the information dynamics, we develop a formalism generalizing operator spreading to nonunitary dynamics, which is of independent interest.

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

This is a brief review on the principle, categories, and applications of quantum metrology. Special attention is paid to different quantum resources that can bring quantum superiority in enhancing sensitivity. Then, the paper reviews the no-go theorem of noisy quantum metrology and its active control under different noise-induced decoherence situations.

Abstract

Quantum metrology pursues the physical realization of higher-precision measurements of physical quantities than the classically achievable limit by exploiting quantum features, such as entanglement and squeezing, as resources. It has potential applications in developing next-generation frequency standards, magnetometers, radar, and navigation. However, the ubiquitous decoherence in the quantum world degrades the quantum resources and forces the precision back to, or even below, the classical limit; this is called the no-go theorem of noisy quantum metrology and greatly hinders its applications. Therefore, how to realize the promised performance of quantum metrology in realistic noisy situations has attracted much attention in recent years. The principle, categories, and applications of quantum metrology are reviewed. Special attention is paid to different quantum resources that can bring quantum superiority in enhancing sensitivity. Then, the no-go theorem of noisy quantum metrology and its active control under different kinds of noise-induced decoherence situations are introduced.
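The quantum superiority mentioned above can be made concrete with a standard textbook example (a sketch, not drawn from the review): for phase estimation with generator $J_z = \frac{1}{2}\sum_i Z_i$, the quantum Fisher information of a pure probe is $F = 4\,\mathrm{Var}(J_z)$; a product state reaches the standard quantum limit $F = N$, while an entangled GHZ state reaches the Heisenberg limit $F = N^2$:

```python
import numpy as np
from functools import reduce

# Textbook illustration of entanglement-enhanced sensitivity: for phase
# estimation with generator Jz = (1/2) * sum_i Z_i, a pure probe has quantum
# Fisher information F = 4 * Var(Jz). A product state gives the standard
# quantum limit F = N; the GHZ state gives the Heisenberg limit F = N**2.
def qfi_phase(state, N):
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    Jz = sum(reduce(np.kron, [Z if i == j else I for i in range(N)])
             for j in range(N)) / 2
    mean = state.conj() @ Jz @ state
    var = state.conj() @ Jz @ Jz @ state - mean**2
    return float(4 * var.real)

N = 4
plus = np.ones(2) / np.sqrt(2)
product = reduce(np.kron, [plus] * N)                # |+>^N probe
ghz = np.zeros(2**N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
assert abs(qfi_phase(product, N) - N) < 1e-9         # standard quantum limit
assert abs(qfi_phase(ghz, N) - N**2) < 1e-9          # Heisenberg limit
```

Via the quantum Cramér-Rao bound, the estimation uncertainty scales as $1/\sqrt{F}$, so the GHZ probe improves precision from $1/\sqrt{N}$ to $1/N$; decoherence typically destroys exactly this advantage, which is the content of the no-go theorem discussed above.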

This paper tackles light detection and ranging challenges, delving into laser rangefinder principles, optical modulation, and advancements in laser technology. The promising photonic crystal surface emitting laser is a standout. Progress in scanning stability and angles is discussed as well, along with advancements in metasurface technology, which enhance beam deflection and field of view.

Abstract

The light detection and ranging (LiDAR) sensor is widely recognized as a critical component for accurate perception. However, a host of challenges impede its performance, including low spatial resolution, high cost, large size, low reliability, and susceptibility to interference. Overcoming these issues with a single LiDAR module is challenging, necessitating a review of current LiDAR technologies. The paper commences by introducing the fundamental principles of various laser rangefinders and discussing the optical modulation technologies used to prevent interference and ghost images. Next, the paper delves into the latest developments in laser technology, with a focus on enhancing the switching rate, compliance with eye-safety regulations, miniaturization, and improved stability. One highly promising innovation is the photonic crystal surface emitting laser (PCSEL), a novel light source that boasts high speed, small divergence angles, and high-power output. Finally, the paper discusses the advancements made in non-solid-state and solid-state scanning, such as improving stability, increasing scanning angles, and optimizing the manufacturing of mechanical and micro-electromechanical systems (MEMS). Additionally, the paper highlights recent advancements in nanotechnology, specifically metasurface technology, which offers superior capabilities such as beam deflection, enhanced field of view (FOV), and dynamic modulation.

There is vast opportunity for nanoscale innovation to transform the world in positive ways, said MIT.nano Director Vladimir Bulović as he posed two questions to attendees at the start of the inaugural Nano Summit: “Where are we heading? And what is the next big thing we can develop?”

“The answer to that puts into perspective our main purpose — and that is to change the world,” Bulović, the Fariborz Maseeh Professor of Emerging Technologies, told an audience of more than 325 in-person and 150 virtual participants gathered for an exploration of nano-related research at MIT and a celebration of MIT.nano’s fifth anniversary.

Over a decade ago, MIT embarked on a massive project for the ultra-small — building an advanced facility to support research at the nanoscale. Construction of MIT.nano in the heart of MIT’s campus, a process compared to assembling a ship in a bottle, began in 2015, and the facility launched in October 2018.

Fast forward five years: MIT.nano now contains nearly 170 tools and instruments serving more than 1,200 trained researchers. These individuals come from over 300 principal investigator labs, representing more than 50 MIT departments, labs, and centers. The facility also serves external users from industry, other academic institutions, and over 130 startup and multinational companies.

A cross section of these faculty and researchers joined industry partners and MIT community members to kick off the first Nano Summit, which is expected to become an annual flagship event for MIT.nano and its industry consortium. Held on Oct. 24, the inaugural conference was co-hosted by the MIT Industrial Liaison Program.

Six topical sessions highlighted recent developments in quantum science and engineering, materials, advanced electronics, energy, biology, and immersive data technology. The Nano Summit also featured startup ventures and an art exhibition.

Seeing and manipulating at the nanoscale — and beyond

“We need to develop new ways of building the next generation of materials,” said Frances Ross, the TDK Professor in Materials Science and Engineering (DMSE). “We need to use electron microscopy to help us understand not only what the structure is after it’s built, but how it came to be. I think the next few years in this piece of the nano realm are going to be really amazing.”

Speakers in the session “The Next Materials Revolution,” chaired by MIT.nano co-director for Characterization.nano and associate professor in DMSE James LeBeau, highlighted areas in which cutting-edge microscopy provides insights into the behavior of functional materials at the nanoscale, from anti-ferroelectrics to thin-film photovoltaics and 2D materials. They shared images and videos collected using the instruments in MIT.nano’s characterization suites, which were specifically designed and constructed to minimize mechanical-vibrational and electromagnetic interference.

Later, in the “Biology and Human Health” session chaired by Boris Magasanik Professor of Biology Thomas Schwartz, biologists echoed the materials scientists, stressing the importance of the ultra-quiet, low-vibration environment in Characterization.nano to obtain high-resolution images of biological structures.

“Why is MIT.nano important for us?” asked Schwartz. “An important element of biology is to understand the structure of biological macromolecules. We want to get to an atomic resolution of these structures. CryoEM (cryo-electron microscopy) is an excellent method for this. In order to enable the resolution revolution, we had to get these instruments to MIT. For that, MIT.nano was fantastic.”

Seychelle Vos, the Robert A. Swanson (1969) Career Development Professor of Life Sciences, shared CryoEM images from her lab’s work, followed by biology Associate Professor Joey Davis, who spoke about image processing. When asked about the next stage for CryoEM, Davis said he’s most excited about in-situ tomography, noting that new instruments are being designed that will improve the current labor-intensive process.

To chart the future of energy, chemistry associate professor Yogi Surendranath is also using MIT.nano to see what is happening at the nanoscale in his research to use renewable electricity to change carbon dioxide into fuel.

“MIT.nano has played an immense role, not only in facilitating our ability to make nanostructures, but also to understand nanostructures through advanced imaging capabilities,” said Surendranath. “I see a lot of the future of MIT.nano around the question of how nanostructures evolve and change under the conditions that are relevant to their function. The tools at MIT.nano can help us sort that out.”

Tech transfer and quantum computing

The “Advanced Electronics” session chaired by Jesús del Alamo, the Donner Professor of Science in the Department of Electrical Engineering and Computer Science (EECS), brought together industry partners and MIT faculty for a panel discussion on the future of semiconductors and microelectronics. “Excellence in innovation is not enough, we also need to be excellent in transferring these to the marketplace,” said del Alamo. On this point, panelists spoke about strengthening the industry-university connection, as well as the importance of collaborative research environments and of access to advanced facilities, such as MIT.nano, for these environments to thrive.

The session came on the heels of a startup exhibit in which eleven START.nano companies presented their technologies in health, energy, climate, and virtual reality, among other topics. START.nano, MIT.nano’s hard-tech accelerator, provides participants use of MIT.nano’s facilities at a discounted rate and access to MIT’s startup ecosystem. The program aims to ease hard-tech startups’ transition from the lab to the marketplace, surviving common “valleys of death” as they move from idea to prototype to scaling up.

When asked about the state of quantum computing in the “Quantum Science and Engineering” session, physics professor Aram Harrow related his response to these startup challenges. “There are quite a few valleys to cross — there are the technical valleys, and then also the commercial valleys.” He spoke about scaling superconducting qubits and qubits made of suspended trapped ions, and the need for more scalable architectures; the ingredients exist, he said, but putting everything together is quite challenging.

Throughout the session, William Oliver, professor of physics and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science, asked the panelists how MIT.nano can address challenges in assembly and scalability in quantum science.

“To harness the power of students to innovate, you really need to allow them to get their hands dirty, try new things, try all their crazy ideas, before this goes into a foundry-level process,” responded Kevin O’Brien, associate professor in EECS. “That’s what my group has been working on at MIT.nano, building these superconducting quantum processors using the state-of-the art fabrication techniques in MIT.nano.”

Connecting the digital to the physical

In his reflections on the semiconductor industry, Douglas Carlson, senior vice president for technology at MACOM, stressed connecting the digital world to real-world application. Later, in the “Immersive Data Technology” session, MIT.nano associate director Brian Anthony explained how, at the MIT.nano Immersion Lab, researchers are doing just that.

“We think about and facilitate work that has the human immersed between hardware, data, and experience,” said Anthony, principal research scientist in mechanical engineering. He spoke about using the capabilities of the Immersion Lab to apply immersive technologies to different areas — health, sports, performance, manufacturing, and education, among others. Speakers in this session gave specific examples in hardware, pediatric health, and opera.

Anthony connected this third pillar of MIT.nano to the fab and characterization facilities, highlighting how the Immersion Lab supports work conducted in other parts of the building. The Immersion Lab’s strength, he said, is taking novel work being developed inside MIT.nano and bringing it up to the human scale to think about applications and uses.

Artworks that are scientifically inspired

The Nano Summit closed with a reception at MIT.nano where guests could explore the facility and gaze through the cleanroom windows, where users were actively conducting research. Attendees were encouraged to visit an exhibition on MIT.nano’s first- and second-floor galleries featuring work by students from the MIT Program in Art, Culture, and Technology (ACT) who were invited to utilize MIT.nano’s tool sets and environments as inspiration for art.

In his closing remarks, Bulović reflected on the community of people who keep MIT.nano running and who are using the tools to advance their research. “Today we are celebrating the facility and all the work that has been done over the last five years to bring it to where it is today. It is there to function not just as a space, but as an essential part of MIT’s mission in research, innovation, and education. I hope that all of us here today take away a deep appreciation and admiration for those who are leading the journey into the nano age.”

We develop a general perturbative theory of finite-coupling quantum thermometry up to second order in probe-sample interaction. By assumption, the probe and sample are in thermal equilibrium, so the probe is described by the mean-force Gibbs state. We prove that the ultimate thermometric precision can be achieved – to second order in the coupling – solely by means of local energy measurements on the probe. Hence, seeking to extract temperature information from coherences or devising adaptive schemes confers no practical advantage in this regime. Additionally, we provide a closed-form expression for the quantum Fisher information, which captures the probe's sensitivity to temperature variations. Finally, we benchmark and illustrate the ease of use of our formulas with two simple examples. Our formalism makes no assumptions about separation of dynamical timescales or the nature of either the probe or the sample. Therefore, by providing analytical insight into both the thermal sensitivity and the optimal measurement for achieving it, our results pave the way for quantum thermometry in setups where finite-coupling effects cannot be ignored.
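A zeroth-order sanity check of the claim that local energy measurements suffice (a sketch with the probe-sample coupling switched off, so the probe is exactly Gibbs; the paper's result extends this to second order in the coupling): for a thermal state, all temperature information sits in the populations, and the Fisher information of an energy measurement equals the quantum Fisher information $F(T) = \mathrm{Var}(H)/T^4$ (units with $k_B = 1$):

```python
import numpy as np

# Sketch at zeroth order in the coupling: for a Gibbs-state probe, the
# Fisher information of an energy (population) measurement equals the
# quantum Fisher information  F(T) = Var(H) / T**4  (units with kB = 1).
# The probe spectrum E below is an arbitrary illustrative choice.
def populations(E, T):
    w = np.exp(-E / T)
    return w / w.sum()

def energy_fisher(E, T, dT=1e-6):
    # classical Fisher information sum_n (dp_n/dT)**2 / p_n, with a
    # central finite difference for the temperature derivative
    dp = (populations(E, T + dT) - populations(E, T - dT)) / (2 * dT)
    return np.sum(dp**2 / populations(E, T))

E = np.array([0.0, 1.0, 2.5])          # illustrative probe spectrum
T = 0.7
p = populations(E, T)
var_H = np.sum(p * E**2) - np.sum(p * E)**2
assert abs(energy_fisher(E, T) - var_H / T**4) < 1e-4
```

The identity follows from $\partial_T \ln p_n = (E_n - \langle E\rangle)/T^2$ for Gibbs populations; the paper's contribution is showing that energy measurements remain optimal when second-order coupling corrections deform the probe state.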

In light of recently proposed quantum algorithms that incorporate symmetries in the hope of quantum advantage, we show that with symmetries that are restrictive enough, classical algorithms can efficiently emulate their quantum counterparts given certain classical descriptions of the input. Specifically, we give classical algorithms that calculate ground states and time-evolved expectation values for permutation-invariant Hamiltonians specified in the symmetrized Pauli basis with runtimes polynomial in the system size. We use tensor-network methods to transform symmetry-equivariant operators to the block-diagonal Schur basis that is of polynomial size, and then perform exact matrix multiplication or diagonalization in this basis. These methods are adaptable to a wide range of input and output states including those prescribed in the Schur basis, as matrix product states, or as arbitrary quantum states when given the power to apply low depth circuits and single qubit measurements.
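A small-scale illustration of the block-diagonal structure being exploited (dense linear algebra on a few qubits, not the paper's polynomial-size tensor-network Schur transform): a permutation-invariant Hamiltonian preserves the fully symmetric (Dicke) subspace, so diagonalizing its $(N{+}1)$-dimensional restriction reproduces part of the full $2^N$-dimensional spectrum:

```python
import numpy as np
from functools import reduce
from itertools import combinations

# Small-scale illustration (dense matrices, not the paper's tensor-network
# Schur transform): a permutation-invariant Hamiltonian is block diagonal
# with respect to the symmetric subspace, so its restriction to the
# (N+1)-dimensional Dicke basis yields exact eigenvalues of the full
# 2**N-dimensional operator.
X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I2 = np.eye(2)

def op_on(site_ops, N):           # tensor product, identity on other sites
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(N)])

N = 4
H = sum(op_on({i: X}, N) for i in range(N)) \
  + sum(op_on({i: Z, j: Z}, N) for i, j in combinations(range(N), 2))

# Dicke (fully symmetric) basis: normalized sums of all weight-k bitstrings
dicke = []
for k in range(N + 1):
    v = np.array([1.0 if bin(idx).count("1") == k else 0.0
                  for idx in range(2**N)])
    dicke.append(v / np.linalg.norm(v))
S = np.array(dicke).T                       # 2**N x (N+1) isometry

sym_evals = np.linalg.eigvalsh(S.T @ H @ S) # (N+1) x (N+1) symmetric block
full_evals = np.linalg.eigvalsh(H)
# every symmetric-sector eigenvalue appears in the full spectrum
assert all(np.min(np.abs(full_evals - e)) < 1e-8 for e in sym_evals)
```

The paper's point is that the full Schur basis makes *every* block polynomial in size for permutation-invariant inputs, so this exact-diagonalization step scales polynomially rather than exponentially.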

We have generalized the well-known statement that the Clifford group is a unitary 3-design to symmetric cases by extending the notion of unitary design. Concretely, we have proven that a symmetric Clifford group is a symmetric unitary 3-design if and only if the symmetry constraint is described by some Pauli subgroup. We have also found a complete and unique construction method of symmetric Clifford groups with simple quantum gates for Pauli symmetries. For the overall understanding, we have also considered physically relevant U(1) and SU(2) symmetry constraints, which cannot be described by a Pauli subgroup, and have proven that the symmetric Clifford group is a symmetric unitary 1-design but not a 2-design under those symmetries. Our findings are numerically verified by computing the frame potentials, which measure the difference in randomness between the uniform ensemble on the symmetric group of interest and the symmetric unitary group. This work will open a new perspective on quantum information processing, such as randomized benchmarking, and give a deeper understanding of many-body systems, such as monitored random circuits.
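The frame-potential diagnostic mentioned above can be checked numerically in the simplest unconstrained case (a sketch of the diagnostic itself, not of the paper's symmetric ensembles): for a unitary $t$-design the frame potential $F_t = \mathbb{E}_g |\mathrm{tr}(g)|^{2t}$ matches the Haar value, which for $t=3$, $d=2$ equals 5, and the single-qubit Clifford group is a 3-design:

```python
import numpy as np

# Frame-potential check for the single-qubit Clifford group, a known unitary
# 3-design: F_t = E_g |tr(g)|**(2t) over the group must equal the Haar value,
# which for t = 3, d = 2 is 5. (The paper applies the same diagnostic with
# the ensemble restricted by a symmetry constraint.)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])

def group_closure(gens):
    key = lambda U: tuple(np.round(U, 8).ravel())
    elems = {key(np.eye(2)): np.eye(2, dtype=complex)}
    frontier = list(elems.values())
    while frontier:                       # BFS until multiplicatively closed
        new = []
        for U in frontier:
            for g in gens:
                V = g @ U
                if key(V) not in elems:
                    elems[key(V)] = V
                    new.append(V)
        frontier = new
    return list(elems.values())

cliffords = group_closure([H, S])         # 192 matrices incl. global phases
F3 = np.mean([abs(np.trace(U))**6 for U in cliffords])
assert len(cliffords) == 192
assert abs(F3 - 5.0) < 1e-6               # Haar value for t = 3, d = 2
```

A larger $F_t$ than the Haar value signals that the ensemble fails to be a $t$-design, which is how the paper certifies that U(1)- and SU(2)-symmetric Clifford groups are not even 2-designs.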

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Older techniques inspire new discoveries for ultracold molecules

Friday, November 24, 2023

Sometimes, new scientific discoveries can be made from looking at well-known methods or experimental techniques in new ways. This is the basis for new research from Dr. Alan Jamison, a faculty member at the Institute for Quantum Computing (IQC) and the University of Waterloo’s Department of Physics and Astronomy, and his collaborators at the Massachusetts Institute of Technology (MIT).

Jamison researches ultra-cold molecules, which are made by cooling down atoms to nearly absolute zero in an atom trap. Once formed, these molecules can then be studied for applications including quantum-state-controlled chemistry, quantum simulations, and quantum information processing. One of the first great successes of cooling atoms to ultracold temperatures was the observation of the Bose-Einstein condensate. This was first achieved experimentally using magnetic atom traps in the mid 1990s by researchers including Jamison’s collaborator, Dr. Wolfgang Ketterle, for which Ketterle was awarded the 2001 Nobel Prize in Physics.

Since this time, however, while magnetic traps are sometimes used during the process of cooling atoms, it has become more common for researchers to use optical lasers to trap atoms during experiments. The optical traps are faster and can trap a wider range of atoms and molecules than just those with the specific magnetic properties needed to use the magnetic traps.

“When people started making ultracold molecules, they had to be in an optical trap to hold the right atomic states to make the molecules, and so you just naturally did the experiments with the molecules also in an optical trap,” said Jamison. “But it turns out that some ultracold molecules which were expected to be chemically stable seem to be undergoing chemical reactions caused by the light from the optical traps.”

Jamison and his collaborators reasoned that if they could remove the requirement of light from their experiments by using magnetic traps, they could then study these light-induced chemical reactions in controlled environments and explore new and exciting results.

“We study one of the few ultracold molecules that can be magnetically trapped, which gave us the freedom to study these older techniques in new ways,” said Jamison. “It’s exciting looking at these reactions without having to worry about what the light is doing. On one hand, it constrains us to only work with states that are magnetically trappable, but on the other hand it removes the constraint that we always need to have light on in the background.”

To combine the best properties of magnetic and optical traps, the experiment used both trapping techniques in a new combined design that removed the need to move atoms between the different trap types. Atoms of sodium and lithium were cooled to ultracold temperatures using a combination of magnetic and optical cooling techniques. Optical trapping was necessary to form the ultracold NaLi molecules; once formed, however, the molecules could be trapped magnetically, so the laser light was removed.

The researchers used their newly developed trapping design to measure inelastic collisions of the molecules as a proof of concept. Their success is now inspiring studies focused on a variety of different effects, such as studying how molecules respond to the introduction of light, studying the previously problematic light-induced chemical reactions in controlled ways, or seeing if the lifetime of these ultracold molecules can be prolonged with the different trapping method.

“By looking at what's considered an older way of doing things, we're finding that we have new possibilities for the future and how we work with our molecules,” said Jamison. “It’s important to always be looking forward, but also not lose sight of what's been done in the past. People had different interests and different focus in the past, so a lot of times, they thought through things in a way you didn't, or they've done something that you forgot could be done.”

In general, a bulk diamond sensor requires a high-power laser, which hinders the growth of NV numbers and thus limits the final sensitivity. Here, a relaxometry-based microwave magnetometer is presented whose required power density is T1-limited. By cooling the diamond sensor, the required power density is reduced to 0.077 W cm^{−2}, about 10^{−6} of the saturation value.

Abstract

The nitrogen-vacancy (NV) center in diamond is a unique magnetometer. Its atomic size enables the integration of a tremendous number (n_{NV}) of NV centers in a bulk diamond, with a sensitivity scaling as $1/\sqrt{n_{\rm NV}}$. However, such a bulk sensor requires a high-power laser to polarize and read out the NV centers. The increasing thermal damage and additional noise associated with high-power lasers hinder the growth of n_{NV} and thus limit the sensitivity to the picotesla level. Here, a relaxometry-based microwave magnetometer is demonstrated in which the required power density is determined by the relaxation time T_{1}. By cooling the diamond sensor to prolong T_{1} (≈s), the required power density is reduced to $0.077\ {\rm W\,cm^{-2}}$, approximately $10^{-6}$ of the saturation value. This work paves the way for the use of large-size diamonds to push the sensitivity of diamond magnetometers to the femtotesla level and beyond.

A novel multi-class quantum kernel-based classifier is presented. With this classifier, the number of qubits required, the measurement strategy, and the topology of the circuits used are invariant to the number of classes. Analytical results and numerical simulations show that this classifier is not only effective when applied to diverse classification problems but also robust under certain noise conditions.

Abstract

Multi-class classification problems are fundamental in many varied domains in research and industry. A popular strategy for solving multi-class classification problems involves first transforming the problem into many binary classification problems. However, this requires the number of binary classification models to grow with the number of classes. Recent work in quantum machine learning has seen the development of multi-class quantum classifiers that circumvent this growth by learning a mapping between the data and a set of label states. This work presents the first multi-class SWAP-Test classifier, inspired by its binary predecessor and the use of label states in recent work. With this classifier, the cost of developing multiple models is avoided. In contrast to previous work, the number of qubits required, the measurement strategy, and the topology of the circuits used are invariant to the number of classes. In addition, unlike other architectures for multi-class quantum classifiers, the state reconstruction of a single qubit yields sufficient information for multi-class classification tasks. Both analytical results and numerical simulations show that this classifier is not only effective when applied to diverse classification problems but also robust under certain noise conditions.
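A minimal sketch of the SWAP-test primitive underlying the classifier (binary case only; the paper's multi-class version compares the data state against several label states): measuring the ancilla of the standard Hadamard / controlled-SWAP / Hadamard circuit gives $P(0) = \tfrac12 + \tfrac12 |\langle a|b\rangle|^2$:

```python
import numpy as np

# SWAP-test sketch (binary case; the paper's classifier extends this with
# label states): the ancilla of the H / controlled-SWAP / H circuit reads
# out the overlap via  P(ancilla = 0) = 1/2 + |<a|b>|**2 / 2.
def swap_test_p0(a, b):
    d = len(a)
    # full register: |0>_ancilla (x) |a> (x) |b>
    state = np.kron([1.0, 0.0], np.kron(a, b)).astype(complex)
    Had = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(d * d))
    SWAP = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            SWAP[i * d + j, j * d + i] = 1.0       # |i,j> -> |j,i>
    CSWAP = np.block([[np.eye(d * d), np.zeros((d * d, d * d))],
                      [np.zeros((d * d, d * d)), SWAP]])
    state = Had @ CSWAP @ Had @ state
    return float(np.sum(np.abs(state[:d * d])**2))  # ancilla = 0 probability

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0]) / np.sqrt(2)
p0 = swap_test_p0(a, b)
assert abs(p0 - (0.5 + 0.5 * abs(np.vdot(a, b))**2)) < 1e-12  # 0.75 here
```

Orthogonal inputs give $P(0) = 1/2$ and identical inputs give $P(0) = 1$, so the ancilla statistics alone encode the similarity score used for classification.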

Achievability in information theory refers to demonstrating a coding strategy that attains a prescribed performance benchmark for the underlying task. In quantum information theory, the Hayashi-Nagaoka operator inequality is an essential technique for proving a wealth of one-shot achievability bounds, since it effectively plays the role of a union bound in various problems. In this work, we show that the so-called pretty-good measurement naturally serves as a union bound as well. A judicious application of it considerably simplifies the derivation of one-shot achievability for classical-quantum channel coding via an elegant three-line proof. The proposed analysis enjoys the following favorable features. (i) The established one-shot bound admits a closed-form expression as in the celebrated Holevo-Helstrom theorem. Namely, the average error probability of sending $M$ messages through a classical-quantum channel is upper bounded by the minimum error of distinguishing the joint channel input-output state against $(M-1)$ decoupled product states. (ii) Our bound directly yields asymptotic achievability results in the large deviation, small deviation, and moderate deviation regimes in a unified manner. (iii) The coefficients incurred in applying the Hayashi-Nagaoka operator inequality or the quantum union bound are no longer needed. Hence, the derived one-shot bound sharpens existing results relying on the Hayashi-Nagaoka operator inequality. In particular, we obtain the tightest achievable $\epsilon$-one-shot capacity for classical communication over quantum channels to date, improving the third-order coding rate in the asymptotic scenario. (iv) Our result holds for infinite-dimensional Hilbert spaces. (v) The proposed method applies to deriving one-shot achievability for classical data compression with quantum side information, entanglement-assisted classical communication over quantum channels, and various quantum network information-processing protocols.
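One natural way to formalize the closed-form bound described in (i) is as a binary hypothesis test between the joint input-output state $\rho_{XB}$ and the weighted product state $(M-1)\,\rho_X\otimes\rho_B$. The notation below is ours, and the paper's exact statement may differ in normalization:

```latex
\bar{P}_e(M)\;\le\;
\min_{0\,\le\,T\,\le\,\mathbb{1}}
\Big\{
\operatorname{Tr}\!\big[\rho_{XB}\,(\mathbb{1}-T)\big]
\;+\;(M-1)\operatorname{Tr}\!\big[(\rho_X\otimes\rho_B)\,T\big]
\Big\},
```

where the minimization over tests $T$ is exactly the optimization that appears in the Holevo-Helstrom theorem for (unnormalized) binary state discrimination.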

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Previously proposed quantum algorithms for solving linear systems of equations cannot be implemented in the near term due to the required circuit depth. Here, we propose a hybrid quantum-classical algorithm, called Variational Quantum Linear Solver (VQLS), for solving linear systems on near-term quantum computers. VQLS seeks to variationally prepare $|x\rangle$ such that $A|x\rangle\propto|b\rangle$. We derive an operationally meaningful termination condition for VQLS that allows one to guarantee that a desired solution precision $\epsilon$ is achieved. Specifically, we prove that $C \geqslant \epsilon^2 / \kappa^2$, where $C$ is the VQLS cost function and $\kappa$ is the condition number of $A$. We present efficient quantum circuits to estimate $C$, while providing evidence for the classical hardness of its estimation. Using Rigetti's quantum computer, we successfully implement VQLS up to a problem size of $1024\times1024$. Finally, we numerically solve non-trivial problems of size up to $2^{50}\times2^{50}$. For the specific examples that we consider, we heuristically find that the time complexity of VQLS scales efficiently in $\epsilon$, $\kappa$, and the system size $N$.
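The guarantee $C \geq \epsilon^2/\kappa^2$ can be checked numerically on small classical instances. The sketch below uses a global cost of the form $C = 1 - |\langle b|\Psi\rangle|^2/\langle\Psi|\Psi\rangle$ with $|\Psi\rangle = A|x\rangle$, and measures $\epsilon$ as the trace distance between the normalized candidate and the true normalized solution; this is our reading of the global cost, and the paper also studies local variants:

```python
import numpy as np

rng = np.random.default_rng(0)

def vqls_cost(A, x, b):
    """Global VQLS-style cost: C = 1 - |<b|Psi>|^2 / <Psi|Psi>, Psi = A x."""
    psi = A @ x
    b_hat = b / np.linalg.norm(b)
    return 1.0 - abs(b_hat @ psi) ** 2 / (psi @ psi)

def trace_distance_pure(u, v):
    """Trace distance between pure states |u><u| and |v><v|."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.sqrt(max(0.0, 1.0 - abs(u @ v) ** 2))

for _ in range(100):
    A = rng.normal(size=(8, 8))
    b = rng.normal(size=8)
    x = rng.normal(size=8)  # arbitrary candidate solution
    kappa = np.linalg.cond(A)
    C = vqls_cost(A, x, b)
    eps = trace_distance_pure(x, np.linalg.solve(A, b))
    assert C >= eps ** 2 / kappa ** 2 - 1e-9  # C >= eps^2 / kappa^2
print("bound held on all random instances")
```

Note that both sides of the inequality are invariant under rescaling $A$, so no normalization of the matrix is needed for this check.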

Quantum systems are inherently open and susceptible to environmental noise, which can have both detrimental and beneficial effects on their dynamics. This phenomenon has been observed in biomolecular systems, where noise enables novel functionalities, making the simulation of their dynamics a crucial target for digital and analog quantum simulation. Nevertheless, the computational capabilities of current quantum devices are often limited due to their inherent noise. In this work, we present a novel approach that capitalizes on the intrinsic noise of quantum devices to reduce the computational resources required for simulating open quantum systems. Our approach combines quantum noise characterization methods with quantum error mitigation techniques, enabling us to manipulate and control the intrinsic noise in a quantum circuit. Specifically, we selectively enhance or reduce decoherence rates in the quantum circuit to achieve the desired simulation of open-system dynamics. We provide a detailed description of our methods and report on the results of noise characterization and quantum error mitigation experiments conducted on both real and emulated IBM Quantum computers. Additionally, we estimate the experimental resource requirements for our techniques. Our approach holds the potential to unlock new simulation techniques in noisy intermediate-scale quantum devices, harnessing their intrinsic noise to enhance quantum computations.
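The idea of dialing decoherence rates up or down can be illustrated with a single-qubit amplitude-damping channel. This is a generic textbook channel chosen purely for illustration; the paper's characterization and mitigation pipeline targets the intrinsic noise of real devices:

```python
import numpy as np

def amplitude_damping(rho: np.ndarray, gamma: float) -> np.ndarray:
    """Apply a single-qubit amplitude-damping channel of strength gamma
    via its Kraus operators K0, K1."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Start in the excited state |1><1|; the excited population decays as (1 - gamma).
rho = np.array([[0.0, 0.0], [0.0, 1.0]])
for gamma in (0.0, 0.1, 0.5):
    out = amplitude_damping(rho, gamma)
    print(gamma, out[1, 1].real)  # gamma=0.0 -> 1.0, 0.1 -> 0.9, 0.5 -> 0.5
```

In a simulation of open-system dynamics, tuning such a channel parameter per time step is the role that the characterized and mitigated device noise plays in the approach described above.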


Quantum state discrimination (QSD) is a fundamental task in quantum information processing with numerous applications. We present a variational quantum algorithm that performs the minimum-error QSD, called the variational quantum state discriminator (VQSD). The VQSD uses a parameterized quantum circuit that is trained by minimizing a cost function derived from the QSD, and finds the optimal positive-operator valued measure (POVM) for distinguishing target quantum states. The VQSD is capable of discriminating even unknown states, eliminating the need for expensive quantum state tomography. Our numerical simulations and comparisons with semidefinite programming demonstrate the effectiveness of the VQSD in finding optimal POVMs for minimum-error QSD of both pure and mixed states. In addition, the VQSD can be utilized as a supervised machine learning algorithm for multi-class classification. The area under the receiver operating characteristic curve obtained in numerical simulations with the Iris flower dataset ranges from 0.97 to 1 with an average of 0.985, demonstrating excellent performance of the VQSD classifier.
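The benchmark such a discriminator aims for is the Helstrom bound: for two states $\rho_0,\rho_1$ with equal priors, the minimum error probability is $\tfrac12\big(1-\tfrac12\|\rho_0-\rho_1\|_1\big)$. A small NumPy sketch of that baseline follows; it is illustrative only, since the VQSD itself reaches this bound by training a parameterized circuit rather than by diagonalization:

```python
import numpy as np

def helstrom_error(rho0: np.ndarray, rho1: np.ndarray) -> float:
    """Minimum error probability for discriminating rho0 vs rho1 with
    equal priors: 1/2 * (1 - 1/2 * ||rho0 - rho1||_1)."""
    eigvals = np.linalg.eigvalsh(rho0 - rho1)  # difference is Hermitian
    trace_norm = np.sum(np.abs(eigvals))
    return 0.5 * (1.0 - 0.5 * trace_norm)

# Orthogonal pure states are perfectly distinguishable: error 0.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])
print(helstrom_error(rho0, rho1))  # 0.0

# Identical states are indistinguishable: error 1/2 (a coin flip).
print(helstrom_error(rho0, rho0))  # 0.5
```

Comparing a trained discriminator's empirical error against this closed-form value is a natural way to validate that the learned POVM is near-optimal.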