
Research Roundup for November 2023

By: dougfinke
29 November 2023 at 15:47

By Dr Chris Mansell, Senior Scientific Writer at Terra Quantum. Shown below are summaries of a few interesting research papers in quantum technology that we have seen over the past month. Hardware. Title: Electron charge qubit with 0.1 millisecond coherence time. Organizations: Argonne National Laboratory; University of Chicago; Lawrence Berkeley National Laboratory; The NSF AI Institute [...]

The post Research Roundup for November 2023 appeared first on Quantum Computing Report.

Quantum Deep Hedging

Quantum 7, 1191 (2023).

https://doi.org/10.22331/q-2023-11-29-1191

Quantum machine learning has the potential for a transformative impact across industry sectors and in particular in finance. In our work we look at the problem of hedging where deep reinforcement learning offers a powerful framework for real markets. We develop quantum reinforcement learning methods based on policy-search and distributional actor-critic algorithms that use quantum neural network architectures with orthogonal and compound layers for the policy and value functions. We prove that the quantum neural networks we use are trainable, and we perform extensive simulations that show that quantum models can reduce the number of trainable parameters while achieving comparable performance and that the distributional approach obtains better performance than other standard approaches, both classical and quantum. We successfully implement the proposed models on a trapped-ion quantum processor, utilizing circuits with up to 16 qubits, and observe performance that agrees well with noiseless simulation. Our quantum techniques are general and can be applied to other reinforcement learning problems beyond hedging.
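
As context for the hedging task itself, here is a minimal classical deep-hedging sketch: a simulated market, a placeholder hedge policy, and the P&L objective with transaction costs that a reinforcement-learning agent would optimize. The market model, all parameters, and the linear `policy` function are illustrative assumptions only; the paper's quantum policies with orthogonal and compound layers would take the place of `policy`.

```python
import numpy as np

# Minimal classical deep-hedging sketch (illustrative assumptions throughout).
rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 30
mu, sigma, s0, strike, cost = 0.0, 0.2, 100.0, 100.0, 5e-4

# Simulated geometric Brownian motion stock paths (assumed market model)
dt = 1.0 / 252
z = rng.standard_normal((n_paths, n_steps))
log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
s = np.hstack([np.full((n_paths, 1), s0), s0 * np.exp(np.cumsum(log_ret, axis=1))])

def policy(spot, t):
    # Placeholder linear policy; a quantum or neural policy would map
    # market features to a hedge ratio here.
    return np.clip(0.5 + 0.02 * (spot - strike), 0.0, 1.0)

# Hedged P&L for a short call option: hedge gains minus proportional costs.
pnl = -np.maximum(s[:, -1] - strike, 0.0)        # option payoff (short position)
prev_h = np.zeros(n_paths)
for t in range(n_steps):
    h = policy(s[:, t], t)
    pnl += h * (s[:, t + 1] - s[:, t])           # hedge gains over the step
    pnl -= cost * np.abs(h - prev_h) * s[:, t]   # transaction cost on rebalancing
    prev_h = h
pnl -= cost * np.abs(prev_h) * s[:, -1]          # cost of unwinding the final position

worst_5pct = np.sort(pnl)[: int(0.05 * n_paths)]
print(f"mean P&L {pnl.mean():.2f}, CVaR(95%) {worst_5pct.mean():.2f}")
```

A risk-sensitive objective such as the tail average printed above is the kind of quantity the distributional actor-critic variants described in the abstract are designed to optimize.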

Generation of Quantum Optical Frequency Combs in Topological Resonators

Topological quantum frequency combs are a burgeoning topic combining topological phases with quantum systems, sparking intriguing developments in topological quantum optics. Producing quantum frequency combs in valley photonic crystal topological resonators can impart robustness to quantum states in integrated photonic devices.


Abstract

Recent advances in manipulating topological phases in quantum systems have promised integrated quantum devices with conspicuous functionalities, such as robustness against fabrication defects. At the same time, the introduction of quantum frequency combs enables extreme expansion of quantum resources. Here, the generation of high-dimensional entangled quantum frequency combs via four-wave mixing processes in valley-Hall topological resonators is theoretically proposed. Specifically, two irregular photonic crystal resonators are demonstrated that support whispering-gallery resonator modes, leading to coherent quantum frequency combs at telecommunication wavelengths. Using the Schmidt decomposition, it is shown that the quantum frequency combs are frequency entangled, and the effective dimensions of the quantum frequency combs in these two resonators are found to be at least seven and six, respectively. Moreover, these quantum frequency combs inherit the topological protection of valley kink states, showing robustness against defects in the resonators. Topological quantum frequency combs thus show intriguing potential for the generation and control of topologically protected high-dimensional quantum states in integrated photonic crystal platforms.
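
To make the notion of an effective dimension concrete, the sketch below computes the Schmidt decomposition of a toy joint spectral amplitude and the resulting effective Schmidt number. The Gaussian-correlated amplitude is an illustrative assumption, not the paper's simulated comb spectra.

```python
import numpy as np

# Toy joint spectral amplitude (JSA) on a discrete frequency grid.
# The Gaussian-correlated form below is illustrative, not the paper's comb.
w = np.linspace(-3, 3, 200)
ws, wi = np.meshgrid(w, w, indexing="ij")
jsa = np.exp(-((ws + wi) ** 2) / 0.1) * np.exp(-((ws - wi) ** 2) / 4.0)

# Schmidt decomposition = singular value decomposition of the JSA matrix.
sv = np.linalg.svd(jsa, compute_uv=False)
lam = sv**2 / np.sum(sv**2)                    # normalized Schmidt coefficients

schmidt_number = 1.0 / np.sum(lam**2)          # effective dimension K = 1 / sum(lambda_k^2)
entropy = -np.sum(lam * np.log2(lam + 1e-16))  # entanglement entropy in bits

print(f"effective Schmidt number K ≈ {schmidt_number:.2f}")
print(f"entanglement entropy ≈ {entropy:.2f} bits")
```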

High-Performance Telecom-Wavelength Biphoton Source from a Hot Atomic Vapor Cell

Introducing a groundbreaking achievement in the field of quantum optics and communication, the research unveils a high-performance telecom-wavelength biphoton source from a hot 87Rb atomic vapor cell. With its remarkable advantages of compatibility with existing telecom networks, seamless long-distance communication, exceptional efficiency, and minimal noise, the work paves the way for the realization of optical-fiber-based quantum communications and networks.


Abstract

Telecom-band quantum light sources are critical to the development of long-distance quantum communication technologies. A high-performance telecom-wavelength biphoton source from a hot 87Rb atomic vapor cell is reported. Time-correlated biphotons are generated from the cascade-type 5S1/2–5P3/2–4D5/2 transition of 87Rb via a spontaneous four-wave mixing process. A maximum biphoton cross-correlation value of $g_{\mathrm{SI}}^{(2)}(\tau) = 44(3)$ is achieved under the condition of a high optical depth of 112(3), including two-photon absorption, with a spectral width of approximately 300 MHz. The biphoton coincidence count rate is estimated to be on the order of 38,000 cps per mW. It is believed that the telecom-wavelength biphoton source from an atomic vapor cell can be applied in long-distance quantum networks and practical quantum repeaters based on atom–photon interactions.
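
As a rough illustration of the figure of merit reported above, the sketch below estimates a normalized signal-idler cross-correlation from a coincidence histogram by dividing measured coincidences by the accidentals expected from the singles rates. All count rates and histogram values are invented for illustration and are not the paper's data.

```python
import numpy as np

# Estimate g2_SI(tau) from a coincidence histogram (all numbers invented).
bin_width = 1e-9                 # 1 ns coincidence bins (s)
T = 10.0                         # total acquisition time (s)
rate_s, rate_i = 50e3, 48e3      # signal / idler singles rates (counts per s)

# Hypothetical coincidence histogram versus delay tau (counts per bin over T)
tau = np.arange(-20, 21) * bin_width
coincidences = 5 + 400 * np.exp(-np.abs(tau) / 3e-9)    # peak at tau = 0

# Accidental (uncorrelated) coincidences expected per bin over the same time T
accidentals = rate_s * rate_i * bin_width * T

g2_SI = coincidences / accidentals
print(f"max g2_SI(tau) ≈ {g2_SI.max():.1f}")  # values well above 2 indicate nonclassical correlation
```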

Anomalous Loss Reduction Below Two-Level System Saturation in Aluminum Superconducting Resonators

A capacitively-coupled coplanar waveguide microwave resonator is fabricated and characterized, revealing an unconventional reduction of loss with decreasing temperature below 50 mK at low photon numbers. This anomalous behavior is attributed to the response bandwidth of a single two-level system (TLS) dropping below the TLS-resonance detuning at low temperatures, reducing the intrinsic loss of the resonator.


Abstract

Superconducting resonators are widely used in many applications such as qubit readout for quantum computing and kinetic inductance detectors. These resonators are susceptible to numerous loss and noise mechanisms, especially dissipation due to two-level systems (TLS), which become the dominant source of loss in the few-photon, low-temperature regime. In this study, capacitively coupled aluminum half-wavelength coplanar waveguide resonators are investigated. Surprisingly, the loss of the resonators is observed to decrease with decreasing temperature at low excitation powers, below TLS saturation. This behavior is attributed to the reduction of the TLS resonant response bandwidth with decreasing temperature and power to below the detuning between the TLS and the resonant photon frequency in a discrete ensemble of TLS. When the response bandwidths of TLS are smaller than their detunings from the resonance, the resonant response and thus the loss is reduced. At higher excitation powers, the loss follows a logarithmic power dependence, consistent with predictions from the generalized tunneling model (GTM). A model combining the discrete TLS ensemble with the GTM is proposed and matches the temperature and power dependence of the measured internal loss of the resonator with reasonable parameters.
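
For context, the standard tunneling-model expression for the power- and temperature-dependent TLS loss, against which this behavior is anomalous, has the familiar form (this is the textbook formula, not the paper's generalized model or its discrete-ensemble extension):

$$\delta_{\mathrm{TLS}}(T,\bar{n}) \;=\; F\,\delta^{0}_{\mathrm{TLS}}\,\frac{\tanh\!\left(\frac{\hbar\omega}{2 k_B T}\right)}{\sqrt{1 + \bar{n}/n_c}},$$

where $F$ is the filling factor, $\delta^{0}_{\mathrm{TLS}}$ the intrinsic TLS loss tangent, $\bar{n}$ the mean photon number, and $n_c$ a critical photon number; generalized tunneling models replace the square-root saturation with a weaker, often logarithmic, power dependence. Because the $\tanh$ factor only grows as $T$ decreases, this expression predicts that the low-power loss increases toward saturation at lower temperatures, which is why the observed reduction in loss below 50 mK motivates the discrete-TLS treatment described above.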

Defining logical qubits: criteria for Resilient Quantum Computation

28 November 2023 at 17:00

What is a logical qubit?

In June 2023, we described how quantum computing must graduate through three implementation levels (Quantum Computing Implementation Levels, QCILs) to achieve utility scale: Level 1 Foundational, Level 2 Resilient, Level 3 Scale. All quantum computing technologies today are at Level 1, and while NISQ machines are all around us, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient computation across a sea of logical qubits, something that, to the best of our current knowledge, can only be achieved with error correction and fault tolerance. Fault tolerance will be a necessary and essential ingredient in any quantum supercomputer and for any practical quantum advantage.

The first step toward the goal of reaching practical quantum advantage is to demonstrate resilient computation on a logical qubit. However, just one logical qubit will not be enough; ultimately the goal is to show that quantum error correction helps non-trivial computation instead of hindering it, and an important element of this non-triviality is the interaction between qubits and their entanglement. Demonstrating an error-corrected resilient computation, initially on two logical qubits, that outperforms the same computation on physical qubits will mark the first demonstration of a resilient computation in our field's history.

The race is on to demonstrate a resilient logical qubit, but what is a meaningful demonstration? Before our industry can declare victory on reaching Level 2 for a given quantum computing hardware and claim the demonstration of a resilient logical qubit, it is important to align on what this means.

Criteria of Level 2: resilient quantum computation

How should we define a logical qubit? The most meaningful definition of a logical qubit hinges on what one can do with that qubit: demonstrating a qubit that can only remain idle, that is, be preserved in a memory, is not meaningful if one cannot demonstrate non-trivial operations as well. Therefore, it makes sense to define a logical qubit such that it allows some non-trivial, encoded computation to be performed on it.

Distinct hardware comes with distinct native operations. This presents a significant challenge in formally defining a logical qubit; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that mark the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a "logical qubit".

Entrance criteria to Level 2

Exiting Level 1 NISQ computing and entering Level 2 Resilient quantum computing is achieved when fewer errors are observed on the output of a logical circuit using quantum error correction than on the analogous physical circuit without error correction. We argue that a demonstration of the resilient level of quantum computation must satisfy the following criteria:

  • involve at least 2 logical qubits
  • demonstrate a convincingly large separation (ideally 5-10x) of logical error rate < physical error rate on the non-trivial logical circuit
  • correct all individual circuit faults ("fault distance" must be at least 3)
  • implement a non-trivial logical operation that generates entanglement between logical qubits

The justification for these is self-evident: being able to correct errors is how resiliency is achieved, and demonstrating an improvement over physical error rates is precisely what we mean by resiliency. But we feel it is worth emphasizing the requirement for logical entanglement: our goal is to achieve advantage with a quantum computer, and an important ingredient of advantage is entanglement across at least 2 logical qubits.

The distinction between the Resilient Level and the Scale Level is also important to emphasize: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, we find it important to allow some forms of post-selection, with the following requirements:

  • post-selection acceptance criteria must be computable in real time (but may be implemented in post-processing for demonstration)
  • post-selection should be scalable (the rejection rate can be made vanishingly small)
  • if post-selection is not scalable, it must at least correct all low-weight errors in the computation (with the exception of state preparation, since post-selection in state preparation is scalable)

In other words, post-selection must be either fully compatible with scalability, or it must still allow for demonstration of the key ingredients of error correction, not simply error detection.

Measuring progress across Level 2

Once a quantum computing hardware has entered the Resilient Level, it is important to also be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale, as the requirements to reach Scale include achieving upwards of 1,000 logical qubits with logical error rates better than 10^-12, a mega-rQOPS, and more.

Progress toward Scale may be measured along four axes: universality, scalability, fidelity, and composability. We offer the following ideas to the community on how to measure progress across these four axes, such that we as a community can benchmark progress in the resilient level of utility-scale quantum computation:

  • Universality: universality typically splits into two components: Clifford group gates and non-Clifford group gates. Does one have a set of high-fidelity Clifford-complete logical operations? Does one have a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations. Of course, different hardware may employ different strategies.
  • Scalability: at its core, the resource requirement for advantage must be reasonable (i.e., a small fraction of Earth's resources or a person's lifetime). More technically, the quantum resource overhead required should scale polynomially with the target logical error rate of any quantum algorithm. Note also that some systems may achieve very high fidelity but have limited numbers of physical qubits, so that improving the error correction code in the most obvious way (increasing distance) may be difficult.
  • Fidelity: logical error rates of all operations improve with code size (sub-threshold). More strictly, one would like to see that the logical error rate is better than the physical error rate (sub-pseudothreshold). Progress on this axis can be measured with Quantum Characterization, Verification & Validation (QCVV) performed at the logical level, or with other operational tasks such as Bell inequality violations and self-testing protocols.
  • Composability: composable gadgets for all logical operations.

Criteria to advance from Level 2 to Level 3, a Quantum Supercomputer

The exit of the resilient level of logical computation, and the achievement of the world's first quantum supercomputer, will be marked by large-depth computations on high-fidelity circuits involving upwards of hundreds of logical qubits: for example, a logical circuit on ~100+ logical qubits with a universal set of composable logical operations hitting a fidelity of ~10^-8 or better. Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits with a logical error rate of 10^-12 and a mega-rQOPS. Performance of a quantum supercomputer can then be measured by reliable quantum operations per second (rQOPS).
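
To put rough numbers on the fidelity and scalability axes, here is a back-of-the-envelope sketch (not from the Microsoft post) using the commonly quoted sub-threshold scaling p_L ≈ A (p/p_th)^((d+1)/2) for a surface-code-like code. The prefactor, threshold, and physical error rate are assumed, illustrative values.

```python
# Illustrative sub-threshold scaling for a surface-code-like code:
#   p_logical ≈ A * (p_phys / p_th) ** ((d + 1) / 2)
# A, p_th and p_phys are assumed values, not measured numbers.
A, p_th = 0.1, 1e-2      # prefactor and threshold error rate (assumed)
p_phys = 1e-3            # physical error rate per operation (assumed)

def logical_error_rate(d: int) -> float:
    return A * (p_phys / p_th) ** ((d + 1) / 2)

def distance_for_target(target: float) -> int:
    # Smallest odd code distance whose logical error rate meets the target.
    d = 3
    while logical_error_rate(d) > target:
        d += 2
    return d

for target in (1e-6, 1e-9, 1e-12):
    d = distance_for_target(target)
    print(f"target p_L = {target:.0e}: distance d = {d}, "
          f"~{2 * d * d} physical qubits per logical qubit")
```

Under these assumptions, reaching the 10^-12 logical error rates quoted for Level 3 requires code distances around 20 and on the order of a thousand physical qubits per logical qubit, which is why improvements in physical error rates translate directly into lower overhead.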

Conclusion

It's no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts us on the path to ultimately achieving practical quantum advantage. If you have thoughts on these criteria for a logical qubit, or on how to measure progress, we'd love to hear from you.

The post Defining logical qubits: criteria for Resilient Quantum Computation appeared first on Microsoft Azure Quantum Blog.

AWS and NVIDIA expand partnership to advance generative AI

By: Ryan Daws
29 November 2023 at 14:30

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.

Internally, Amazon robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks


The post AWS and NVIDIA expand partnership to advance generative AI appeared first on AI News.

Qudit Entanglers Using Quantum Optimal Control

Abstract

We study the generation of two-qudit entangling quantum logic gates using two techniques in quantum optimal control. We take advantage of both continuous, Lie algebraic control and digital, Lie group control. In both cases, the key is access to a time-dependent Hamiltonian, which can generate an arbitrary unitary matrix in the group SU(d^2). We find efficient protocols for creating high-fidelity entangling gates. As a test of our theory, we study the case of qudits robustly encoded in nuclear spins of alkaline earth atoms and manipulated with magnetic and optical fields, with entangling interactions arising from the well-known Rydberg blockade. We applied this in a case study based on a d=10 dimensional qudit encoded in the I=9/2 nuclear spin in 87Sr, controlled through a combination of nuclear spin resonance, a tensor ac-Stark shift, and Rydberg dressing, which allows us to generate an arbitrary symmetric entangling two-qudit gate, such as CPhase. Our techniques can be used to implement qudit entangling gates for any 2 ≤ d ≤ 10 encoded in the nuclear spin. We also studied how decoherence due to the finite lifetime of the Rydberg states affects the creation of the CPhase gate and found, through numerical optimization, a fidelity of 0.9985, 0.9980, 0.9942, and 0.9800 for d=2, d=3, d=5, and d=7, respectively. This provides a powerful platform to explore the various applications of quantum information processing of qudits, including metrological enhancement with qudits, quantum simulation, universal quantum computation, and quantum error correction.
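
For readers unfamiliar with qudit gates, the sketch below constructs the two-qudit CPhase target unitary U|j,k> = exp(2πi jk/d)|j,k> and checks that it is unitary and maximally entangling on a product state. This shows only the target gate, not the paper's optimal-control protocol; the dimension d = 5 is an arbitrary choice.

```python
import numpy as np

# Two-qudit controlled-phase (CPhase) target gate for dimension d:
#   U|j,k> = exp(2*pi*i*j*k/d) |j,k>
# This is only the target unitary, not the paper's optimal-control pulses.
d = 5
j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
U = np.diag(np.exp(2j * np.pi * j * k / d).ravel())   # acts on the d*d-dimensional space

assert np.allclose(U @ U.conj().T, np.eye(d * d))     # unitarity check

# Apply U to the product state |+>|+> and compute the entanglement entropy.
plus = np.full(d, 1 / np.sqrt(d))
psi = U @ np.kron(plus, plus)
rho_A = psi.reshape(d, d) @ psi.reshape(d, d).conj().T   # reduced state of qudit A
evals = np.linalg.eigvalsh(rho_A)
entropy = -sum(float(x) * np.log2(float(x)) for x in evals if x > 1e-12)
print(f"d = {d}: entanglement entropy of U|+,+> = {entropy:.3f} bits (max {np.log2(d):.3f})")
```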

  • Received 22 December 2022
  • Accepted 30 October 2023

DOI: https://doi.org/10.1103/PRXQuantum.4.040333

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Published by the American Physical Society

Physics Subject Headings (PhySH)

Quantum Information, Science & Technology

Riverlane Awarded DARPA Quantum Benchmarking Program Grant

29 November 2023 at 11:32

Insider Brief

  • Riverlane has been selected for the next phase of DARPA’s Quantum Benchmarking program.
  • The program’s aim is to design key quantum computing metrics.
  • Riverlane will be working with top tier universities such as the University of Southern California and the University of Sydney.

PRESS RELEASE — Riverlane has been selected for Phase 2 of the Quantum Benchmarking program funded by the Defense Advanced Research Projects Agency (DARPA).

The aim of the DARPA Quantum Benchmarking program is to design key quantum computing metrics for practically relevant problems and estimate the required quantum and classical resources needed to reach critical performance thresholds.

Steve Brierley, CEO and Founder of Riverlane, said: “Riverlane’s mission is to make quantum computing useful sooner, starting an era of human progress as significant as the industrial and digital revolutions. The DARPA Quantum Benchmarking program aligns with this goal, helping the quantum community measure progress and maintain momentum as we unlock quantum error correction and enable fault tolerance.”

Fault tolerance is increasingly seen as a requirement for reaching useful quantum advantage. To achieve this, the errors that quantum bits (qubits) are prone to must be corrected. Simply put, quantum error correction is the enabling technology for fault tolerance.

Hardware companies, academic groups and national labs have demonstrated significant progress with small quantum error-corrected systems, but there remain many challenges for controlling fault-tolerant devices at scale.

In the DARPA Quantum Benchmarking project, Riverlane is working with top-tier universities such as the University of Southern California and the University of Sydney to identify important benchmarks for practical problems, especially in the fields of plasma physics, fluid dynamics, condensed matter, and high-energy physics. The team is building tools to estimate the quantum and classical resources needed to implement quantum algorithms to solve the benchmark problems at scale.

Hari Krovi, Principal Quantum Scientist at Riverlane, explained: “Fault tolerance will result in significant overheads, both in terms of qubit count and calculation time and it is important to take this into consideration when comparing to classical techniques. It has been known for some time that mild speed-ups such as a quadratic speed-up can disappear when the fault tolerance overhead is considered. There are many different approaches to fault tolerance to consider and each one leads to overheads that can vary by many orders of magnitude.”
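
As a rough illustration of why mild speed-ups can vanish (the numbers below are assumptions, not Riverlane's estimates): compare a classical cost of N fast steps against a quantum cost of sqrt(N) error-corrected logical steps, each paying a fault-tolerance overhead in wall-clock time.

```python
# Rough illustration with assumed timings, not Riverlane's resource estimates.
# Classical cost ~ N steps; quantum cost with a quadratic speedup ~ sqrt(N)
# logical steps, each slowed by the fault-tolerance overhead.
t_classical_step = 1e-9   # seconds per classical step (assumed)
t_logical_step = 1e-3     # seconds per error-corrected logical step (assumed)

# Crossover: N * t_classical_step = sqrt(N) * t_logical_step
N_star = (t_logical_step / t_classical_step) ** 2
print(f"quadratic speedup pays off only for N > {N_star:.1e}, "
      f"i.e. beyond ~{N_star * t_classical_step:.0f} s of serial classical work")
# Different fault-tolerance schemes change t_logical_step by orders of
# magnitude, which moves this crossover dramatically.
```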

Krovi added: “One area of consideration is the choice of quantum code to help identify and correct errors in the system. There are many different choices that lead to fault tolerance and each of these leads to different overheads. The Surface Code is a popular choice, and the team is focussing on estimates based on this approach.”

The work being done in this program provides a quantitative understanding of practical quantum advantage and can inform whether, and how, quantum computing will be disruptive in various fields.

AWS Reveals Quantum Chip That Suppresses Bit Flip Errors by 100X

29 November 2023 at 11:28

Insider Brief

  • Amazon Web Services (AWS) has introduced a new quantum computer chip focused on enhancing error correction.
  • The company said that the chip, which is fabricated in-house, can suppress bit flip errors by 100x using a passive error correction approach.
  • By combining both passive and active error correction approaches, the chip could theoretically achieve quantum error correction six times more efficiently than standard methods.
  • Image: Peter DeSantis, senior vice president of AWS utility computing products. Credit: AWS

Amazon Web Services (AWS) has introduced a new quantum computer chip focused on enhancing error correction, a pivotal — if not the pivotal — aspect in the evolution of quantum computing. Peter DeSantis, Vice President of Global Infrastructure and Customer Support at AWS, detailed the features and implications of this development in a keynote address in Las Vegas at AWS’s re:Invent conference for the global cloud computing community.

The AWS team has been working on a custom-designed quantum device, a chip totally fabricated in-house, which takes an innovative approach to error correction, according to DeSantis.

“By separating the bit flips from the phase flips, we’ve been able to suppress bit flip errors by 100x using a passive error correction approach. This allows us to focus our active error correction on just those phase flips,” DeSantis stated.

He highlighted that combining both passive and active error correction approaches could theoretically achieve quantum error correction six times more efficiently than standard methods. This development represents an essential step towards creating hardware-efficient and scalable quantum error correction.

In a LinkedIn post, Simone Severini, general manager of quantum technologies at AWS, writes that AWS’s logical qubit is both hardware-efficient and scalable.

He writes that the chip uses a special oscillator-based qubit to suppress bit flip errors, while a much simpler outer error-correcting code corrects the remaining phase flip errors.

Severini added, “It is based on a superconducting quantum circuit technology that “prints” qubits on the surface of a silicon microchip, making it highly scalable in the number of physical qubits. This scalability allows one to exponentially suppress the total logical error rate by adding more physical qubits to the chip. Other approaches based on similar oscillator-based qubits rely on large 3D resonant cavities, that need to be manually pieced together.”
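
For context, here is a minimal sketch of why the outer code becomes much simpler when one error type is passively suppressed: a distance-n repetition code with majority-vote decoding only has to handle the remaining phase flips. The per-qubit error probability is an assumed number, and this generic repetition-code picture is not a description of AWS's actual chip design.

```python
import math

# If bit flips are passively suppressed, the outer code only needs to correct
# phase flips. A distance-n repetition code with majority-vote decoding fails
# when more than n//2 qubits suffer a Z error in a round.
def logical_phase_flip_rate(p_z: float, n: int) -> float:
    t = n // 2
    return sum(math.comb(n, k) * p_z**k * (1 - p_z) ** (n - k)
               for k in range(t + 1, n + 1))

p_z = 1e-2   # assumed per-qubit phase-flip probability per round (illustrative)
for n in (3, 5, 7, 9):
    print(f"repetition code n = {n}: logical phase-flip rate ≈ "
          f"{logical_phase_flip_rate(p_z, n):.2e}")
```

The logical rate drops exponentially as the chain grows, which is the sense in which adding physical qubits can exponentially suppress the logical error rate, consistent with the scalability claim quoted above.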

Error Correction Progress

DeSantis said that the effort on error correction is important because, despite advancements, qubits remain too noisy for practical use in solving complex problems.

“15 years ago, the state of the art was one error every 10 quantum operations. Today, we’ve improved to about one error per 1,000 quantum operations. This 100x improvement in 15 years is significant. However, the quantum algorithms that excite us require billions of operations without an error,” DeSantis added.

DeSantis outlined the challenges in current quantum computing, noting that with a 0.1% error rate, each logical qubit requires thousands of physical qubits. He mentioned that quantum computers are not yet where they need to be to tackle big, complex problems. The potential for improvements through error correction represents the surest bet for more practical quantum computing.

“With a further improvement in physical qubit error rate, we can reduce the overhead of error correction significantly,” he said.

Early Stages

Although DeSantis cautioned that the journey to an error-corrected quantum computer is still in its early stages, he emphasized the importance of this development.

“This step taken is an important part of developing the hardware efficient and scalable quantum error correction that we need to solve interesting problems on a quantum computer,” DeSantis said.

DeSantis hopes this development could accelerate the progress towards practical and reliable quantum computing, potentially revolutionizing industries like pharmaceuticals, materials science, and financial services.

Multiverse Computing Pioneers Quantum Digital Twin Project to Boost Green Hydrogen Production

29 November 2023 at 11:25

Insider Brief

  • Multiverse Computing used a digital twin and quantum optimization to boost the efficiency of green hydrogen production.
  • The advance could lead to improving the economics of hydrogen production and reducing a significant source of greenhouse gas.
  • Multiverse’s partners include IDEA Ingeniería and AMETIC, Spain’s digital industry association.

PRESS RELEASE —  Multiverse Computing, a global leader in value-based quantum computing and machine learning solutions, has used a digital twin and quantum optimization to boost the efficiency of green hydrogen production. This work could change the economics of hydrogen production and reduce a significant source of greenhouse gas.

Multiverse’s partners in this work are IDEA Ingeniería, an engineering firm that specializes in renewable projects and digital twins, and AMETIC, Spain’s digital industry association. IDEA developed the digital twin ecosystem for optimizing the generation of green hydrogen. AMETIC is coordinating the overall project.

The quantum digital twin numerically simulates a green hydrogen production plant, using the plant’s operating parameters as inputs. By using quantum algorithms to optimize the electrolysis process used for green hydrogen generation, the solution achieves a 5% increase in H2 production and associated revenue with the quantum solver compared to the classical solver.

“Electrolysers are currently deployed at a small scale, making hydrogen production costly, so they require significant scale up in an affordable way,” said Enrique Lizaso Olmos, CEO of Multiverse Computing. “This project demonstrates how quantum algorithms can improve the production of green hydrogen to make renewable energy more cost-effective today and in the future.”

Using a classical solver to optimize hydrogen production, the virtual plant delivered 62,579 kg of green H2 and revenue of 154,204 euros. Using quantum-inspired tensor networks with Multiverse’s Singularity, the quantum approach delivered 65,421 kg and revenue of 160,616 euros, an increase of roughly 5% in both hydrogen production and revenue.

“Green hydrogen will play a significant role in the transition towards a more sustainable and ecological energy landscape,” said Emilio Sanchez, Founder and CEO of IDEA Ingeniería. “The consortium’s continued progress in developing quantum solutions alongside other green technologies can help alleviate the effects of global warming.”

Currently, it’s more expensive to produce green hydrogen than traditional grey hydrogen. Grey hydrogen is typically made from fossil fuels such as natural gas or coal in processes that emit carbon dioxide, whereas green hydrogen is produced by using renewable electricity to split water into hydrogen and oxygen.

About 70 million tons of hydrogen are produced every year and used to refine oil and make ammonia-based fertilizer. The grey hydrogen production process generates between 9 and 12 tons of carbon dioxide for every ton of hydrogen produced. Green hydrogen created from renewable sources is a clean-burning fuel that could reduce emissions from heating and industrial processes such as the production of steel, cement, and fertilizer.

Green hydrogen also could enable more efficient energy storage, as compressed hydrogen tanks can store energy for long periods of time and weigh less than lithium-ion batteries. In addition, it could make the transportation industry greener by decarbonizing shipping, aviation, and trucking.

Multiverse’s future plans for the initiative include increasing the number of input parameters to create a more realistic quantum digital twin, working with an energy company to validate the digital model, and continuing to improve the quantum solution.
