What Is Quantum Advantage? The Moment Extremely Powerful Quantum Computers Will Arrive

Quantum advantage is the milestone the field of quantum computing is fervently working toward, when a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws applies. Quantum computers take advantage of these strange behaviors to solve problems.

There are some types of problems that are impractical for classical computers to solve, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.

I am a physicist who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.

The Source of Quantum Computing’s Power

Central to quantum computing is the quantum bit, or qubit. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a quantum superposition. With every additional qubit, the number of states that can be represented by the qubits doubles.
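
To make the doubling concrete, here is a minimal sketch, not from the article, that stores an n-qubit state the way a simulator would, as a list of 2^n complex amplitudes, and shows the list doubling in length with each added qubit:

```python
# Illustrative only: an n-qubit state is a vector of 2**n complex amplitudes,
# so each extra qubit doubles the vector's length.
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # one qubit in an equal superposition of 0 and 1

state = plus
for n in range(1, 6):
    print(f"{n} qubit(s): {len(state)} amplitudes")
    state = np.kron(state, plus)  # adding one more qubit doubles the number of amplitudes
```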

This exponential growth in representable states is often mistaken for the source of quantum computing’s power. Instead, that power comes down to an intricate interplay of superposition, interference, and entanglement.

Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves—like sound waves or ocean waves—combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.
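
The same picture can be sketched with ordinary classical waves. The toy example below is illustrative only (quantum algorithms orchestrate this with complex amplitudes rather than sound or water waves); it adds two identical waves in phase and then out of phase:

```python
# Constructive vs. destructive interference with two simple sine waves.
import numpy as np

t = np.linspace(0, 1, 1000)
wave = np.sin(2 * np.pi * 5 * t)                         # a 5 Hz wave

in_phase = wave + wave                                   # peaks line up with peaks
out_of_phase = wave + np.sin(2 * np.pi * 5 * t + np.pi)  # peaks line up with troughs

print(round(np.max(np.abs(in_phase)), 2))      # ~2.0: constructive, the signal is amplified
print(round(np.max(np.abs(out_of_phase)), 2))  # ~0.0: destructive, the signal cancels
```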

Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.

Applications of Quantum Computing

Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the potential to decipher current encryption algorithms, such as the widely used RSA scheme.

One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of post-quantum cryptography. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.

In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman envisioned this possibility more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties.

Another use of quantum information technology is quantum sensing: detecting and measuring physical properties like electromagnetic energy, gravity, pressure, and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as environmental monitoring, geological exploration, medical imaging, and surveillance.

Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks—including those using quantum computers.

Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage—in particular in machine learning—remains a critical area of ongoing research.

A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves. Image Credit: Guoqing Wang, CC BY-NC-ND

Staying Coherent and Overcoming Errors

The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits.

Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, an area my own research is focused on.

In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.

Quantum Advantage Coming Into View

Quantum computing may one day be as disruptive as the arrival of generative AI. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. Researchers at Google and later a team of researchers in China demonstrated quantum advantage for generating a list of random numbers with certain properties. My research team demonstrated a quantum speed-up for a random number guessing game.

On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.

While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A Revolution in Computer Graphics Is Bringing 3D Reality Capture to the Masses

As a weapon of war, destroying cultural heritage sites is a common tactic used by armed invaders to deprive a community of its distinct identity. It was no surprise, then, that as Russian troops swept into Ukraine in February 2022, historians and cultural heritage specialists braced for the coming destruction. So far in the Russia-Ukraine War, UNESCO has confirmed damage to hundreds of religious and historical buildings and dozens of public monuments, libraries, and museums.

While new technologies like low-cost drones, 3D printing, and private satellite internet may be creating a distinctly 21st century battlefield unfamiliar to conventional armies, another set of technologies is creating new possibilities for citizen archivists off the frontlines to preserve Ukrainian heritage sites.

Backup Ukraine, a collaborative project between the Danish UNESCO National Commission and Polycam, a 3D creation tool, enables anyone equipped with only a phone to scan and capture high-quality, detailed, and photorealistic 3D models of heritage sites, something only possible with expensive and burdensome equipment just a few years ago.

Backup Ukraine is a notable expression of the stunning speed with which 3D capture and graphics technologies are progressing, according to Bilawal Sidhu, a technologist, angel investor, and former Google product manager who worked on 3D maps and AR/VR.

“Reality capture technologies are on a staggering exponential curve of democratization,” he explained to me in an interview for Singularity Hub.

According to Sidhu, generating 3D assets had been possible, but only with expensive tools like DSLR cameras, lidar scanners, and pricey software licenses. As an example, he cited the work of CyArk, a non-profit founded two decades ago with the aim of using professional grade 3D capture technology to preserve cultural heritage around the world.

“What is insane, and what has changed, is today I can do all of that with the iPhone in your pocket,” he says.

In our discussion, Sidhu laid out three distinct yet interrelated technology trends that are driving this progress. First is a drop in cost of the kinds of cameras and sensors which can capture an object or space. Second is a cascade of new techniques which make use of artificial intelligence to construct finished 3D assets. And third is the proliferation of computing power, largely driven by GPUs, capable of rendering graphics-intensive objects on devices widely available to consumers.

Lidar scanners are an example of the price-performance improvement in sensors. First popularized as the bulky spinning sensors on top of autonomous vehicles, and priced in the tens of thousands of dollars, lidar made its consumer-tech debut on the iPhone 12 Pro and Pro Max in 2020. The ability to scan a space in the same way driverless cars see the world meant that suddenly anyone could quickly and cheaply generate detailed 3D assets. This, however, was still only available to the wealthiest Apple customers.

Day 254: hiking in Pinnacles National Park and scanning my daughter as we crossed a small dry creek.

Captured with the iPhone 12 Pro + @Scenario3d. I can’t wait to see these 3D memories 10 years from now.

On @Sketchfab: https://t.co/mvxtOMhzS5 #1scanaday #3Dscanning #XR pic.twitter.com/9DX1Ltnmh8

— Emm (@emmanuel_2m) September 14, 2021

One of the industry’s most consequential turning points occurred that same year when researchers at Google introduced neural radiance fields, commonly referred to as NeRFs.

This approach uses machine learning to construct a credible 3D model of an object or space from 2D pictures or video. The neural network “hallucinates” how a full 3D scene would appear, according to Sidhu. It’s a solution to “view synthesis,” a computer graphics challenge: letting someone see a space from any point of view using only a few source images.

“So that thing came out and everyone realized we’ve now got state-of-the-art view synthesis that works brilliantly for all the stuff photogrammetry has had a hard time with like transparency, translucency, and reflectivity. This is kind of crazy,” he adds.
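
To make view synthesis slightly more concrete, here is a heavily simplified, hypothetical sketch of the core recipe rather than any published implementation: treat the scene as a function from a 3D point to a color and a density (in a real NeRF that function is a trained neural network; below it is a hand-written stand-in), then render a pixel by compositing samples of that function along the camera ray.

```python
# A toy volume renderer in the spirit of NeRF (illustrative stand-in, not the real thing).
import numpy as np

def scene(points):
    """Stand-in for the trained network: a fuzzy reddish shell of radius 1 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = 20.0 * np.exp(-((dist - 1.0) ** 2) / 0.02)               # how much "stuff" is at each point
    color = np.broadcast_to(np.array([0.8, 0.3, 0.2]), points.shape)   # constant color everywhere
    return color, density

def render_ray(origin, direction, n_samples=128, far=4.0):
    """Composite color and density samples along one camera ray into a pixel color."""
    ts = np.linspace(0.0, far, n_samples)
    points = origin + ts[:, None] * direction
    color, density = scene(points)
    delta = far / n_samples
    alpha = 1.0 - np.exp(-density * delta)                                    # opacity of each segment
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))     # light not yet absorbed
    weights = transmittance * alpha
    return (weights[:, None] * color).sum(axis=0)

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)   # the rendered RGB value for this one ray / pixel
```

Training runs this in reverse: the network’s weights are adjusted until rays rendered this way reproduce the original photos.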

The computer vision community channeled their excitement into commercial applications. At Google, Sidhu and his team explored using the technology for Immersive View, a 3D version of Google Maps. For the average user, the spread of consumer-friendly applications like Luma AI and others meant that anyone with just a smartphone camera could make photorealistic 3D assets. The creation of high-quality 3D content was no longer limited to Apple’s lidar-elite.

Now, another potentially even more promising method of solving view synthesis is earning attention rivaling that early NeRF excitement. Gaussian splatting is a rendering technique that fills the role triangles play in traditional 3D assets, but instead of a triangle, each primitive is a “splat” of color expressed through a mathematical function known as a gaussian. As more gaussians are layered together, a highly detailed and textured 3D asset becomes visible. The speed of adoption for splatting is stunning to watch.

It’s only been a few months but demos are flooding X, and both Luma AI and Polycam are offering tools to generate gaussian splats. Other developers are already working on ways of integrating them into traditional game engines like Unity and Unreal. Splats are also gaining attention from the traditional computer graphics industry since their rendering speed is faster than NeRFs, and they can be edited in ways already familiar to 3D artists. (NeRFs don’t allow this given they’re generated by an indecipherable neural net.)

For a great explanation of how gaussian splatting works and why it’s generating buzz, see this video from Sidhu.
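
As a rough illustration of the accumulation idea only, and not the optimization pipeline from the research, the sketch below layers a few hypothetical isotropic 2D gaussian “splats” onto a pixel grid. Real Gaussian splatting fits millions of anisotropic 3D gaussians to photographs and projects them onto the screen, but the way color builds up is similar in spirit:

```python
# Layering gaussian "splats" onto a small canvas (toy example, made-up splats).
import numpy as np

H = W = 64
image = np.zeros((H, W, 3))
ys, xs = np.mgrid[0:H, 0:W]

# hypothetical splats: (center_x, center_y, radius, RGB color, opacity)
splats = [
    (20, 20, 6,  (1.0, 0.2, 0.2), 0.8),
    (40, 30, 10, (0.2, 0.6, 1.0), 0.6),
    (32, 48, 8,  (0.3, 1.0, 0.3), 0.7),
]

for cx, cy, r, color, alpha in splats:
    # gaussian falloff from the splat's center, scaled by its opacity
    weight = alpha * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * r ** 2))
    image += weight[..., None] * np.array(color)   # layer this splat's color onto the canvas

image = np.clip(image, 0, 1)   # a 64x64 RGB image built entirely from overlapping gaussians
print(image.shape)
```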

Regardless of the details, for consumers, we are decidedly in a moment where a phone can generate Hollywood-caliber 3D assets that not long ago only well-equipped production teams could produce.

But why does 3D creation even matter at all?

To appreciate the shift toward 3D content, it’s worth noting the technology landscape is orienting toward a future of “spatial computing.” While overused terms like the metaverse might draw eye rolls, the underlying spirit is a recognition that 3D environments, like those used in video games, virtual worlds, and digital twins have a big role to play in our future. 3D assets like the ones produced by NeRFs and splatting are poised to become the content we’ll engage with in the future.

Within this context, a large-scale ambition is the hope for a real-time 3D map of the world. While tools for generating static 3D maps have been available, the challenge remains finding ways of keeping those maps current with an ever-changing world.

“There’s the building of the model of the world, and then there’s maintaining that model of the world. With these methods we’re talking about, I think we might finally have the tech to solve the ‘maintaining the model’ problem through crowdsourcing,” says Sidhu.

Projects like Google’s Immersive View are good early examples of the consumer implications of this. While he wouldn’t speculate when it might eventually be possible, Sidhu agreed that at some point, the technology will exist which would allow a user in VR to walk around anywhere on Earth with a real-time, immersive experience of what is happening there. This type of technology will also spill into efforts in avatar-based “teleportation,” remote meetings, and other social gatherings.

Another reason to be excited, says Sidhu, is 3D memory capture. Apple, for example, is leaning heavily into 3D photo and video for their Vision Pro mixed reality headset. As an example, Sidhu told me he recently created a high-quality replica of his parents’ house before they moved out. He could then give them the experience of walking inside of it using virtual reality.

“Having that visceral feeling of being back there is so powerful. This is why I’m so bullish on Apple, because if they nail this 3D media format, that’s where things can get exciting for regular people.”

i’m convinced the killer use case for 3d reconstruction tech is memory capture

my parents retired earlier this year and i have immortalized their home forever more

photo scanning is legit the most future proof medium we have access to today

scan all the spaces/places/things pic.twitter.com/kmqX5FYaN6

— Bilawal Sidhu (@bilawalsidhu) November 3, 2023

From cave art to oil paintings, the impulse to preserve aspects of our sensory experience is deeply human. Just as photography once muscled in on still lifes as a means of preservation, 3D creation tools seem poised to displace our long-standing affair with 2D images and video.

Yet just as photography can only ever hope to capture a fraction of a moment in time, 3D models can’t fully replace our relationship to the physical world. Still, for those experiencing the horrors of war in Ukraine, perhaps these are welcome developments offering a more immersive way to preserve what can never truly be replaced.

Image Credit: Polycam

Atom Computing Says Its New Quantum Computer Has Over 1,000 Qubits

The scale of quantum computers is growing quickly. In 2022, IBM took the top spot with its 433-qubit Osprey chip. Yesterday, Atom Computing announced they’ve one-upped IBM with a 1,180-qubit neutral atom quantum computer.

The new machine runs on a tiny grid of atoms held in place and manipulated by lasers in a vacuum chamber. The company’s first 100-qubit prototype was a 10-by-10 grid of strontium atoms. The new system is a 35-by-35 grid of ytterbium atoms (shown above). (The machine has space for 1,225 atoms, but Atom has so far run tests with 1,180.)

Quantum computing researchers are working on a range of qubits—the quantum equivalent of bits represented by transistors in traditional computing—including tiny superconducting loops of wire (Google and IBM), trapped ions (IonQ), and photons, among others. But Atom Computing and other companies, like QuEra, believe neutral atoms—that is, atoms with no electric charge—have greater potential to scale.

This is because neutral atoms can maintain their quantum state longer, and they’re naturally abundant and identical. Superconducting qubits are more susceptible to noise and manufacturing flaws. Neutral atoms can also be packed more tightly into the same space as they have no charge that might interfere with neighbors and can be controlled wirelessly. And neutral atoms allow for a room-temperature set-up, as opposed to the near-absolute zero temperatures required by other quantum computers.

The company may be onto something. They’ve now increased the number of qubits in their machine by an order of magnitude in just two years, and believe they can go further. In a video explaining the technology, Atom CEO Rob Hays says they see “a path to scale to millions of qubits in less than a cubic centimeter.”

“We think that the amount of challenge we had to face to go from 100 to 1,000 is probably significantly higher than the amount of challenges we’re gonna face when going to whatever we want to go to next—10,000, 100,000,” Atom cofounder and CTO Ben Bloom told Ars Technica.

But scale isn’t everything.

Quantum computers are extremely finicky. Qubits can be knocked out of quantum states by stray magnetic fields or gas particles. The more this happens, the less reliable the calculations. Whereas scaling got a lot of attention a few years ago, the focus has shifted to error-correction in service of scale. Indeed, Atom Computing’s new computer is bigger, but not necessarily more powerful. The whole thing can’t yet be used to run a single calculation, for example, due to the accumulation of errors as the qubit count rises.

There has been recent movement on this front, however. Earlier this year, the company demonstrated the ability to check for errors mid-calculation and potentially fix those errors without disturbing the calculation itself. They also need to keep errors to a minimum overall by increasing the fidelity of their qubits. Recent papers, each showing encouraging progress in low-error approaches to neutral atom quantum computing, give fresh life to the endeavor. Reducing errors may be, in part, an engineering problem that can be solved with better equipment and design.

“The thing that has held back neutral atoms, until those papers have been published, have just been all the classical stuff we use to control the neutral atoms,” Bloom said. “And what that has essentially shown is that if you can work on the classical stuff—work with engineering firms, work with laser manufacturers (which is something we’re doing)—you can actually push down all that noise. And now all of a sudden, you’re left with this incredibly, incredibly pure quantum system.”

In addition to error-correction in neutral atom quantum computers, IBM announced this year they’ve developed error correction codes for quantum computing that could reduce the number of qubits needed by an order of magnitude.

Still, even with error-correction, large-scale, fault-tolerant quantum computers will need hundreds of thousands or millions of physical qubits. And other challenges—such as how long it takes to move and entangle increasingly large numbers of atoms—exist too. Better understanding and working to solve these challenges is why Atom Computing is chasing scale at the same time as error-correction.

In the meantime, the new machine can be used on smaller problems. Bloom said if a customer is interested in running a 50-qubit algorithm—the company is aiming to offer the computer to partners next year—they’d run it multiple times using the whole computer to arrive at a reliable answer more quickly.

In a field of giants like Google and IBM, it’s impressive a startup has scaled their machines so quickly. But Atom Computing’s 1,000-qubit mark isn’t likely to stand alone for long. IBM is planning to complete its 1,121-qubit Condor chip later this year. The company is also pursuing a modular approach—not unlike the multi-chip processors common in laptops and phones—where scale is achieved by linking many smaller chips.

We’re still in the nascent stages of quantum computing. The machines are useful for research and experimentation but not practical problems. Multiple approaches making progress in scale and error correction—two of the field’s grand challenges—is encouraging. If that momentum continues in the coming years, one of these machines may finally solve the first useful problem that no traditional computer ever could.

Image Credit: Atom Computing

This Brain-Like IBM Chip Could Drastically Cut the Cost of AI

The brain is an exceptionally powerful computing machine. Scientists have long tried to recreate its inner workings in mechanical minds.

A team from IBM may have cracked the code with NorthPole, a fully digital chip that mimics the brain’s structure and efficiency. When pitted against state-of-the-art graphics processing units (GPUs)—the chips most commonly used to run AI programs—IBM’s brain-like chip triumphed in several standard tests, while using up to 96 percent less energy.

IBM is no stranger to brain-inspired chips. With TrueNorth and its successors, the company has spent a decade tapping into the brain’s architecture to better run AI algorithms.

From project to project, the goal has been the same: build faster, more energy-efficient chips that allow smaller devices—like our phones or the computers in self-driving cars—to run AI on the “edge.” Edge computing can monitor and respond to problems in real time without needing to send requests to remote server farms in the cloud. Like switching from dial-up modems to fiber-optic internet, these chips could also speed up large AI models with minimal energy costs.

The problem? The brain is analog. Traditional computer chips, in contrast, use digital processing—0s and 1s. If you’ve ever tried to convert an old VHS tape into a digital file, you’ll know it’s not a straightforward process. So far, most chips that mimic the brain use analog computing. Unfortunately, these systems are noisy and errors can easily slip through.

With NorthPole, IBM went completely digital. Tightly packing 22 billion transistors onto 256 cores, the chip takes its cues from the brain by placing computing and memory modules next to each other. Faced with a task, each core takes on a part of a problem. However, like nerve fibers in the brain, long-range connections link modules, so they can exchange information too.

This sharing is an “innovation,” said Drs. Subramanian Iyer and Vwani Roychowdhury at the University of California, Los Angeles (UCLA), who were not involved in the study.

The chip is especially relevant in light of increasingly costly, power-hungry AI models. Because NorthPole is fully digital, it also dovetails with existing manufacturing processes—the packaging of transistors and wired connections—potentially making it easier to produce at scale.

The chip represents “neural inference at the frontier of energy, space and time,” the authors wrote in their paper, published in Science.

Mind Versus Machine

From DALL-E to ChatGPT, generative AI has taken the world by storm with its shockingly human-like text-based responses and images.

But to study author Dr. Dharmendra S. Modha, generative AI is on an unsustainable path. The software is trained on billions of examples—often scraped from the web—to generate responses. Both creating the algorithms and running them requires massive amounts of computing power, resulting in high costs, processing delays, and a large carbon footprint.

These popular AI models are loosely inspired by the brain’s inner workings. But they don’t mesh well with our current computers. The brain processes and stores memories in the same location. Computers, in contrast, divide memory and processing into separate blocks. This setup shuttles data back and forth for each computation, and traffic can stack up, causing bottlenecks, delays, and wasted energy.

It’s a “data movement crisis,” wrote the team. We need “dramatically more computationally-efficient methods.”

One idea is to build analog computing chips similar to how the brain functions. Rather than processing data using a system of discrete 0s and 1s—like on-or-off light switches—these chips function more like light dimmers. Because each computing “node” can capture multiple states, this type of computing is faster and more energy efficient.

Unfortunately, analog chips also suffer from errors and noise. Similar to adjusting a switch with a light dimmer, even a slight mistake can alter the output. Although flexible and energy efficient, the chips are difficult to work with when processing large AI models.

A Match Made in Heaven

What if we combined the flexibility of neurons with the reliability of digital processors?

That’s the driving concept for NorthPole. The result is a stamp-sized chip that can beat the best GPUs in several standard tests.

The team’s first step was to distribute data processing across multiple cores, while keeping memory and computing modules inside each core physically close.

Previous analog chips, like IBM’s TrueNorth, used a special material to combine computation and memory in one location. Instead of going analog with non-standard materials, the NorthPole chip places standard memory and processing components next to each other.

The rest of NorthPole’s design borrows from the brain’s larger organization.

The chip has a distributed array of cores like the cortex, the outermost layer of the brain responsible for sensing, reasoning, and decision-making. Each part of the cortex processes different types of information, but it also shares computations and broadcasts results throughout the region.

Inspired by these communication channels, the team built two networks on the chip to democratize memory. Like neurons in the cortex, each core can access computations within itself, but also has access to a global memory. This setup removes hierarchy in data processing, allowing all cores to tackle a problem simultaneously while also sharing their results—thereby eliminating a common bottleneck in computation.

The team also developed software that cleverly delegates a problem in both space and time to each core—making sure no computing resources go to waste or collide with each other.

The software “exploits the full capabilities of the [chip’s] architecture,” they explained in the paper, while helping integrate “existing applications and workflows” into the chip.

Compared to TrueNorth, IBM’s previous brain-inspired analog chip, NorthPole can support AI models that are 640 times larger, involving 3,000 times more computations. All that with just four times the number of transistors.

A Digital Brain Processor

The team next pitted NorthPole against several GPU chips in a series of performance tests.

NorthPole was 25 times more efficient when challenged with the same problem. The chip also processed data at lightning-fast speeds compared to GPUs on two difficult AI benchmark tests.

Based on initial tests, NorthPole is already usable for real-time facial recognition or deciphering language. In theory, its fast response time could also guide self-driving cars in split-second decisions.

Computer chips are at a crossroads. Some experts believe that Moore’s law—which posits that the number of transistors on a chip doubles every two years—is at death’s door. Although still in their infancy, alternative computing structures, such as brain-like hardware and quantum computing, are gaining steam.

But NorthPole shows semiconductor technology still has much to give. Currently, there are 37 million transistors per square millimeter on the chip. But based on projections, the setup could easily expand to two billion, allowing larger algorithms to run on a single chip.

“Architecture trumps Moore’s law,” wrote the team.

They believe innovation in chip design, like NorthPole, could provide near-term solutions in the development of increasingly powerful but resource-hungry AI.

Image Credit: IBM

Quantum Computers in 2023: Where They Are Now and What’s Next

In June, an IBM computing executive claimed quantum computers were entering the “utility” phase, in which high-tech experimental devices become useful. In September, Australia’s chief scientist Cathy Foley went so far as to declare “the dawn of the quantum era.”

This week, Australian physicist Michelle Simmons won the nation’s top science award for her work on developing silicon-based quantum computers.

Obviously, quantum computers are having a moment. But—to step back a little—what exactly are they?

What Is a Quantum Computer?

One way to think about computers is in terms of the kinds of numbers they work with.

The digital computers we use every day rely on whole numbers (or integers), representing information as strings of zeroes and ones which they rearrange according to complicated rules. There are also analog computers, which represent information as continuously varying numbers (or real numbers), manipulated via electrical circuits or spinning rotors or moving fluids.

In the 16th century, the Italian mathematician Girolamo Cardano invented another kind of number called complex numbers to solve seemingly impossible tasks such as finding the square root of a negative number. In the 20th century, with the advent of quantum physics, it turned out complex numbers also naturally describe the fine details of light and matter.

In the 1990s, physics and computer science collided when it was discovered that some problems could be solved much faster with algorithms that work directly with complex numbers as encoded in quantum physics.

The next logical step was to build devices that work with light and matter to do those calculations for us automatically. This was the birth of quantum computing.

Why Does Quantum Computing Matter?

We usually think of the things our computers do in terms that mean something to us—balance my spreadsheet, transmit my live video, find my ride to the airport. However, all of these are ultimately computational problems, phrased in mathematical language.

As quantum computing is still a nascent field, most of the problems we know quantum computers will solve are phrased in abstract mathematics. Some of these will have “real world” applications we can’t yet foresee, but others will find a more immediate impact.

One early application will be cryptography. Quantum computers will be able to crack today’s internet encryption algorithms, so we will need quantum-resistant cryptographic technology. Provably secure cryptography and a fully quantum internet would use quantum computing technology.

Google has claimed its Sycamore quantum processor can outperform classical computers at certain tasks. Image Credit: Google

In materials science, quantum computers will be able to simulate molecular structures at the atomic scale, making it faster and easier to discover new and interesting materials. This may have significant applications in batteries, pharmaceuticals, fertilizers, and other chemistry-based domains.

Quantum computers will also speed up many difficult optimization problems, where we want to find the “best” way to do something. This will allow us to tackle larger-scale problems in areas such as logistics, finance, and weather forecasting.

Machine learning is another area where quantum computers may accelerate progress. This could happen indirectly, by speeding up subroutines in digital computers, or directly if quantum computers can be reimagined as learning machines.

What Is the Current Landscape?

In 2023, quantum computing is moving out of the basement laboratories of university physics departments and into industrial research and development facilities. The move is backed by the checkbooks of multinational corporations and venture capitalists.

Contemporary quantum computing prototypes—built by IBM, Google, IonQ, Rigetti, and others—are still some way from perfection.

Today’s machines are of modest size and susceptible to errors, in what has been called the “noisy intermediate-scale quantum” phase of development. The delicate nature of tiny quantum systems means they are prone to many sources of error, and correcting these errors is a major technical hurdle.

The holy grail is a large-scale quantum computer that can correct its own errors. A whole ecosystem of research groups and commercial enterprises is pursuing this goal via diverse technological approaches.

Superconductors, Ions, Silicon, Photons

The current leading approach uses loops of electric current inside superconducting circuits to store and manipulate information. This is the technology adopted by Google, IBM, Rigetti, and others.

Another method, the “trapped ion” technology, works with groups of electrically charged atomic particles, using the inherent stability of the particles to reduce errors. This approach has been spearheaded by IonQ and Honeywell.

An artist’s impression of a semiconductor-based quantum computer. Image Credit: Silicon Quantum Computing

A third route of exploration is to confine electrons within tiny particles of semiconductor material, which could then be melded into the well-established silicon technology of classical computing. Silicon Quantum Computing is pursuing this angle.

Yet another direction is to use individual particles of light (photons), which can be manipulated with high fidelity. A company called PsiQuantum is designing intricate “guided light” circuits to perform quantum computations.

There is no clear winner yet from among these technologies, and it may well be a hybrid approach that ultimately prevails.

Where Will the Quantum Future Take Us?

Attempting to forecast the future of quantum computing today is akin to predicting flying cars and ending up with cameras in our phones instead. Nevertheless, there are a few milestones that many researchers would agree are likely to be reached in the next decade.

Better error correction is a big one. We expect to see a transition from the era of noisy devices to small devices that can sustain computation through active error correction.

Another is the advent of post-quantum cryptography. This means the establishment and adoption of cryptographic standards that can’t easily be broken by quantum computers.

Commercial spin-offs of technology such as quantum sensing are also on the horizon.

The demonstration of a genuine “quantum advantage” will also be a likely development. This means a compelling application where a quantum device is unarguably superior to the digital alternative.

And a stretch goal for the coming decade is the creation of a large-scale quantum computer free of errors (with active error correction).

When this has been achieved, we can be confident the 21st century will be the “quantum era.”

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: A complex cooling rig is needed to maintain the ultracold working temperatures required by a superconducting quantum computer / IBM

Quantum Computers Could Crack Encryption Sooner Than Expected With New Algorithm

By: Edd Gent
2 October 2023 at 14:00

One of the most well-established and disruptive uses for a future quantum computer is the ability to crack encryption. A new algorithm could significantly lower the barrier to achieving this.

Despite all the hype around quantum computing, there are still significant question marks around what quantum computers will actually be useful for. There are hopes they could accelerate everything from optimization processes to machine learning, but how much easier and faster they’ll be remains unclear in many cases.

One thing is pretty certain though: A sufficiently powerful quantum computer could render our leading cryptographic schemes worthless. While the mathematical puzzles underpinning them are virtually unsolvable by classical computers, they would be entirely tractable for a large enough quantum computer. That’s a problem because these schemes secure most of our information online.

The saving grace has been that today’s quantum processors are a long way from the kind of scale required. But according to a report in Science, New York University computer scientist Oded Regev has discovered a new algorithm that could reduce the number of qubits required substantially.

The approach essentially reworks one of the most successful quantum algorithms to date. In 1994, Peter Shor at MIT devised a way to work out which prime numbers need to be multiplied together to give a particular number—a problem known as prime factoring.

For large numbers, this is an incredibly difficult problem that quickly becomes intractable on conventional computers, which is why it was used as the basis for the popular RSA encryption scheme. But by taking advantage of quantum phenomena like superposition and entanglement, Shor’s algorithm can solve these problems even for incredibly large numbers.

That fact has led to no small amount of panic among security experts, not least because hackers and spies can hoover up encrypted data today and then simply wait for the development of sufficiently powerful quantum computers to crack it. And although post-quantum encryption standards have been developed, implementing them across the web could take many years.

It is likely to be quite a long wait though. Most implementations of RSA rely on at least 2048-bit keys, which is equivalent to a number 617 digits long. Fujitsu researchers recently calculated that it would take a completely fault-tolerant quantum computer with 10,000 qubits 104 days to crack a number that large.
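
(The digit count follows directly from the key size: a 2048-bit number has ⌊2048 × log10(2)⌋ + 1 = 616 + 1 = 617 decimal digits.)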

However, Regev’s new algorithm, described in a pre-print published on arXiv, could potentially reduce those requirements substantially. Regev has essentially reworked Shor’s algorithm such that it’s possible to find a number’s prime factors using far fewer logical steps. Carrying out operations in a quantum computer involves creating small circuits from a few qubits, known as gates, that perform simple logical operations.

In Shor’s original algorithm, the number of gates required to factor a number is proportional to the square of the number of bits used to represent it, denoted n^2. Regev’s approach would only require n^1.5 gates because it searches for prime factors by carrying out smaller multiplications of many numbers rather than very large multiplications of a single number. It also reduces the number of gates required by using a classical algorithm to further process the outputs.
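
As a back-of-the-envelope comparison only, ignoring constant factors, the extra logarithmic terms in real circuit constructions, and the classical post-processing step, here is what those two scalings look like for a 2048-bit key:

```python
n = 2048                     # bits in the RSA modulus

shor_gates = n ** 2          # the ~n^2 scaling attributed to Shor's algorithm above
regev_gates = n ** 1.5       # the ~n^1.5 scaling of Regev's approach

print(f"{shor_gates:,} vs {regev_gates:,.0f} gates "
      f"(roughly {shor_gates / regev_gates:.0f}x fewer)")
# The exponents alone give about a 45-fold reduction at n = 2048; the larger
# two-to-three order-of-magnitude estimate quoted below also reflects savings
# this crude comparison leaves out.
```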

In the paper, Regev estimates that for a 2048-bit number this could reduce the number of gates required by two to three orders of magnitude. If true, that could enable much smaller quantum computers to crack RSA encryption.

However, there are practical limitations. For a start, Regev notes that Shor’s algorithm benefits from a host of optimizations developed over the years that reduce the number of qubits required to run it. It’s unclear yet whether these optimizations would work on the new approach.

Martin Ekerå, a quantum computing researcher with the Swedish government, also told Science that Regev’s algorithm appears to need quantum memory to store intermediate values. Providing that memory will require extra qubits and eat into any computational advantage it has.

Nonetheless, the new research is a timely reminder that, when it comes to quantum computing’s threat to encryption, the goal posts are constantly moving, and shifting to post-quantum schemes can’t happen fast enough.

Image Credit: Google

The Digital Future May Rely on Optical Switches a Million Times Faster Than Today’s Transistors

If you’ve ever wished you had a faster phone, computer, or internet connection, you’ve encountered the personal experience of hitting a limit of technology. But there might be help on the way.

Over the past several decades, scientists and engineers like me have worked to develop faster transistors, the electronic components underlying modern electronic and digital communications technologies. These efforts have been based on a category of materials called semiconductors that have special electrical properties. Silicon is perhaps the best known example of this type of material.

But about a decade ago, scientific efforts hit the speed limit of semiconductor-based transistors. Researchers simply can’t make electrons move faster through these materials. One way engineers are trying to address the speed limits inherent in moving a current through silicon is to design shorter physical circuits—essentially giving electrons less distance to travel. Increasing the computing power of a chip comes down to increasing the number of transistors. However, even if researchers are able to get transistors to be very small, they won’t be fast enough for the faster processing and data transfer speeds people and businesses will need.

My research group’s work aims to develop faster ways to move data, using ultrafast laser pulses in free space and optical fiber. The laser light travels through optical fiber with almost no loss and with a very low level of noise.

In our most recent study, published in February 2023 in Science Advances, we took a step toward that goal, demonstrating that laser-based systems equipped with optical transistors, which rely on photons rather than voltage to move electrons, can transfer information much more quickly than current systems, and do so more effectively than previously reported optical switches.

Ultrafast Optical Transistors

At their most fundamental level, digital transmissions involve a signal switching on and off to represent ones and zeros. Electronic transistors use voltage to send this signal: When the voltage induces the electrons to flow through the system, they signal a 1; when there are no electrons flowing, that signals a 0. This requires a source to emit the electrons and a receiver to detect them.

Our system of ultrafast optical data transmission is based on light rather than voltage. Our research group is one of many working with optical communication at the transistor level—the building blocks of modern processors—to get around the current limitations with silicon.

Our system controls reflected light to transmit information. When light shines on a piece of glass, most of it passes through, though a little bit might reflect. That is what you experience as glare when driving toward sunlight or looking through a window.

We use two laser beams transmitted from two sources passing through the same piece of glass. One beam is constant, but its transmission through the glass is controlled by the second beam. By using the second beam to shift the properties of the glass from transparent to reflective, we can start and stop the transmission of the constant beam, switching the optical signal from on to off and back again very quickly.

With this method, we can switch the glass properties much more quickly than current systems can send electrons. So we can send many more on and off signals—zeros and ones—in less time.

How Fast Are We Talking?

Our study took a first step toward transmitting data a million times faster than is possible with typical electronics. With electrons, the shortest time in which a data signal can be switched is about a nanosecond, one-billionth of a second, which is very fast. But the optical switch we constructed was able to switch a million times faster, in just a few hundred attoseconds.
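
(For scale: a million times faster than a nanosecond is 10^-9 s / 10^6 = 10^-15 s, one femtosecond, or 1,000 attoseconds, so a switch operating in a few hundred attoseconds clears that millionfold mark.)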

We were also able to transmit those signals securely so that an attacker who tried to intercept or modify the messages would fail or be detected.

Using a laser beam to carry a signal, and adjusting its signal intensity with glass controlled by another laser beam, means the information can travel not only more quickly but also much greater distances.

For instance, the James Webb Space Telescope recently transmitted stunning images from far out in space. These pictures were transferred as data from the telescope to the base station on Earth at a rate of one “on” or “off” every 35 nanoseconds using optical communications.

A laser system like the one we’re developing could speed up the transfer rate a billion-fold, allowing faster and clearer exploration of deep space, more quickly revealing the universe’s secrets. And someday computers themselves might run on light.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The author’s lab’s ultrafast optical switch in action. Mohammed Hassan, University of Arizona, CC BY-ND

Microsoft Wants to Build a Quantum Supercomputer Within a Decade

Since the start of the quantum race, Microsoft has placed its bets on the elusive but potentially game-changing topological qubit. Now the company claims its Hail Mary has paid off, saying it could build a working processor in less than a decade.

Today’s leading quantum computing companies have predominantly focused on qubits—the quantum equivalent of bits—made out of superconducting electronics, trapped ions, or photons. These devices have achieved impressive milestones in recent years, but are hampered by errors that mean a quantum computer able to outperform classical ones still appears some way off.

Microsoft, on the other hand, has long championed topological quantum computing. Rather than encoding information in the states of individual particles, this approach encodes information in the overarching structure of the system. In theory, that should make the devices considerably more tolerant of background noise from the environment and therefore more or less error-proof.

To do so, though, you need to generate an exotic form of quasiparticle called a Majorana zero mode (MZM), which has proven incredibly difficult. A new paper from Microsoft researchers now claims to have achieved the feat, and has laid out a road map for using them to build a computer capable of performing one million quantum operations per second.

“It has been an arduous development path in the near term because it required that we make a physics breakthrough that has eluded researchers for decades,” Chetan Nayak from Microsoft wrote in a blog post. “Overcoming many challenges, we’re thrilled to share that a peer-reviewed paper…establishes that Microsoft has achieved the first milestone towards creating a reliable and practical quantum supercomputer.”

Microsoft’s approach to quantum computing builds on an obscure branch of mathematics known as topology. It is used to describe fundamental properties of an object’s shape that don’t change when it is deformed, twisted, or stretched.

The classic example is the fact that topologically, a doughnut and a coffee mug are the same shape because each has a solitary hole—in the center of the doughnut and in the handle of the mug. Taking a bite of the doughnut or chipping the mug doesn’t change the overarching shape, but cutting the doughnut in half or snapping off the handle would.

The insight for quantum computing is that if you could store information in the topological state of a system, it would be similarly resistant to minor perturbations. That could make it largely immune to the kind of environmental noise that frequently interferes with the fragile quantum states of today’s leading qubit technologies.

Early research identified MZMs—unusual conglomerations of electrons that behave as a single particle—as a promising candidate to create “topological qubits.” In theory, the paths of multiple MZMs can be interwoven to create a topological structure capable of carrying out quantum computation. Each so-called “braid” between a pair of MZMs effectively acts as a logic gate.

But creating these topological qubits has proven incredibly tough. Not only is it tricky to build hardware capable of producing MZMs, it’s very difficult to distinguish them from a variety of similar quantum states that are of no use for building quantum computers. Microsoft actually announced they had detected MZMs in nanowires connected to a superconductor in 2018, but the results had to be retracted in 2021 after other groups couldn’t replicate them.

Now though, the company claims they have proven that they can generate MZMs in similar devices. Microsoft released preliminary results last year, but now the research has been published in the peer-reviewed journal Physical Review B. While the previous retracted study relied on detecting a sudden peak in the wire’s electrical conductance, this time around they used a more rigorous protocol that looked for signatures of MZMs at both ends of the wire.

Nayak told New Scientist that the probability of any device that passed this new test not actually exhibiting an MZM was less than eight percent. Other researchers were less convinced, with several telling New Scientist that the new test still had flaws and that some details in the data suggest the results could be the consequence of other quantum effects.

Nonetheless, Microsoft says the result ticks off the first step in its six-point roadmap to creating a topological quantum supercomputer. Now that the company has generated MZMs, the next step is to use them to create topological qubits before stringing many of them together.

While that might seem like a long road, Krysta Svore from Microsoft told Tech Crunch they envision being able to build a full-scale quantum computer capable of one million quantum operations per second within a decade.

But Microsoft isn’t the only one making progress on this front. MZMs fall into a class of quasiparticles called non-Abelian anyons, and they aren’t the only ones that can be used to create a topological quantum computer. In May, both Google and Quantinuum claimed to have shown that their hardware can also generate these anyons.

It remains unclear whether this represents the beginning of a major shift in the quantum computing landscape towards topological approaches. But it is growing evidence that Microsoft’s early gamble on the technology could be about to pay off.

Image Credit: Microsoft Azure

An IBM Quantum Computer Beat a Supercomputer in a Benchmark Test

Quantum computers may soon tackle problems that stump today’s powerful supercomputers—even when riddled with errors.

Computation and accuracy go hand in hand. But a new collaboration between IBM and UC Berkeley showed that perfection isn’t necessarily required for solving challenging problems, from understanding the behavior of magnetic materials to modeling how neural networks behave or how information spreads across social networks.

The teams pitted IBM’s 127-qubit Eagle chip against supercomputers at Lawrence Berkeley National Lab and Purdue University for increasingly complex tasks. With easier calculations, Eagle matched the supercomputers’ results every time—suggesting that even with noise, the quantum computer could generate accurate responses. But where it shone was in its ability to tolerate scale, returning results that are—in theory—far more accurate than what’s possible today with state-of-the-art silicon computer chips.

At the heart is a post-processing technique that decreases noise. Similar to looking at a large painting, the method ignores each brush stroke. Rather, it focuses on small portions of the painting and captures the general “gist” of the artwork.

The study, published in Nature, isn’t chasing quantum advantage, the theory that quantum computers can solve problems faster than conventional computers. Rather, it shows that today’s quantum computers, even when imperfect, may become part of scientific research—and perhaps our lives—sooner than expected. In other words, we’ve now entered the realm of quantum utility.

“The crux of the work is that we can now use all 127 of Eagle’s qubits to run a pretty sizable and deep circuit—and the numbers come out correct,” said Dr. Kristan Temme, principal research staff member and manager for the Theory of Quantum Algorithms group at IBM Quantum.

The Error Terror

The Achilles heel of quantum computers is their errors.

Similar to classic silicon-based computer chips—those running in your phone or laptop—quantum computers use packets of data called bits as the basic method of calculation. What’s different is that in classical computers, bits represent 1 or 0. But thanks to quantum quirks, the quantum equivalent of bits, qubits, exist in a state of flux, with a chance of landing in either position.

This weirdness, along with other attributes, makes it possible for quantum computers to simultaneously compute multiple complex calculations—essentially, everything, everywhere, all at once (wink)—making them, in theory, far more efficient than today’s silicon chips.

Proving the idea is harder.

“The race to show that these processors can outperform their classical counterparts is a difficult one,” said Drs. Göran Wendin and Jonas Bylander at the Chalmers University of Technology in Sweden, who were not involved in the study.

The main trip-up? Errors.

Qubits are finicky things, as are the ways in which they interact with each other. Even minor changes in their state or environment can throw a calculation off track. “Developing the full potential of quantum computers requires devices that can correct their own errors,” said Wendin and Bylander.

The fairy tale ending is a fault-tolerant quantum computer. Here, it’ll have thousands of high-quality qubits similar to “perfect” ones used today in simulated models, all controlled by a self-correcting system.

That fantasy may be decades off. But in the meantime, scientists have settled on an interim solution: error mitigation. The idea is simple: if we can’t eliminate noise, why not accept it? Errors are measured and tolerated, while post-processing software compensates for the quantum hiccups.

It’s a tough problem. One previous method, dubbed “noisy intermediate-scale quantum computation,” can track errors as they build up and correct them before they corrupt the computational task at hand. But the idea only worked for quantum computers running a few qubits—a solution that doesn’t work for solving useful problems, because they’ll likely require thousands of qubits.

IBM Quantum had another idea. Back in 2017, they published a guiding theory: if we can understand the source of noise in the quantum computing system, then we can eliminate its effects.

The overall idea is a bit unorthodox. Rather than limiting noise, the team deliberately enhanced noise in a quantum computer using a similar technique that controls qubits. This makes it possible to measure results from multiple experiments injected with varying levels of noise, and develop ways to counteract its negative effects.

Back to Zero

In this study, the team built a model of how noise behaves in the system. With this “noise atlas,” they could better manipulate, amplify, and eliminate the unwanted signals in a predictable way.

Using a post-processing technique called zero-noise extrapolation (ZNE), they extrapolated the measured “noise atlas” back to a system without noise—like digitally erasing the background hum from a recording.
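The core of ZNE is a curve fit. Below is a minimal sketch of the idea in Python, assuming the same circuit can be re-run at deliberately amplified noise levels; the measurement values are made up for illustration, and IBM’s actual workflow is considerably more sophisticated.

```python
# A minimal sketch of zero-noise extrapolation (ZNE). The numbers here are
# stand-ins for real device data, not results from the paper.
import numpy as np

# Noise amplification factors (1.0 = the hardware's native noise level)
# and the expectation values measured at each level.
noise_factors = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
measured_values = np.array([0.72, 0.61, 0.52, 0.45, 0.38])

# Fit a simple model (here a quadratic) to how the observable decays with
# noise, then evaluate the fit at zero noise to estimate the ideal result.
coeffs = np.polyfit(noise_factors, measured_values, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"Estimated noiseless expectation value: {zero_noise_estimate:.3f}")
```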

As a proof of concept, the team turned to a classic mathematical model used to capture complex systems in physics, neuroscience, and social dynamics. Called the 2D Ising model, it was originally developed nearly a century ago to study magnetic materials.

Magnetic spins are a bit like qubits. Imagine a compass needle: it has a propensity to point north, but it can be nudged into other orientations by its surroundings—and those nudges determine its ultimate state.

The Ising model mimics a lattice of compasses, in which each one’s spin influences its neighbor’s. Each spin has two states: up or down. Although originally used to describe magnetic properties, the Ising model is now widely used for simulating the behavior of complex systems, such as biological neural networks and social dynamics. It also helps with cleaning up noise in image analysis and bolsters computer vision.
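To make the “lattice of compasses” concrete, here is a tiny classical Monte Carlo sketch of the 2D Ising model. The IBM experiment simulated the model’s quantum dynamics on hardware; this toy version only illustrates how each spin is pushed around by its neighbors.

```python
# Minimal 2D Ising model via Metropolis Monte Carlo (classical toy version).
import numpy as np

rng = np.random.default_rng(0)
L = 32                                    # lattice is L x L spins
spins = rng.choice([-1, 1], size=(L, L))  # each "compass" points up or down
J, T = 1.0, 2.3                           # coupling strength and temperature

def sweep(spins):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbors (periodic boundaries).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * J * spins[i, j] * nb     # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):
    sweep(spins)

print("Average magnetization:", spins.mean())
```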

The model is perfect for challenging quantum computers because of its scale. As the number of “compasses” increases, the system’s complexity rises exponentially and quickly outgrows the capability of today’s supercomputers. This makes it a perfect test for pitting quantum and classical computers mano a mano.

An initial test focused on a small group of spins well within the supercomputers’ capabilities. The results were on the mark for both, providing a benchmark of the Eagle quantum processor’s performance with the error mitigation software. That is, even with errors, the quantum processor provided accurate results similar to those from state-of-the-art supercomputers.

For the next tests, the team stepped up the complexity of the calculations, eventually employing all of Eagle’s 127 qubits and over 60 different steps. At first, the supercomputers, armed with tricks to calculate exact answers, kept up with the quantum computer, pumping out surprisingly similar results.

“The level of agreement between the quantum and classical computations on such large problems was pretty surprising to me personally,” said study author Dr. Andrew Eddins at IBM Quantum.

As the complexity increased, however, classical approximation methods began to falter. The breaking point came when the team dialed the problem up to 68 qubits. From there, Eagle was able to scale up to its entire 127 qubits, generating answers beyond the capability of the supercomputers.

It’s impossible to certify that the results are completely accurate. But because Eagle’s answers matched those of the supercomputers right up to the point where the latter could no longer keep up, the earlier trials suggest the new answers are likely correct.

What’s Next?

The study is still a proof of concept.

Although it shows that the post-processing software, ZNE, can mitigate errors in a 127-qubit system, it’s still unclear if the solution can scale up. With IBM’s 1,121-qubit Condor chip set to release this year—and “utility-scale processors” with up to 4,158 qubits in the pipeline—the error-mitigating strategy may need further testing.

Overall, the method’s strength is its scale, not its speed—the quantum computer was only about two to three times faster than the classical machines. The approach is also deliberately pragmatic: it minimizes errors rather than correcting them outright, an interim strategy for putting these strange but powerful machines to work.

These techniques “will drive the development of device technology, control systems, and software by providing applications that could offer useful quantum advantage beyond quantum-computing research—and pave the way for truly fault-tolerant quantum computing,” said Wendin and Bylander. Although still in their early days, they “herald further opportunities for quantum processors to emulate physical systems that are far beyond the reach of conventional computers.”

Image Credit: IBM

Scientists Just Showed How to Make a Quantum Computer Using Sound Waves

A weird and wonderful array of technologies are competing to become the standard-bearer for quantum computing. The latest contender wants to encode quantum information in sound waves.

One thing all quantum computers have in common is the fact that they manipulate information encoded in quantum states. But that’s where the similarities end, because those quantum states can be induced in everything from superconducting circuits to trapped ions, ultra-cooled atoms, photons, and even silicon chips.

While some of these approaches have attracted more investment than others, we’re still a long way from the industry settling on a common platform. And in the world of academic research, experimentation still abounds.

Now, a team from the University of Chicago has taken crucial first steps towards building a quantum computer that can encode information in phonons, the fundamental quantum units that make up sound waves in much the same way that photons make up light beams.

The basic principles of how you could create a “phononic” quantum computer are fairly similar to those used in “photonic” quantum computers. Both involve generating and detecting individual particles, or quasiparticles, and manipulating them using beamsplitters and phase shifters. Phonons are quasiparticles because, although they act like particles as far as quantum mechanics is concerned, they actually arise from the collective behavior of large numbers of atoms.

The group from Chicago had already demonstrated that they could generate individual phonons using surface acoustic waves, which travel along the surface of a material at frequencies roughly a million times higher than a human can hear, and use them to transfer quantum information between two superconducting qubits.

But in a new paper in Science, the researchers demonstrate the first phononic beamsplitter, which, as the name suggests, is designed to split acoustic waves. This component is a critical ingredient for a phononic quantum computer as it makes it possible to take advantage of quantum phenomena like superposition, entanglement, and interference.

Their setup involves two superconducting qubits fabricated on flat pieces of sapphire, joined together by a channel made of lithium niobate. Each qubit is connected via a tunable coupler to a device called a transducer, which converts electrical signals into mechanical ones.

This is used to generate vibrations that create the individual phonons in the channel connecting the qubits, which features a beamsplitter made of 16 parallel metal fingers in the middle. The entire setup is chilled to just above absolute zero.

To demonstrate the capabilities of their system, the researchers first excited one of the qubits to get it to generate a single phonon. This traveled along the channel to the beamsplitter, but because quantum particles like phonons are fundamentally indivisible, instead of splitting it went into a quantum superposition.

This refers to the ability of a quantum system to be in multiple states simultaneously until it is measured and collapses to one of the possibilities. In this case, the phonon was simultaneously reflected back toward the original qubit and transmitted to the second one, and the two qubits were able to capture the phonon and store the quantum superposition.

In a second experiment, the researchers replicated a quantum phenomenon called the Hong-Ou-Mandel effect, which is fundamental to the way logic gates are built in photonic quantum computers. In optical setups, two identical photons are fed into a beamsplitter from opposite directions at the same time. Both enter a superposition, but the outputs interfere with each other such that the two photons always end up traveling together to just one of the detectors.
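To make that interference concrete, here is a minimal numerical sketch of the optical version of the effect, computed from the beamsplitter’s unitary matrix. The phonon experiment reproduces the same statistics; the numbers below are just the textbook calculation, not data from the paper.

```python
# Hong-Ou-Mandel effect for a 50:50 beamsplitter. For indistinguishable
# bosons, the amplitude that one particle exits each port is the permanent
# of the beamsplitter unitary; for distinguishable particles the path
# probabilities simply add, with no interference.
import numpy as np

U = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)   # 50:50 beamsplitter (real convention)

# Indistinguishable particles: permanent = U[0,0]*U[1,1] + U[0,1]*U[1,0]
amp_indistinguishable = U[0, 0] * U[1, 1] + U[0, 1] * U[1, 0]
p_quantum = abs(amp_indistinguishable) ** 2

# Distinguishable particles: the two paths' probabilities add.
p_classical = abs(U[0, 0] * U[1, 1]) ** 2 + abs(U[0, 1] * U[1, 0]) ** 2

print(f"Coincidence probability, indistinguishable: {p_quantum:.2f}")    # 0.00
print(f"Coincidence probability, distinguishable:   {p_classical:.2f}")  # 0.50
```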

The researchers showed that they could replicate this effect using phonons, and crucially, that they could use the qubits to alter the characteristics of the phonons so that they could control which direction the output travels in. That’s a crucial first step towards building a practical quantum computer, says Andrew Cleland, who led the study.

“The success of the two-phonon interference experiment is the final piece showing that phonons are equivalent to photons,” Cleland said in a press release. “The outcome confirms we have the technology we need to build a linear mechanical quantum computer.”

The researchers concede that the approach is unlikely to directly compete with optical approaches to quantum computing, because the components are much larger and slower. However, their ability to seamlessly interface with superconducting qubits could make them promising for hybrid computing schemes that combine the best of both worlds.

It’s likely to be a long time until the underlying components reach the sophistication and industry-readiness of other quantum approaches. But it seems like the race for quantum advantage has just gotten a little more crowded.

Image Credit: BroneArtUlm / Pixabay

Scientists Merge Biology and Technology by 3D Printing Electronics Inside Living Worms

Finding ways to integrate electronics into living tissue could be crucial for everything from brain implants to new medical technologies. A new approach has shown that it’s possible to 3D print circuits into living worms.

There has been growing interest in finding ways to more closely integrate technology with the human body, in particular when it comes to interfacing electronics with the nervous system. This will be crucial for future brain-machine interfaces and could also be used to treat a host of neurological conditions.

But for the most part, it’s proven difficult to make these kinds of connections in ways that are non-invasive, long-lasting, and effective. The rigid nature of standard electronics means they don’t mix well with the squishy world of biology, and getting them inside the body in the first place can require risky surgical procedures.

A new approach relies instead on laser-based 3D printing to grow flexible, conductive wires inside the body. In a recent paper in Advanced Materials Technologies, researchers showed they could use the approach to produce star- and square-shaped structures inside the bodies of microscopic worms.

“Hypothetically, it will be possible to print quite deep inside the tissue,” John Hardy at Lancaster University, who led the study, told New Scientist. “So, in principle, with a human or other larger organism, you could print around 10 centimeters in.”

The researchers’ approach involves a high-resolution Nanoscribe 3D printer, which fires out an infrared laser that can cure a variety of light-sensitive materials with very high precision. They also created a bespoke ink that includes the conducting polymer polypyrrole, which previous research had shown could be used to electrically stimulate cells in living animals.

To prove the scheme could achieve the primary goal of interfacing with living cells, the researchers first printed circuits into a polymer scaffold and then placed the scaffold on top of a slice of mouse brain tissue being kept alive in a petri dish. They then passed a current through the flexible electronic circuit and showed that it produced the expected response in the mouse brain cells.

The team then set out to show the approach could print conductive circuits inside a living creature—something that hadn’t yet been achieved. They chose the roundworm C. elegans because of its sensitivity to heat, injury, and drying out, which they said would make for a stringent test of how safe the approach is.

First, the team had to adjust their ink to make sure it wasn’t toxic to the animals. They then had to get it inside the worms by mixing it with the bacterial paste they’re fed on.

Once the animals had ingested the ink, they were placed under the Nanoscribe printer, which was used to create square and star shapes a few micrometers across on the worms’ skin and within their guts. The shapes printed in the gut didn’t come out properly, though, the researchers admit, because the organ was constantly moving.

The shapes printed inside the worms’ bodies had no functionality. But Ivan Minev from the University of Sheffield told New Scientist the approach could one day make it possible to build electronics intertwined with living tissue, though it would still take considerable work before it was applicable in humans.

The authors also admit that adapting the approach for biomedical applications would require significant further research. But in the long run, they believe their work could enable tailor-made brain-machine interfaces for medical purposes, future neuromodulation implants, and virtual reality systems. It could also make it possible to easily repair bioelectronic implants within the body.

All that’s likely still a long way from being realized, but the approach shows the potential of combining 3D printing with flexible, biocompatible electronics to help interface the worlds of biology and technology.

Image Credit: Kbradnam/Wikimedia Commons

This Chipmaking Step Is Crucial to the Future of Computing—and Just Got 40x Faster Thanks to Nvidia

If computer chips make the modern world go around, then Nvidia and TSMC are flywheels keeping it spinning. It’s worth paying attention when the former says they’ve made a chipmaking breakthrough, and the latter confirms they’re about to put it into practice.

At Nvidia’s GTC developer conference this week, CEO Jensen Huang said Nvidia has developed software to make a chipmaking step, called inverse lithography, over 40 times faster. A process that usually takes weeks can now be completed overnight, and instead of requiring some 40,000 CPU servers and 35 megawatts of power, it should only need 500 Nvidia DGX H100 GPU-based systems and 5 megawatts.

“With cuLitho, TSMC can reduce prototype cycle time, increase throughput and reduce the carbon footprint of their manufacturing, and prepare for 2nm and beyond,” he said.

Nvidia partnered with some of the biggest names in the industry on the work. TSMC, the largest chip foundry in the world, plans to qualify the approach in production this summer. Meanwhile, chip designer Synopsys and equipment maker ASML said in a press release they will integrate cuLitho into their chip design and lithography software.

What Is Inverse Lithography?

To fabricate a modern computer chip, makers shine ultraviolet light through intricate “stencils” to etch billions of patterns—like wires and transistors—onto smooth silicon wafers at near-atomic resolutions. This step, called photolithography, is how every new chip design, from Nvidia to Apple to Intel, is manifested physically in silicon.

The machines that make it happen, built by ASML, cost hundreds of millions of dollars and can produce near-flawless works of nanoscale art on chips. The end product, an example of which is humming away near your fingertips as you read this, is probably the most complex commodity in history. (TSMC churns out a quintillion transistors every six months—for Apple alone.)

To make more powerful chips, with ever-more, ever-smaller transistors, engineers have had to get creative.

Remember that stencil mentioned above? It’s the weirdest stencil you’ve ever seen. Today’s transistors are smaller than the wavelength of light used to etch them. Chipmakers have to use some extremely clever tricks to design stencils—or technically, photomasks—that can bend light into interference patterns whose features are smaller than the light’s wavelength and perfectly match the chip’s design.

Whereas photomasks once had a more one-to-one shape—a rectangle projected a rectangle—they’ve necessarily become more and more complicated over the years. The most advanced masks these days are more like mandalas than simple polygons.

“Stencils” or photomasks have become more and more complicated as the patterns they etch have shrunk into the atomic realm. Image Credit: Nvidia

To design these advanced photomask patterns, engineers reverse the process.

They start with the design they want, then stuff it through a wicked mess of equations describing the physics involved to design a suitable pattern. This step is called inverse lithography, and as the gap between light wavelength and feature size has increased, it’s become increasingly crucial to the whole process. But as the complexity of photomasks increases, so too does the computing power, time, and cost required to design them.

“Computational lithography is the largest computation workload in chip design and manufacturing, consuming tens of billions of CPU hours annually,” Huang said. “Massive data centers run 24/7 to create reticles used in lithography systems.”

In the broader category of computational lithography—the methods used to design photomasks—inverse lithography is one of the newer, more advanced approaches. Its advantages include greater depth of field and resolution, benefits that should extend across the entire chip. But because of its heavy computational lift, it’s currently used only sparingly.

A Library in Parallel

Nvidia aims to reduce that lift by making the computation more amenable to graphics processing units, or GPUs. These powerful chips excel at tasks made up of many simple computations that can run in parallel, like video games and machine learning. So it isn’t just about running existing processes on GPUs—which yields only a modest improvement—but about restructuring those processes specifically for GPUs.

That’s what the new software, cuLitho, is designed to do. The product, developed over the last four years, is a library of algorithms for the basic operations used in inverse lithography. By breaking inverse lithography down into these smaller, more repetitive computations, the whole process can now be split and parallelized on GPUs. And that, according to Nvidia, significantly speeds everything up.

A new library of inverse lithography algorithms can speed up the process by breaking it down into smaller tasks and running them in parallel on GPUs. Image Credit: Nvidia
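Nvidia hasn’t published cuLitho’s internals, but the tiling-and-parallelizing idea is easy to caricature. The sketch below is a hypothetical illustration, not cuLitho’s API: it splits a large mask into independent tiles and pushes a stand-in optical model (a Gaussian blur via FFT) through the GPU with CuPy, falling back to NumPy if no GPU is available.

```python
# Hypothetical sketch of GPU tiling for a lithography-style workload.
# NOT Nvidia's cuLitho--just an illustration of why the job parallelizes well.
import numpy as np

try:
    import cupy as xp          # GPU arrays if CuPy is available
except ImportError:
    xp = np                    # fall back to CPU so the sketch still runs

def optical_model(tile, sigma=2.0):
    """Stand-in for the physics: blur the tile with a Gaussian kernel via FFT."""
    h, w = tile.shape
    y = xp.arange(h) - h // 2
    x = xp.arange(w) - w // 2
    yy, xx = xp.meshgrid(y, x, indexing="ij")
    kernel = xp.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = xp.fft.ifft2(xp.fft.fft2(tile) * xp.fft.fft2(xp.fft.ifftshift(kernel)))
    return xp.real(blurred)

def process_mask(mask, tile=512):
    """Cut the mask into independent tiles; each is a small, repetitive job."""
    out = xp.empty_like(mask)
    for i in range(0, mask.shape[0], tile):
        for j in range(0, mask.shape[1], tile):
            out[i:i+tile, j:j+tile] = optical_model(mask[i:i+tile, j:j+tile])
    return out

mask = xp.asarray(np.random.rand(2048, 2048))   # pretend photomask pattern
result = process_mask(mask)
```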

“If [inverse lithography] was sped up 40x, would many more people and companies use full-chip ILT on many more layers? I am sure of it,” said Vivek Singh, VP of Nvidia’s Advanced Technology Group, in a talk at GTC.

With a speedier, less computationally hungry process, makers can more rapidly iterate on experimental designs, tweak existing designs, make more photomasks per day, and generally, expand the use of inverse lithography to more of the chip, he said.

This last detail is critical. Wider use of inverse lithography should reduce print errors by sharpening the projected image—meaning chipmakers can churn out more working chips per silicon wafer—and be precise enough to make features at 2 nanometers and beyond.

It turns out making better chips isn’t all about the hardware. Software improvements, like cuLitho or the increased use of machine learning in design, can have a big impact too.

Image Credit: Nvidia

Biocomputing With Mini-Brains as Processors Could Be More Powerful Than Silicon-Based AI

The human brain is a master of computation. It’s no wonder that from brain-inspired algorithms to neuromorphic chips, scientists are borrowing the brain’s playbook to give machines a boost.

Yet the results—in both software and hardware—only capture a fraction of the computational intricacies embedded in neurons. But perhaps the major roadblock in building brain-like computers is that we still don’t fully understand how the brain works. For example, how does its architecture—defined by pre-established layers, regions, and ever-changing neural circuits—make sense of our chaotic world with high efficiency and low energy usage?

So why not sidestep this conundrum and use neural tissue directly as a biocomputer?

This month, a team from Johns Hopkins University laid out a daring blueprint for a new field of computing: organoid intelligence (OI). Don’t worry—they’re not talking about using living human brain tissue hooked up to wires in jars. Rather, as in the name, the focus is on a surrogate: brain organoids, better known as “mini-brains.” These pea-sized nuggets roughly resemble the early fetal human brain in their gene expression, wide variety of brain cells, and organization. Their neural circuits spark with spontaneous activity, ripple with brain waves, and can even detect light and control muscle movement.

In essence, brain organoids are highly-developed processors that duplicate the brain to a limited degree. Theoretically, different types of mini-brains could be hooked up to digital sensors and output devices—not unlike brain-machine interfaces, but as a circuit outside the body. In the long term, they may connect to each other in a super biocomputer trained using biofeedback and machine learning methods to enable “intelligence in a dish.”

Sound a bit creepy? I agree. Scientists have long debated where to draw the line; that is, when the mini-brain becomes too similar to a human one, with the hypothetical nightmare scenario of the nuggets developing consciousness.

The team is well aware. As part of organoid intelligence, they highlight the need for “embedded ethics,” with a consortium of scientists, bioethicists, and the public weighing in throughout development. But to senior author Dr. Thomas Hartung, the time for launching organoid intelligence research is now.

“Biological computing (or biocomputing) could be faster, more efficient, and more powerful than silicon-based computing and AI, and only require a fraction of the energy,” the team wrote.

A Brainy Solution

Using brain tissue as computational hardware may seem bizarre, but there’ve been previous pioneers. In 2022, the Australian company Cortical Labs taught hundreds of thousands of isolated neurons in a dish to play Pong inside a virtual environment. The neurons connected with silicon chips powered by deep learning algorithms into a “synthetic biological intelligence platform” that captured basic neurobiological signs of learning.

Here, the team took the idea a step further. If isolated neurons could already support a rudimentary form of biocomputing, what about 3D mini-brains?

Since their debut a decade ago, mini-brains have become darlings for examining neurodevelopmental disorders such as autism and testing new drug treatments. Often grown from a patient’s skin cells—transformed into induced pluripotent stem cells (iPSCs)—the organoids are especially powerful for mimicking a person’s genetic makeup, including their neural wiring. More recently, human organoids partially restored damaged vision in rats after integrating with their host neurons.

In other words, mini-brains are already building blocks for a plug-and-play biocomputing system that readily connects with biological brains. So why not leverage them as processors for a computer? “The question is: can we learn from and harness the computing capacity of these organoids?” the team asked.

A Hefty Blueprint

Last year, a group of biocomputing experts united in the first organoid intelligence workshop in an effort to form a community tackling the use and implications of mini-brains as biocomputers. The overarching theme, consolidated into “the Baltimore declaration,” was collaboration. A mini-brain system needs several components: devices to detect input, the processor, and a readable output.

In the new paper, Hartung envisions four trajectories to accelerate organoid intelligence.

The first focuses on the critical component: the mini-brain. Although densely packed with brain cells that support learning and memory, organoids are still difficult to culture on a large scale. An early key aim, explained the authors, is scaling up.

Microfluidic systems, which act as “nurseries,” also need to improve. These high-tech bubble baths provide nutrients and oxygen to keep burgeoning mini-brains alive and healthy while removing toxic waste, giving them time to mature. The same system can also pump neurotransmitters—molecules that bridge communication between neurons—into specific regions to modify their growth and behavior.

Scientists can then monitor growth trajectories using a variety of electrodes. Although most are currently tailored for 2D systems, the team and others are leveling up with 3D interfaces specifically designed for organoids, inspired by EEG (electroencephalogram) caps with multiple electrodes placed in a spherical shape.

Then comes the decoding of signals. The second trajectory is all about deciphering the whens and wheres of neural activity inside the mini-brains. When zapped with certain electrical patterns—for example, those that encourage the neurons to play Pong—do they output the expected results?

It’s another hard task; learning changes neural circuits on multiple levels. So what to measure? The team suggests probing several of them, from altered gene expression in individual neurons to how the cells wire together into neural networks.

Here is where AI and collaboration can make a splash. Biological neural networks are noisy, so multiple trials are needed before “learning” becomes apparent—in turn generating a deluge of data. To the team, machine learning is the perfect tool to extract how different inputs, processed by the mini-brain, transform into outputs. Similar to large-scale neuroscience projects such as the BRAIN Initiative, scientists can share their organoid intelligence research in a community workspace for global collaborations.
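As a simplified illustration of that decoding step, the sketch below trains a standard classifier to map simulated multi-electrode “responses” back to the stimulus that produced them. The data and model are stand-ins (synthetic numbers and an off-the-shelf scikit-learn classifier), not anything from the paper.

```python
# Toy decoder: learn which stimulus pattern produced a recorded response.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_trials, n_electrodes = 400, 64

# Pretend each trial delivered one of two stimulation patterns (labels 0/1)
# and produced a noisy 64-electrode response whose mean shifts with the stimulus.
labels = rng.integers(0, 2, size=n_trials)
responses = rng.normal(0, 1, size=(n_trials, n_electrodes)) + 0.4 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    responses, labels, test_size=0.25, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Decoding accuracy:", decoder.score(X_test, y_test))
```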

Trajectory three is further in the future. With efficient and long-lasting mini-brains and measuring tools in hand, it’s possible to test more complex inputs and see how the stimulation feeds back into the biological processor. For example, does it make its computation more efficient? Different types of organoids—say, those that resemble the cortex and the retina—can be interconnected to build more complex forms of organoid intelligence. These could help “empirically test, explore, and further develop neurocomputational theories of intelligence,” the authors wrote.

Intelligence on Demand?

The fourth trajectory is the one that underlines the entire project: the ethics of using mini-brains for biocomputing.

As brain organoids increasingly resemble the brain—so much so that they can integrate and partially restore a rodent’s injured visual system—scientists are asking if they may gain a sort of awareness.

To be clear, there is no evidence that mini-brains are conscious. But “these concerns will mount during the development of organoid intelligence, as the organoids become structurally more complex, receive inputs, generate outputs, and—at least theoretically—process information about their environment and build a primitive memory,” the authors said. However, the goal of organoid intelligence isn’t to recreate human consciousness—rather, it’s to mimic the brain’s computational functions.

The mini-brain processor is hardly the only ethical concern. Another is cell donation. Because mini-brains retain their donor’s genetic makeup, there’s a chance of selection bias and limitation on neurodiversity.

Then there’s the problem of informed consent. As history with the famous cancer cell line HeLa cells has shown, cell donation can have multi-generational impacts. “What does the organoid exhibit about the cell donor?” the authors asked. Will researchers have an obligation to inform the donor if they discover neurological disorders during their research?

To navigate the “truly uncharted territory,” the team proposes an embedded ethics approach. At each step, bioethicists will collaborate with research teams to map out potential issues iteratively while gathering public opinions. The strategy is similar to other controversial topics, such as genetic editing in humans.

A mini-brain-powered computer is years away. “It will take decades before we achieve the goal of something comparable to any type of computer,” said Hartung. But it’s time to start—launching the program, consolidating multiple technologies across fields, and engaging in ethical discussions.

“Ultimately, we aim toward a revolution in biological computing that could overcome many of the limitations of silicon-based computing and AI and have significant implications worldwide,” the team said.

Image Credit: Jesse Plotkin/Johns Hopkins University
