Scientists Solve Longstanding Quantum Error Problem, Unlocking New Path
[Figure] An illustration of how quantum error correction problems can be mapped onto an Ising spin-glass model. (a) The original syndrome measurement circuit. (b) The detector error graph with circuit-level noise. (c) The corresponding Ising spin-glass model. Credit: Hanyan Cao and Dongyang Feng.

In a quiet revolution that may reshape the foundation of quantum computing, scientists have solved a key problem in quantum error correction that was once thought to be fundamentally unsolvable.
The breakthrough centers on decoding, the process of identifying and correcting errors in fragile quantum systems, and could accelerate the path toward practical, fault-tolerant quantum computers. The team, led by researchers from the Singapore University of Technology and Design, the Chinese Academy of Sciences, and the Beijing Academy of Quantum Information Sciences (BAQIS), has developed a new algorithm known as PLANAR. This is not a modest improvement: PLANAR achieved a 25% reduction in logical error rates when applied to Google Quantum AI's experimental data, rewriting what experts thought they knew about hardware limits. What makes this achievement so significant is that it challenges a long-standing assumption in the field: that a portion of errors, the so-called "error floor," was intrinsic to the hardware. Instead, PLANAR reveals that a quarter of those errors were algorithmic rather than physical, caused by limitations in the decoding methods rather than by the quantum devices themselves. This insight not only breathes new hope into the quest for scalable quantum computing; it redefines what was thought possible.
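The paper's exact construction is not reproduced here, but the figure above suggests the general shape of the spin-glass view of decoding: each possible error mechanism becomes a binary variable, consistency with the measured syndrome becomes a constraint, and the most likely error pattern is the "ground state" of an effective energy. The toy sketch below brute-forces that ground state for an invented detector error model; all detector sets and probabilities are hypothetical illustration values, not the paper's.

```python
import itertools
import math

# Toy detector error model: each error mechanism flips a set of
# detectors and occurs with probability p. (Hypothetical values.)
error_mechanisms = [
    {"detectors": {0, 1}, "p": 0.01},
    {"detectors": {1, 2}, "p": 0.02},
    {"detectors": {0},    "p": 0.005},
    {"detectors": {2},    "p": 0.005},
]

observed_syndrome = {0, 1}  # detectors that fired in this shot

def weight(p: float) -> float:
    # Log-likelihood weight: rarer errors cost more "energy".
    return math.log((1 - p) / p)

best, best_energy = None, float("inf")
for assignment in itertools.product([0, 1], repeat=len(error_mechanisms)):
    # XOR (symmetric difference) of detectors flipped by chosen errors
    fired = set()
    for bit, mech in zip(assignment, error_mechanisms):
        if bit:
            fired ^= mech["detectors"]
    if fired != observed_syndrome:
        continue  # inconsistent with the measured syndrome
    energy = sum(bit * weight(m["p"])
                 for bit, m in zip(assignment, error_mechanisms))
    if energy < best_energy:
        best, best_energy = assignment, energy

print("most likely error pattern:", best)  # -> (1, 0, 0, 0)
```

Brute force is only viable for toy instances; the point of spin-glass formulations is that they open the door to the mature optimization machinery developed for such models.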
As of January 1, 2026, the landscape of quantum computing has been fundamentally reshaped by a singular breakthrough in artificial intelligence: the AlphaQubit decoder. Developed by Google DeepMind in collaboration with the Google Quantum AI team at Alphabet Inc. (NASDAQ:GOOGL), AlphaQubit has effectively bridged the gap between theoretical quantum potential and practical, fault-tolerant reality. By utilizing a sophisticated neural network to identify and correct the subatomic "noise" that plagues quantum processors, AlphaQubit has solved the "decoding problem"—a hurdle that many experts believed would take another decade to clear. The immediate significance of this development cannot be overstated. Throughout 2025, AlphaQubit moved from a research paper in Nature to a core component of Google’s latest quantum hardware, the 105-qubit "Willow" processor.
For the first time, researchers have demonstrated that a quantum system can become more stable as it scales, rather than more fragile. This achievement marks the end of the "Noisy Intermediate-Scale Quantum" (NISQ) era and the beginning of the age of reliable, error-corrected quantum computation. At its core, AlphaQubit is a specialized recurrent transformer, a cousin of the architectures that power modern large language models, re-engineered for the hyper-fast, probabilistic world of quantum mechanics. Unlike traditional decoders such as Minimum-Weight Perfect Matching (MWPM), which rely on rigid, human-coded algorithms to guess where errors occur, AlphaQubit learns the "noise fingerprint" of the hardware itself. It processes a continuous stream of "syndromes" (error signals) and, crucially, utilizes "soft readouts": while previous decoders discarded analog data to work with binary 0s and 1s, AlphaQubit retains the nuanced probability value of each measurement.
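To make the architecture discussion concrete, here is a deliberately tiny recurrent decoder that consumes a time series of soft (analog) syndrome values and outputs a logical-error probability. This is a generic sketch, not AlphaQubit's published model: the layer choice (a plain GRU rather than a recurrent transformer), the hidden size, and the input shapes are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TinySyndromeDecoder(nn.Module):
    """Minimal recurrent decoder sketch: reads a stream of soft
    syndrome readouts and predicts whether a logical error occurred.
    An illustrative stand-in, not AlphaQubit's architecture."""

    def __init__(self, n_detectors: int, hidden: int = 64):
        super().__init__()
        # Soft readouts: each detector value is a confidence in [0, 1],
        # not a hard 0/1 bit, so analog information is preserved.
        self.rnn = nn.GRU(input_size=n_detectors, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, soft_syndromes: torch.Tensor) -> torch.Tensor:
        # soft_syndromes: (batch, rounds, n_detectors), values in [0, 1]
        _, h = self.rnn(soft_syndromes)
        return torch.sigmoid(self.head(h[-1]))  # P(logical flip)

# Usage: 8 detectors, 25 QEC rounds, batch of 4 shots of dummy data.
decoder = TinySyndromeDecoder(n_detectors=8)
soft = torch.rand(4, 25, 8)   # analog readout confidences
print(decoder(soft))          # per-shot probability of a logical error
```

The recurrent state is what lets such a decoder accumulate evidence across an arbitrary number of error-correction rounds instead of seeing each round in isolation.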
Technical specifications from 2025 benchmarks on the Willow processor reveal the extent of this advantage. AlphaQubit achieved a 30% reduction in errors compared to the best traditional algorithmic decoders. More importantly, it demonstrated an error-suppression factor of 2.14x, meaning that each step up in the "distance" of the error-correcting code (from distance 3 to 5 to 7) cut the logical error rate by a further factor of about 2.14, an exponential suppression. This is a practical validation of the "Threshold Theorem," the holy grail of quantum error correction, which says that if physical error rates are kept below a certain threshold, quantum computers can be made arbitrarily large and arbitrarily reliable. The research community's reaction has evolved quickly: early critics in late 2024 pointed to a "latency bottleneck," arguing that AI models were too slow to correct errors in real time, but Google's 2025 integration of AlphaQubit into custom ASIC (Application-Specific Integrated Circuit) controllers answered that objection. By moving the AI inference directly onto the hardware controllers, Google has achieved real-time decoding at the microsecond speeds required for superconducting qubits, a feat that was once considered computationally impossible.
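To see what a suppression factor of 2.14 implies in practice, the short calculation below projects logical error rates as the code distance grows, dividing the rate by the factor for each distance step of two. The 2.14 figure comes from the benchmarks quoted above; the base error rate at distance 3 is a hypothetical placeholder, not a measured number.

```python
# Projected logical error rate under an error-suppression factor
# of 2.14 per distance step (d -> d + 2).
LAMBDA = 2.14
eps_3 = 3e-3  # assumed logical error rate at distance 3 (placeholder)

for d in (3, 5, 7, 9, 11):
    eps_d = eps_3 / LAMBDA ** ((d - 3) / 2)
    print(f"distance {d:2d}: logical error rate ~ {eps_d:.2e}")
```

Because each step multiplies the error rate by a constant factor below one, the decay is exponential in the distance, which is exactly the behavior the Threshold Theorem promises below threshold.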
Our mission is to build quantum computing for otherwise unsolvable problems. Marking a key step toward real-world applications, we've published a new breakthrough algorithm on our Willow quantum processor, Quantum Echoes, which demonstrates the first-ever verifiable quantum advantage. Willow, Google Quantum AI's latest state-of-the-art quantum chip, is a big step towards developing a large-scale, error-corrected quantum computer. Meanwhile, Microsoft scientists have developed a 4D geometric coding method that reduces errors 1,000-fold in quantum computers.
Computer scientists say they've cracked the science behind error correction in quantum computers thanks to new "4D codes." Developed by Microsoft, the new codes were revealed in a blog post published June 19 and purport to address the problem of fault tolerance, arguably quantum computing's biggest bottleneck. All computers can produce errors. In classical computing, error correction is achieved by making multiple copies of every bit of information that is sent: if one or more copies are lost or corrupted, the remaining copies still contain the original information, and a majority vote recovers it, as in the sketch below.
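As a concrete picture of that classical scheme, here is a minimal repetition-code example: encode a bit as several copies, flip each copy independently with some probability, and decode by majority vote. The copy count and flip probability are arbitrary illustrative values.

```python
from collections import Counter
import random

def encode(bit: int, n_copies: int = 3) -> list[int]:
    # Classical repetition code: store n identical copies of the bit.
    return [bit] * n_copies

def noisy_channel(codeword: list[int], p_flip: float = 0.1) -> list[int]:
    # Each copy is independently flipped with probability p_flip.
    return [b ^ (random.random() < p_flip) for b in codeword]

def decode(codeword: list[int]) -> int:
    # Majority vote succeeds whenever fewer than half the copies flipped.
    return Counter(codeword).most_common(1)[0][0]

random.seed(0)
sent = 1
received = noisy_channel(encode(sent, n_copies=5))
print(received, "->", decode(received))
```

Quantum error correction cannot use this scheme directly, since the no-cloning theorem forbids copying an unknown qubit, which is why quantum codes spread information across entangled qubits and infer errors from syndrome measurements instead.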
Over the past few years, and most notably in 2023, quantum error correction has made big strides, shifting the community's focus from noisy applications to what can be achieved with early error-corrected quantum computers (Nature Reviews Physics 6, 160–161 (2024)). But despite the breakthroughs in experiments with trapped ions, superconducting circuits, and reconfigurable atom arrays, there are still several technological challenges, unique to each platform, to overcome. In superconducting qubits, the suppression of logical error rates with increasing code size has been reported. In reconfigurable atom arrays, fault-tolerant logic over hundreds of physical qubits has been demonstrated.
On the theory side, improvements are making QEC more hardware-efficient, and the first real-time decoders fast enough for all qubit types are emerging.