Google And IBM Latest To Claim Quantum Race Is Nearing End

Aim is for full-scale systems by decade's end.

Tech giants claim that the decades-long race to build a workable quantum computer may finally be entering its home stretch.

A series of recent breakthroughs has convinced industry leaders Google and IBM that full-scale, industrial-grade quantum machines could be within reach before 2030.

In June, IBM became the latest to claim a clear path to the prize, unveiling a new blueprint that it says fills in critical gaps left in earlier designs. The company believes its quantum computers could solve problems far beyond the reach of today's classical machines, with potential applications in fields such as AI and materials science.

"It doesn't feel like a dream anymore," Jay Gambetta, head of IBM's quantum initiative, told the FT.

"I really do feel like we've cracked the code and we'll be able to build this machine by the end of the decade."

Google, which last year cleared what it called one of the largest remaining scientific hurdles, is equally confident.

"All the [remaining] engineering and scientific challenges are surmountable," said Julian Kelly, head of hardware at Google Quantum AI.

Both companies are now shifting from solving the hardest scientific puzzles to tackling more "routine-sounding" but still formidable engineering tasks needed to scale the technology.

However, not everyone is convinced the finish line is so close.

Oskar Painter, head of quantum hardware at AWS, predicts that a truly useful quantum computer may still be 15 to 30 years away, warning that the industrialization process should not be underestimated.

Currently, leading experimental systems use fewer than 200 qubits, but reaching industrial scale will require around 1 million or more. Scaling brings unique challenges, especially the instability of qubits, which maintain their quantum states for only fractions of a second. This instability produces "noise" that grows as systems expand in size, undermining performance.

IBM's own Condor chip, which expanded to 433 qubits, suffered from "crosstalk" interference between components. IBM says it anticipated the problem and has since adopted a new type of coupler to reduce susceptibility.

In early systems, individual qubits were fine-tuned to boost performance, an approach far too costly and complex for large-scale machines. Both Google and IBM aim to create more reliable, mass-manufacturable components, with Google targeting a tenfold cost reduction to keep a full-scale machine's price near $1 billion.

Error correction is another critical frontier. This technique spreads data redundantly across multiple physical qubits, allowing systems to detect and tolerate imperfections. So far, only Google has demonstrated a chip where error correction improves as size increases.
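The redundancy idea can be illustrated with a toy classical repetition code — a much simpler scheme than Google's surface code or IBM's parity-check approach, but it shows the same principle: copying information across several noisy components and taking a majority vote yields a lower logical error rate than any single component. The function names below are illustrative, not drawn from any real quantum software stack.

```python
import random

def encode(bit, n=3):
    """Encode one logical bit as n physical copies (repetition code)."""
    return [bit] * n

def add_noise(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    """Majority vote recovers the most likely logical bit."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def logical_error_rate(p, n=3, trials=50_000):
    """Estimate how often the decoded bit differs from the encoded one."""
    errors = 0
    for _ in range(trials):
        if decode(add_noise(encode(0, n), p)) != 0:
            errors += 1
    return errors / trials

random.seed(0)
# With a 10% physical error rate, the logical error rate is lower,
# and it falls further as redundancy n grows -- the effect the article
# describes, at the cost of many more physical components per logical bit.
print(logical_error_rate(0.10, n=3))
print(logical_error_rate(0.10, n=7))
```

Real quantum codes are far subtler (qubits cannot simply be copied, so redundancy is achieved through entanglement and parity measurements), but the trade-off is the same one driving the qubit counts in the article: each reliable logical qubit consumes many noisy physical ones.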

IBM is taking a different route, favoring a "low-density parity-check" code that it says needs 90 percent fewer qubits than Google's "surface code" method. However, it relies on longer connections between distant qubits – a difficult feat that Google's Kelly says adds complexity to already hard-to-control systems. IBM claims to have achieved this milestone last month.

Following forecasts that the first fully operational quantum computer could emerge by the decade's end, Chris Erven, CEO and co-founder of KETS Quantum Security, has warned that such technology in the hands of a hostile state could threaten critical infrastructure and sensitive information.

"Quantum computing threatens to break the cryptographic safeguards that secure everything from banking passwords to encrypted communications across hospitals, energy networks, and military systems," Erven said.

"Ignoring this risk puts public and private data security in jeopardy. The time has come for quantum-safe technology to be the heart of cybersecurity strategies globally. It must be embedded into the communications systems used across the world. Only by doing so will we truly be ready for the impending arrival of Q-Day."

This article originally appeared on our sister site Computing.