Researchers at the University of Southern California announced in late 2024 what they describe as the first unconditional, exponential quantum speedup demonstrated on real hardware, a development that experts say could reshape the long-running debate over whether quantum machines can deliver tangible advantages over classical computers in the near term. The team, led by physicist Daniel Lidar, published findings showing that a quantum algorithm running on IBM’s Eagle and Heron processors solved a variant of Simon’s problem dramatically faster than any known classical method — and crucially, they argue, the speedup does not rely on unproven mathematical conjectures.
What the Researchers Did
The work, published in Physical Review X and discussed widely across the computing community, focuses on a refined version of Simon’s problem, a foundational benchmark in quantum information theory introduced by Daniel Simon in 1994. Simon’s problem famously inspired Peter Shor’s celebrated factoring algorithm, which threatens modern cryptography. While Simon’s original construction proved a quantum advantage in an idealized “oracle” model, Lidar’s group demonstrated that even with realistic hardware noise — and after applying error mitigation and dynamical decoupling techniques — the quantum advantage survives.
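For readers who want the structure spelled out: in Simon’s problem, a hidden n-bit string s defines a function f with the promise that f(x) = f(y) exactly when y = x or y = x ⊕ s, and the task is to recover s using only queries to f. The sketch below is purely illustrative and is not the USC team’s code or circuits; the names and the toy oracle are this article’s own. It runs the natural classical attack, a birthday-style collision search, to show why the classical query count grows exponentially with n.

    # Illustrative toy only (assumed names, not the USC team's code): build a
    # Simon oracle for a hidden n-bit string s, then recover s classically by
    # searching for a colliding pair of inputs.
    import random

    def make_simon_oracle(n, s):
        """Random function f on n-bit inputs satisfying f(x) == f(x ^ s)."""
        outputs = list(range(2 ** n))
        random.shuffle(outputs)
        f, next_out = {}, 0
        for x in range(2 ** n):
            if x not in f:
                f[x] = f[x ^ s] = outputs[next_out]
                next_out += 1
        return f

    def classical_collision_search(f, n):
        """Query random inputs until two distinct inputs share an output.
        By the birthday bound this takes on the order of 2**(n/2) queries."""
        seen = {}  # maps an observed output to the input that produced it
        queries = 0
        while True:
            x = random.randrange(2 ** n)
            queries += 1
            y = f[x]
            if y in seen and seen[y] != x:
                return seen[y] ^ x, queries  # the colliding pair differs by s
            seen[y] = x

    n, s = 10, 0b1011001101
    oracle = make_simon_oracle(n, s)
    recovered, used = classical_collision_search(oracle, n)
    print(f"recovered s = {recovered:0{n}b} after {used} oracle queries")
    # Simon's quantum algorithm needs only O(n) oracle queries for the same task.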
According to USC’s announcement of the research, the team executed the algorithm on 127-qubit and 133-qubit IBM superconducting processors. The classical comparison required exponentially more steps as the problem size grew, while the quantum runtime scaled only polynomially. “The quantum advantage we found is not based on any unproven assumptions,” Lidar said in the university statement, distinguishing the result from earlier “quantum supremacy” claims that depended on conjectures about classical hardness.
Why This Matters
For more than a decade, the field has been haunted by a recurring pattern: a quantum computing milestone is announced, followed weeks or months later by a classical algorithm that matches or surpasses it. Google’s 2019 supremacy claim using its Sycamore processor, for instance, was later challenged by improved classical simulations from teams in China and at the Flatiron Institute. The Chinese Jiuzhang photonic experiment faced similar scrutiny.
What makes the USC result notable, independent commentators have argued, is its mathematical rigor. The speedup is “unconditional” in the sense that it does not depend on conjectures like P ≠ NP or assumptions about the difficulty of simulating random circuits. Instead, it leverages the structure of Simon’s problem, where the gap between quantum and classical query complexity can be proven exactly. Scott Aaronson, a leading theoretical computer scientist at the University of Texas at Austin who has written extensively about quantum complexity on his blog Shtetl-Optimized, has long emphasized that distinguishing genuine quantum advantage from artifacts of weak classical comparisons requires precisely this kind of provable separation.
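To put numbers on that separation (these are the standard query-complexity bounds from the theory literature, not measured figures from the USC experiment), for a hidden string of length n:

    classical, even randomized:   Ω(2^(n/2)) oracle queries are required to find s
    quantum (Simon’s algorithm):  O(n) oracle queries suffice, followed by solving a
                                  small linear system over GF(2) to read off s

Because both sides of that comparison are theorems in the query model, no amount of classical algorithmic ingenuity can close the gap there, which is precisely the work the word “unconditional” is doing in the USC claim.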
The Broader Stakes
The implications extend beyond academic bragging rights. Governments and private firms have collectively poured tens of billions of dollars into quantum research, with the United States, China, and the European Union each running national quantum initiatives. Companies including IBM, Google, IonQ, and Quantinuum are racing to scale qubit counts while reducing error rates. A demonstrably useful quantum advantage — even on a contrived problem — provides important validation that the technology is on a meaningful trajectory.
Skeptics caution, however, that Simon-style problems are still far from the commercially valuable applications that have been promised in chemistry simulation, materials science, optimization, and machine learning. The current result is a proof of principle on a structured oracle problem, not a breakthrough in factoring large numbers or designing pharmaceuticals. Translating provable speedups into economic value remains an open challenge, and most experts agree that fault-tolerant quantum computing — requiring millions of physical qubits — is still years, possibly more than a decade, away.
What to Watch Next
The next benchmarks to monitor include IBM’s planned scaling roadmap toward its Kookaburra and Blue Jay processors, Google’s progress on logical qubits following its December 2024 announcement that error rates fall as code distance grows, and continued efforts by classical algorithm designers to find clever workarounds. If the USC result holds up under scrutiny — and if similar provable advantages emerge for problems with industrial relevance — 2025 may be remembered as the year quantum computing graduated from speculative promise to demonstrable, if narrow, technological reality. For now, the burden has shifted: classical computer scientists must either accept the speedup or find a flaw, and the mathematics so far appears to be on the quantum side.