These Physicists Believe Quantum Computers Will Never Work
Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Quantum computing’s advantage depends on scaling entanglement and maintaining coherence, but large-scale entanglement has not been directly measured at the needed scale.
Briefing
Quantum computers may never deliver the promised computational advantage: the physics needed to scale them up, especially sustained entanglement and coherence, could fail under realistic noise or under a deeper theory that replaces quantum mechanics. The skepticism is a minority view, but it matters because it targets the core bottleneck: even if small prototypes behave “quantum enough,” there is no direct experimental evidence that large-scale quantum behavior will persist long enough to run practical algorithms.
Entanglement sits at the center of why quantum machines are expected to outperform conventional computers on certain tasks. Quantum bits (qubits) can exploit correlations that classical computers cannot efficiently reproduce, enabling speedups on problems believed to be intractable classically. The business case follows: once devices become large enough, difficult calculations could be performed far faster, turning theoretical speedups into products and services. Yet the criticism begins with an uncomfortable gap: large entanglement has never been measured at the scale required for useful quantum computing, and quantum effects are known to fade as systems grow. The mechanism for that fading remains poorly understood, leaving open the possibility that scaling will break the very assumptions behind quantum advantage.
One line of attack focuses on noise. Mathematician Gil Kalai argues that quantum computers face inevitable noise that prevents them from ever reaching a true advantage over classical machines. Physics professor Robert Alicki similarly contends that when noise is modeled realistically, error correction becomes impossible. Computer scientist Leonid Levin adds a coherence-specific concern: maintaining coherence at sufficiently high precision may be thwarted by tiny, unavoidable disturbances such as those induced by neutrinos or gravitational waves. Notably, these critics are not dismissed as random outsiders; they hold relevant expertise. Still, the skepticism hasn’t become mainstream, partly because the arguments lack quantitative predictions that map cleanly onto engineering targets.
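The force of the noise argument comes from how per-gate errors compound. A back-of-envelope sketch, not from the video; the 10^-3 per-gate error rate and the gate counts below are illustrative assumptions:

```python
# Illustrative arithmetic: if each gate fails independently with
# probability p, a circuit of n gates succeeds with probability
# roughly (1 - p)^n. The 1e-3 rate is an assumed, roughly
# current figure for good hardware, not a number from the video.
def circuit_fidelity(p_gate: float, n_gates: int) -> float:
    """Probability that every gate in the circuit succeeds."""
    return (1.0 - p_gate) ** n_gates

for n_gates in (1_000, 100_000, 1_000_000):
    print(f"{n_gates:>9} gates -> fidelity {circuit_fidelity(1e-3, n_gates):.3g}")
```

On these assumptions, fidelity is already under 40% after a thousand gates and effectively zero after a million, which is why the whole debate centers on whether error correction can work under realistic noise, exactly the step Alicki's critique targets.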
A second category of skepticism questions whether quantum mechanics itself is fundamental. Stephen Wolfram suggests the world is fundamentally discrete, implying quantum computers won’t outperform classical ones in his framework. Gerard ’t Hooft’s cellular automaton theory also treats quantum physics as step-by-step discrete dynamics, predicting that factoring numbers with millions of digits into prime factors won’t be feasible. Tim Palmer goes further with a quantitative ceiling: if quantum physics is ultimately discrete, quantum computation can’t exceed roughly 500 to 1,000 logical qubits. Since many commercial estimates place useful applications around 100 to 150 logical qubits, that would leave only a narrow window.
Finally, modified quantum mechanics models—such as spontaneous localization and Penrose’s collapse model—could impose physical limits on coherence. Spontaneous localization estimates suggest that a device with about a million superconducting qubits would have a decoherence time around a millisecond, potentially spoiling practical algorithms. For Penrose’s collapse model, an estimate cited in the transcript suggests gravitationally induced collapse wouldn’t show up until around 10^18 superconducting qubits or more.
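A quick way to see why a millisecond-scale decoherence window would matter is to count how many sequential gates fit inside it. A hedged sketch: the ~50 ns gate time is an assumed, typical superconducting-hardware figure and is not from the video; only the ~1 ms window comes from the spontaneous-localization estimate above.

```python
# Back-of-envelope: sequential gates that fit in a coherence window.
GATE_TIME_S = 50e-9    # assumed ~50 ns per gate (typical superconducting scale)
COHERENCE_S = 1e-3     # ~1 ms window from the spontaneous-localization estimate

gates_in_window = round(COHERENCE_S / GATE_TIME_S)
print(f"~{gates_in_window:,} sequential gates fit in {COHERENCE_S * 1e3:.0f} ms")
```

On these assumptions only about 20,000 sequential operations fit in the window, while algorithms such as large-number factoring are expected to need circuit depths orders of magnitude beyond that.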
Overall, the skepticism remains a minority position, but it’s framed as worth knowing because fringe ideas in science, like continental drift and jumping genes in earlier eras, can later prove correct. The takeaway is less “quantum computers will fail” than “the path to scaling is unproven,” and multiple theoretical mechanisms could, in principle, keep prototype behavior from ever translating into real-world advantage.
Cornell Notes
Quantum computing’s promise depends on scaling up entanglement and maintaining coherence long enough to run useful algorithms. The transcript highlights a minority of researchers who think that scaling may fail because large entanglement has never been directly verified and quantum effects tend to diminish as systems get larger. Noise-based critiques argue that unavoidable disturbances make error correction ineffective or prevent a lasting quantum advantage. Other skeptics propose that quantum mechanics may not be fundamental—discrete underlying dynamics or wave-function collapse models could cap how many logical qubits can be used or shorten coherence times. Even without consensus, the arguments matter because they target the engineering and physical assumptions behind quantum advantage.
Why do quantum computers rely on entanglement, and what does that imply for scalability?
What are the main noise-based reasons some researchers think quantum advantage may be impossible?
How do discrete-foundation theories challenge quantum computing’s expected speedups?
What do spontaneous localization and Penrose’s collapse model predict about coherence limits?
Why does the transcript emphasize that skepticism is a minority view?
Review Questions
- Which physical assumption behind quantum advantage is hardest to validate experimentally, and why does that create room for skepticism?
- Compare the noise-based critiques (Kalai, Alicki, Levin) with the discrete-foundation critiques (Wolfram, ’t Hooft, Palmer): what kind of failure mode does each category predict?
- How do spontaneous localization and Penrose’s collapse model differ in their implied device-size thresholds for breaking quantum computation?
Key Points
1. Quantum computing’s advantage depends on scaling entanglement and maintaining coherence, but large-scale entanglement has not been directly measured at the needed scale.
2. Multiple skeptics argue that realistic noise could prevent a sustained quantum advantage by undermining error correction or coherence.
3. Noise-focused critiques include claims of inevitable noise (Gil Kalai), failure of error correction under realistic noise (Robert Alicki), and coherence limits from unavoidable disturbances like neutrinos or gravitational waves (Leonid Levin).
4. Some skeptics argue quantum mechanics may not be fundamental; discrete underlying dynamics could cap computational power or make tasks like large-number factoring infeasible.
5. Discrete-foundation estimates include a hard ceiling on logical qubits of roughly 500 to 1,000 (Tim Palmer), which would narrow the window for useful applications.
6. Modified quantum mechanics models such as spontaneous localization and Penrose’s collapse model introduce physical collapse processes that could shorten coherence times or set effective size thresholds.
7. Even as a minority view, the skepticism is treated as worth knowing because key assumptions remain experimentally untested at the scale required for practical quantum advantage.