
These Physicists Believe Quantum Computers Will Never Work

Sabine Hossenfelder · 5 min read

Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Quantum computing’s advantage depends on scaling entanglement and maintaining coherence, but large-scale entanglement has not been directly measured at the needed scale.

Briefing

Quantum computers may never deliver the promised computational advantage because the physics needed to scale them up—especially sustained entanglement and coherence—could fail under realistic noise, or because quantum mechanics itself may rest on different foundations than currently assumed. The skepticism is a minority view, but it matters because it targets the core bottleneck: even if small prototypes behave “quantum enough,” there’s no direct experimental evidence that large-scale quantum behavior will persist long enough to run practical algorithms.

Entanglement sits at the center of why quantum machines are expected to outperform conventional computers on certain tasks. Quantum bits (qubits) can exploit correlations that standard computers cannot reproduce, enabling operations that are impossible in classical computation. The business case follows: once devices become large enough, difficult calculations could be performed far faster, turning theoretical speedups into products and services. Yet the criticism begins with an uncomfortable gap—large entanglement has never been measured at the scale required for useful quantum computing, and quantum effects are known to fade as systems grow. The mechanism for that fading remains poorly understood, leaving open the possibility that scaling will break the very assumptions behind quantum advantage.
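To make “correlations that standard computers cannot reproduce” concrete, here is a minimal sketch (an illustration on our part, not something from the video) of the CHSH quantity: any classical, local-hidden-variable strategy is bounded by |S| ≤ 2, while measurements on a maximally entangled pair of qubits reach 2√2 ≈ 2.83.

```python
# Minimal sketch (illustrative, not from the video): CHSH correlations for the
# Bell state |Phi+>. The quantum prediction for measurements along angles x and
# y in the X-Z plane is E(x, y) = cos(x - y); classical strategies obey |S| <= 2.
import numpy as np

def chsh_value(a: float, a2: float, b: float, b2: float) -> float:
    E = lambda x, y: np.cos(x - y)          # quantum correlation for |Phi+>
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Optimal measurement angles for the entangled state
S = chsh_value(0.0, np.pi / 2, np.pi / 4, -np.pi / 4)
print(f"Quantum CHSH value: {S:.3f}  (classical bound: 2.000)")  # ~2.828
```

The scaling worry described above is that this kind of correlation has only been verified for small numbers of qubits, not at the register sizes practical algorithms would need.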

One line of attack focuses on noise. Mathematician and computer scientist Gil Kalai argues that quantum computers face inevitable noise that prevents them from ever reaching a true advantage over classical machines. Physics professor Robert Alicki similarly contends that when noise is modeled realistically, error correction becomes impossible. Computer scientist Leonid Levin adds a coherence-specific concern: maintaining coherence at sufficiently high precision may be thwarted by tiny, unavoidable disturbances such as those induced by neutrinos or gravitational waves. Notably, these critics are not dismissed as random outsiders; they hold relevant expertise. Still, the skepticism hasn’t become mainstream, partly because the arguments lack quantitative predictions that map cleanly onto engineering targets.
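To see why the noise critiques target error correction specifically, here is a rough, hedged sketch of the standard threshold picture (the threshold value and code distance below are illustrative assumptions, not figures from the video): if the physical error rate p stays below a threshold p_th, increasing the code distance suppresses logical errors; if realistic noise keeps p above p_th, as these critics argue it might, adding qubits stops helping.

```python
# Hedged sketch of the threshold-theorem rule of thumb for surface-code-style
# error correction: logical error per round ~ (p / p_th)^((d + 1) / 2).
# p_th = 1% and d = 11 are illustrative assumptions, not device measurements.
def logical_error_rate(p: float, p_th: float = 0.01, d: int = 11) -> float:
    # clamp at 1.0: values at or above 1 just mean the suppression has failed
    return min(1.0, (p / p_th) ** ((d + 1) / 2))

for p in (0.001, 0.005, 0.02):  # below, near, and above the assumed threshold
    print(f"physical error {p:.3f} -> logical error ~ {logical_error_rate(p):.2e}")
```

The skeptical claim, in this framing, is that correlated or otherwise realistic noise effectively keeps devices on the wrong side of the threshold no matter how they are engineered.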

A second category of skepticism questions whether quantum mechanics itself is fundamental. Stephen Wolfram suggests the world is fundamentally discrete, implying quantum computers won’t outperform classical ones in his framework. Gerard ’t Hooft’s cellular automaton interpretation likewise treats quantum physics as step-by-step discrete dynamics, predicting that factoring numbers with millions of digits into prime factors won’t be feasible. Tim Palmer goes further with a quantitative ceiling: if quantum physics is ultimately discrete, quantum computation can’t exceed roughly 500 to 1,000 logical qubits. Since many commercial estimates place useful applications around 100 to 150 logical qubits, that would leave only a narrow window.

Finally, modified quantum mechanics models—such as spontaneous localization and Penrose’s collapse model—could impose physical limits on coherence. Spontaneous localization estimates suggest that a device with about a million superconducting qubits would have a decoherence time around a millisecond, potentially spoiling practical algorithms. For Penrose’s collapse model, an estimate cited in the transcript suggests gravitationally induced collapse wouldn’t show up until around 10^18 superconducting qubits or more.
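As a back-of-envelope illustration of how such collapse-model estimates scale (an assumed 1/N scaling on our part, not a calculation from the video): if each qubit carries an independent effective collapse rate λ, a register of N qubits decoheres at roughly Nλ, and calibrating to the cited figure of about a millisecond at a million qubits implies λ on the order of 10⁻³ per second per qubit in this crude picture.

```python
# Back-of-envelope sketch (assumed 1/N scaling, not from the video): coherence
# time of an N-qubit register if each qubit has an independent effective
# collapse rate lam. Calibrated so that N = 1e6 gives ~1 ms, matching the
# spontaneous-localization estimate cited in the transcript.
def register_coherence_time_s(n_qubits: int, lam_per_qubit: float = 1e-3) -> float:
    return 1.0 / (n_qubits * lam_per_qubit)

print(register_coherence_time_s(10**6))  # ~1e-3 s (a millisecond) at a million qubits
print(register_coherence_time_s(10**3))  # ~1 s at a thousand qubits under the same rate
```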

Overall, the skepticism remains a minority position, but it’s framed as worth knowing because fringe ideas in science—like continental drift in earlier eras—can later prove correct. The takeaway is less “quantum computers will fail” than “the path to scaling is unproven,” and multiple theoretical mechanisms could, in principle, prevent prototype behavior from ever translating into real-world advantage.

Cornell Notes

Quantum computing’s promise depends on scaling up entanglement and maintaining coherence long enough to run useful algorithms. The transcript highlights a minority of researchers who think that scaling may fail because large entanglement has never been directly verified and quantum effects tend to diminish as systems get larger. Noise-based critiques argue that unavoidable disturbances make error correction ineffective or prevent a lasting quantum advantage. Other skeptics propose that quantum mechanics may not be fundamental—discrete underlying dynamics or wave-function collapse models could cap how many logical qubits can be used or shorten coherence times. Even without consensus, the arguments matter because they target the engineering and physical assumptions behind quantum advantage.

Why do quantum computers rely on entanglement, and what does that imply for scalability?

Quantum computers use quantum bits (qubits) and exploit entanglement—special correlations that classical computers can’t replicate. That entanglement enables certain mathematical operations that standard computation cannot perform. The scalability concern is that large-scale entanglement hasn’t been measured at the sizes needed for practical devices, and quantum effects are known to fade as systems become larger, with the underlying reason still not fully understood.

What are the main noise-based reasons some researchers think quantum advantage may be impossible?

Gil Kalai argues inevitable noise prevents quantum computers from reaching a true advantage. Robert Alicki contends that realistic noise models make error correction impossible. Leonid Levin focuses on coherence: maintaining it at high precision may be impossible due to tiny unavoidable disturbances, including those from neutrinos or gravitational waves. A key caveat raised in the transcript is that these arguments have not produced quantitative predictions that map directly onto engineering feasibility, which limits their influence.

How do discrete-foundation theories challenge quantum computing’s expected speedups?

Stephen Wolfram proposes a fundamentally discrete world, predicting quantum computers won’t outperform classical ones in his model. Gerard ’t Hooft’s cellular automaton interpretation treats quantum physics as discrete step-by-step dynamics and claims factoring numbers with millions of digits into prime factors won’t be possible. Tim Palmer similarly argues discreteness imposes a hard limit, estimating that quantum computation can’t go beyond about 500 to 1,000 logical qubits—only a narrow range above typical commercial estimates of ~100–150 logical qubits for useful applications.

What do spontaneous localization and Penrose’s collapse model predict about coherence limits?

Spontaneous localization treats wave-function collapse as a real physical process. The transcript cites an estimate that a quantum computer with about a million superconducting qubits would have a decoherence time of roughly a millisecond, potentially ruining practical algorithms on large devices. For Penrose’s collapse model, an estimate mentioned suggests gravitationally induced collapse wouldn’t become evident until around 10^18 superconducting qubits or more.

Why does the transcript emphasize that skepticism is a minority view?

Most physicists are portrayed as not believing the skepticism is justified. The transcript also notes that scientific history contains examples where once-fringe ideas later proved correct, so even minority skepticism can be worth tracking—especially when the underlying assumptions (like large entanglement and long coherence) remain experimentally unverified at scale.

Review Questions

  1. Which physical assumption behind quantum advantage is hardest to validate experimentally, and why does that create room for skepticism?
  2. Compare the noise-based critiques (Kalai, Alicki, Levin) with the discrete-foundation critiques (Wolfram, ’t Hooft, Palmer): what kind of failure mode does each category predict?
  3. How do spontaneous localization and Penrose’s collapse model differ in their implied device-size thresholds for breaking quantum computation?

Key Points

  1. Quantum computing’s advantage depends on scaling entanglement and maintaining coherence, but large-scale entanglement has not been directly measured at the needed scale.
  2. Multiple skeptics argue that realistic noise could prevent a sustained quantum advantage by undermining error correction or coherence.
  3. Noise-focused critiques include claims of inevitable noise (Gil Kalai), failure of error correction under realistic noise (Robert Alicki), and coherence limits from unavoidable disturbances like neutrinos or gravitational waves (Leonid Levin).
  4. Some skeptics argue quantum mechanics may not be fundamental; discrete underlying dynamics could cap computational power or make tasks like large-number factoring infeasible.
  5. Discrete-foundation estimates include a hard ceiling on logical qubits of roughly 500 to 1,000 (Tim Palmer), which would narrow the window for useful applications.
  6. Modified quantum mechanics models such as spontaneous localization and Penrose’s collapse model introduce physical collapse processes that could shorten coherence times or set effective size thresholds.
  7. Even as a minority view, the skepticism is treated as worth knowing because key assumptions remain experimentally untested at the scale required for practical quantum advantage.

Highlights

Large-scale entanglement remains unmeasured, while quantum effects are known to diminish as systems grow—leaving the scaling problem unresolved.
Noise-based skepticism targets error correction and coherence, with claims that inevitable disturbances could block any lasting quantum advantage.
Discrete-foundation theories go beyond “engineering difficulty,” proposing hard limits on logical qubits or the feasibility of factoring very large numbers.
Spontaneous localization estimates suggest millisecond-scale decoherence for devices with about a million superconducting qubits, while Penrose-style collapse might not show up until around 10^18 superconducting qubits.
The transcript frames the skepticism as minority but potentially important, drawing a parallel to once-fringe scientific ideas that later proved correct.

Topics

  • Quantum Computing Skepticism
  • Entanglement Scaling
  • Quantum Noise and Error Correction
  • Discrete Quantum Models
  • Wave-Function Collapse Models

Mentioned

  • Gil Kalai
  • Robert Alicki
  • Leonid Levin
  • Stephen Wolfram
  • Gerard ’t Hooft
  • Tim Palmer