Why This Nobel Prize Winner Thinks Quantum Mechanics is Nonsense

Sabine Hossenfelder · 6 min read

Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

’t Hooft’s alternative aims to restore determinism and locality by removing wave-function collapse as a fundamental mechanism.

Briefing

Gerard ’t Hooft’s alternative to standard quantum mechanics replaces probabilistic measurement outcomes with a fully deterministic framework—at the cost of introducing “superdeterminism,” where experimental choices are correlated with the system being measured from the universe’s earliest conditions. The central dispute is not whether quantum experiments violate Bell-type inequalities (they do), but what assumptions must be true for those inequalities to hold. ’t Hooft argues that one key assumption—free, independent choice of measurement settings—cannot hold in a deterministic, local theory.

In conventional quantum mechanics, systems are described by a wave function (often written as ψ) that encodes probabilities rather than definite properties. Measurement forces an update commonly called “collapse,” which effectively happens everywhere at once, a feature that bothered Einstein because it appears to evade locality. ’t Hooft’s route back to determinism is to remove collapse as a fundamental ingredient: the outcomes are fixed by underlying laws, but observers can’t predict them because they lack information about the relevant initial state.
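
To make the standard picture concrete, here is a minimal sketch of the Born rule and the textbook collapse update in Python with NumPy (an illustration, not part of the video; the state and its amplitudes are made up):

```python
import numpy as np

# A two-state system: psi encodes amplitudes, not definite properties.
# Born rule: the probability of outcome i is |psi_i|^2.
psi = np.array([3 / 5, 4j / 5])      # normalized: (3/5)^2 + (4/5)^2 = 1
probs = np.abs(psi) ** 2             # -> [0.36, 0.64]

# "Collapse" is the textbook update rule: after observing outcome i,
# the state is replaced by the corresponding basis state, everywhere
# at once -- the feature that bothered Einstein.
outcome = np.random.choice(len(psi), p=probs)
psi_after = np.zeros_like(psi)
psi_after[outcome] = 1.0
print(probs, outcome, psi_after)
```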

That determinism collides with Bell’s theorem only if measurement settings are treated as freely chosen and statistically independent of the hidden variables governing the system. Bell’s theorem constrains correlations in any theory that is both local and deterministic (plus extra assumptions). Experiments violate the resulting inequality, leading most physicists to conclude that no local deterministic theory can match reality. ’t Hooft’s counter is that the “extra assumptions” fail: in a deterministic universe, what experimenters choose is also determined, and therefore the measurement settings can be correlated with the system’s state. He emphasizes that this is not causal influence in the usual sense—rather, the correlations are already encoded in the initial conditions and must always line up with the measurement settings.
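
The arithmetic behind that bound can be checked directly. The sketch below (my illustration, assuming the standard CHSH form of Bell's theorem, which the video references only as "Bell-type inequalities") enumerates every local deterministic strategy and confirms none exceeds 2, while quantum mechanics predicts up to 2√2:

```python
import itertools, math

# CHSH: a local deterministic hidden variable fixes all four outcomes
# A(a), A(a'), B(b), B(b') in {-1, +1}, each side independent of the
# other side's setting. S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
best = 0.0
for A0, A1, B0, B1 in itertools.product([-1, 1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    best = max(best, abs(S))

print(best)              # 2.0 -- the Bell/CHSH bound for local determinism
print(2 * math.sqrt(2))  # ~2.828 -- the quantum value seen in experiments
```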

To ’t Hooft, the deeper issue is interpretational. The wave function used in standard quantum mechanics is a predictive tool, not the real state of affairs. He distinguishes between the usual wave function and an “ontological” wave function that is treated as real and yields 100% probabilities for specific outcomes. In this picture, classical-looking states—like a dead cat or a live cat—are real, while superposed combinations (dead-and-alive) are not.

The proposal is tied to a specific underlying model: the cellular automaton interpretation. The universe is imagined as built from extremely small, discrete, deterministic components evolving in discrete time steps with nearest-neighbor interactions, with emergent particles and even space arising from these local rules. ’t Hooft links this to the Planck scale (around 10^-33 cm) and draws a practical implication for quantum computing: because the ontological states available in such a model are fewer than standard quantum theory assumes, certain tasks—he singles out factoring large numbers—would not be achievable by quantum computers in the way current theory predicts. If engineers ever succeed at factoring million-digit numbers using quantum computers, he suggests the cellular automaton framework would be falsified.
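
The video does not spell out ’t Hooft's automaton, but the flavor of "discrete, deterministic, nearest-neighbor" dynamics is easy to show with a generic elementary cellular automaton (Rule 110 here, chosen only for illustration):

```python
# A 1D elementary cellular automaton: each cell updates from its own
# state and its two nearest neighbors, in discrete time steps. This is
# a generic example, not 't Hooft's actual model.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40
cells[20] = 1                  # one "on" cell as the initial condition
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)        # fully deterministic, purely local update
```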

The critique offered in the video is that the cellular automaton component may be underdeveloped relative to the symmetry demands of Einstein’s theories, and that the ontological states—central to making the interpretation operational—are not specified in a way that lets outsiders determine what they are or how to use them. Still, the deterministic, local, superdeterministic reinterpretation remains a provocative attempt to reconcile quantum predictions with a classical-style reality—one that has largely been ignored despite ’t Hooft’s Nobel Prize–winning work in particle physics.

Cornell Notes

Gerard ’t Hooft argues for a deterministic, local foundation beneath quantum mechanics by eliminating wave-function collapse as a fundamental process. In his view, standard quantum mechanics uses a wave function that is not the real state; it only encodes probabilities because observers lack information. The price is “superdeterminism”: measurement settings are correlated with the system’s state through initial conditions, undermining the “free choice” assumption used in Bell’s theorem arguments. ’t Hooft connects this to a cellular automaton interpretation where the universe evolves via discrete, nearest-neighbor rules at the Planck scale. He further claims quantum computers would eventually hit limits—specifically, factoring million-digit numbers would be impossible—so successful large-scale factoring would challenge his framework.

How does ’t Hooft’s deterministic quantum mechanics differ from standard quantum mechanics’ probabilistic wave-function collapse?

Standard quantum mechanics treats ψ as a state that yields probabilities for measurement outcomes; when a measurement occurs, the wave function is updated via “collapse” (or reduction), effectively producing definite results from probabilistic ones. ’t Hooft’s approach removes collapse as fundamental by insisting the outcomes are already fixed by deterministic laws. The randomness then comes from ignorance of the relevant initial conditions, not from indeterminism in nature.
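
A loose analogy (mine, not the video's): a seeded pseudorandom generator is fully deterministic, yet its output looks random to anyone who does not know the seed, which here plays the role of the unknown initial conditions:

```python
import random

def deterministic_outcomes(seed, n=10):
    rng = random.Random(seed)              # everything follows from the seed
    return [rng.choice([-1, +1]) for _ in range(n)]

print(deterministic_outcomes(42))          # looks random without the seed...
print(deterministic_outcomes(42))          # ...but is exactly reproducible
print(deterministic_outcomes(7))           # different "initial conditions"
```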

Why does Bell’s theorem become a focal point, and what role does “free choice” play in the usual argument?

Bell’s theorem constrains correlations predicted by any theory that is local and deterministic, provided additional assumptions hold. One crucial assumption is that experimenters can freely choose measurement settings, meaning those choices are independent of the hidden variables governing the system. Experiments violate Bell inequalities, which typically leads to the conclusion that local deterministic theories cannot work. ’t Hooft challenges the independence/free-choice assumption rather than locality or determinism.

What is “superdeterminism” in ’t Hooft’s framing?

Superdeterminism is the claim that, in a fully deterministic universe, the experimenters’ measurement choices are also determined. Therefore, the measurement settings can be correlated with the system’s hidden state. ’t Hooft stresses this is not ordinary causal signaling where one side influences the other; instead, the correlations are already present in the initial state of the universe and must match whatever settings are later selected.
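
A toy numerical illustration of the loophole (my construction, not ’t Hooft's, and stochastic rather than deterministic for brevity): once the hidden variable is allowed to be statistically correlated with the settings, a local model can reproduce the quantum singlet correlation E(a, b) = -cos(a - b), which no settings-independent local model can match:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_run(a, b, n=200_000):
    # The hidden variable "already knows" the settings: its distribution
    # depends on a and b, violating the independence assumption.
    p_anti = (1 + np.cos(a - b)) / 2       # probability that B = -A
    lam = rng.random(n) < p_anti
    A = rng.choice([-1, 1], size=n)
    B = np.where(lam, -A, A)
    return np.mean(A * B)                  # E[AB] = 1 - 2*p_anti = -cos(a-b)

print(correlated_run(0.0, np.pi / 3))      # ~ -0.5 = -cos(pi/3)
```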

What does ’t Hooft mean by an “ontological” wave function, and how does it change what counts as real?

’t Hooft distinguishes the wave function used for prediction in standard quantum mechanics from an ontological wave function that is treated as real. The ontological wave function assigns 100% probability to particular outcomes, so classical-like states are real (e.g., a dead cat is real and a live cat is real), while superpositions like dead-and-alive are not real in his framework. The usual ψ then becomes a bookkeeping device for probabilities when the observer lacks information about the real underlying state.
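
In vector form (a sketch using the summary's terminology, not ’t Hooft's notation): an ontological state is a basis state assigning probability 1 to one outcome, while a superposition spreads probability across outcomes:

```python
import numpy as np

dead       = np.array([1.0, 0.0])               # ontological: P(dead) = 1
alive      = np.array([0.0, 1.0])               # ontological: P(alive) = 1
dead_alive = np.array([1.0, 1.0]) / np.sqrt(2)  # superposition: P = 0.5 each

for name, psi in [("dead", dead), ("alive", alive), ("dead+alive", dead_alive)]:
    print(name, np.abs(psi) ** 2)   # only the first two count as "real" here
```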

How does the cellular automaton interpretation connect to locality and to quantum-computing limits?

The cellular automaton interpretation imagines the universe as discrete deterministic dynamics with nearest-neighbor interactions, designed to avoid faster-than-light influences. ’t Hooft portrays particles and space as emergent from extremely tiny automaton elements at roughly the Planck scale (~10^-33 cm). He claims the available ontological states in this model are fewer than standard quantum theory assumes, implying quantum computers would eventually underperform for tasks like factoring million-digit numbers. He suggests that if such factoring becomes feasible, the cellular automaton theory would be falsified.
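
Rough state counting hints at why the factoring claim has bite (my back-of-envelope numbers, not an argument from the video): standard quantum theory assigns an n-qubit register 2^n complex amplitudes, an exponential resource that a substrate with fewer ontological states would lack:

```python
# Amplitudes needed to describe n qubits in standard quantum theory,
# and the memory a classical simulation would need at complex128.
for n in (30, 50, 100):
    amplitudes = 2 ** n
    print(f"{n} qubits: 2^{n} = {amplitudes:.2e} amplitudes "
          f"(~{amplitudes * 16 / 1e9:.2e} GB to store classically)")
```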

What practical criticisms are raised about the proposal’s completeness and usability?

A key criticism is that the cellular automaton component may struggle with the symmetry requirements of Einstein’s theories when discretization is introduced, and that the ontological states are not specified in a way that outsiders can identify or compute. Without a clear method to determine what the ontological states are, the interpretation may be hard to apply beyond philosophical reinterpretation.

Review Questions

  1. What assumption in the standard Bell-theorem reasoning does ’t Hooft target, and how does superdeterminism undermine it?
  2. Explain the difference between the usual predictive wave function and ’t Hooft’s ontological wave function, including what each assigns to measurement outcomes.
  3. What specific claim does ’t Hooft make about quantum computers, and what experimental success would challenge his cellular automaton framework?

Key Points

  1. ’t Hooft’s alternative aims to restore determinism and locality by removing wave-function collapse as a fundamental mechanism.
  2. Standard quantum mechanics’ ψ is treated as a probability-calculation tool rather than the real state of the system.
  3. Bell inequality violations are accepted, but the “free choice”/independence assumption behind the usual Bell argument is challenged.
  4. Superdeterminism holds that measurement settings are correlated with system states through initial conditions, not via direct causal influence.
  5. The cellular automaton interpretation proposes a discrete, nearest-neighbor deterministic substrate at roughly the Planck scale from which particles and space emerge.
  6. ’t Hooft predicts limits for quantum computing, including the claim that factoring million-digit numbers would be impossible in his framework.
  7. Critics argue the cellular automaton discretization may conflict with Einsteinian symmetries and that ontological states are not made operationally clear.

Highlights

’t Hooft reframes Bell’s theorem as a dispute about assumptions—especially whether measurement settings can be treated as freely chosen and independent of hidden variables.
Superdeterminism replaces “free choice” with a deterministic correlation: the settings experimenters pick are already encoded in the universe’s initial conditions.
The ontological wave function is presented as real and outcome-determining (100% for specific results), while the usual ψ is reinterpreted as merely probabilistic prediction.
A concrete, testable-sounding implication is offered: quantum computers would eventually fail at tasks like factoring million-digit numbers if the cellular automaton picture is right.
