
These physicists think we're totally wrong about entropy

Sabine Hossenfelder · 5 min read

Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Boltzmann brains arise from the idea that, even at very high entropy, spontaneous fluctuations can temporarily form large, structured assemblies like brains.

Briefing

A new line of argument attacks the usual status of the second law of thermodynamics by tying it to the “Boltzmann brain” problem—an unsettling scenario in which random fluctuations at extremely late times could produce self-aware brains far more often than ordinary observers in a long-lived universe. The core claim is that, once the logic is traced carefully, the second law may not be a standalone law of nature at all, but a circular inference: entropy is expected to rise because the past is assumed to have had low entropy, and that low-entropy past is trusted largely because we have records consistent with it—records that themselves rely on the same entropy-increase assumption.

The Boltzmann brain problem starts with the second law’s implication that entropy in an isolated system tends to increase, driving the universe toward more mixed, featureless states. Yet high-entropy states are not perfectly smooth. They exhibit spontaneous fluctuations, meaning small, temporary clumps of matter can appear by chance. Over sufficiently long times, those fluctuations can become arbitrarily large: not just molecules or cells, but even entire brains. In that picture, the universe we experience could itself be one such fluctuation—an entropy “bubble” in which entropy is rising again. The most disturbing twist is probabilistic: smaller fluctuations are more common, so it becomes vastly more likely that observers are brief, self-contained “brains” arising from fluctuations rather than products of a structured cosmic history.

The new paper revisits whether this conclusion can be avoided using established physics. It scrutinizes assumptions Boltzmann originally relied on—especially the idealized picture in which particles behave like a random gas, doing little more than “bumping” into one another. Real physics, including the Standard Model, involves strong clustering: quarks don’t appear alone but in bound states such as protons and neutrons, and gravity further promotes clumping. Even after incorporating those effects, the authors conclude there is still no fully rigorous argument that rules out Boltzmann brains using only established physics.

From there, the argument turns to the second law itself. The usual justification runs in a loop: entropy increases, so past records are reliable; but the expectation that entropy increases is grounded in the assumption that entropy was lower in the past. The paper argues there is no clean logical break in that chain. It even proposes an alternative that is presented as equally legitimate: instead of assuming entropy was minimal at the big bang, assume it was minimal 1,000 years ago. Under that “1000 CE hypothesis,” entropy would have been higher before that point, implying time effectively ran in the opposite direction until the minimum, then flipped. If that kind of reversal is logically on par with the big-bang low-entropy assumption, then the second law’s usual “arrow of time” justification becomes suspect.

The takeaway is not that entropy demonstrably decreases, but that the foundations behind why it should—especially the low-entropy boundary condition—may be underdetermined by logic alone. The result is a challenge to how confidently physics can claim the universe’s observed time direction without smuggling in the very conclusion it seeks to justify.

Cornell Notes

The Boltzmann brain problem asks what happens if the universe lasts long enough under the second law: entropy tends to increase, but high-entropy states still fluctuate. Those fluctuations can occasionally form large, temporary structures—potentially even brains—so observers might be far more likely to be brief “brains in a void” than products of a structured cosmic history. A new paper argues that even using established physics (including Standard Model clustering and gravity) there is no fully rigorous proof that Boltzmann brains are impossible. It then presses a deeper point: the second law’s usual justification depends on assuming low entropy in the past, but that assumption is supported by records whose reliability is itself tied to entropy increasing. The paper claims there’s no logical escape from that circularity, even when alternative low-entropy boundary conditions are considered.

What exactly is the Boltzmann brain problem, and why do entropy fluctuations matter?

Entropy increase pushes an isolated system toward more mixed, featureless states, which would seem to make life impossible. But high-entropy states aren’t perfectly uniform: they contain spontaneous fluctuations. Those fluctuations can create temporary clumps of matter—small ones are common, larger ones are rarer, but given enough time, arbitrarily large fluctuations can occur. The “Boltzmann brain” is the extreme case: a sufficiently large fluctuation could assemble something brain-like (and then it would quickly disappear as the system returns to higher entropy).
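This relaxation-plus-fluctuation behavior can be seen in a toy model. The sketch below is not from the video or the paper; it uses the standard Ehrenfest urn model, where particles hop randomly between the two halves of a box and the Boltzmann entropy of the macrostate is tracked over time. Entropy climbs from an ordered start toward its maximum, but never settles exactly—it keeps fluctuating:

```python
import math
import random

def entropy(n_left, n_total):
    """Boltzmann entropy (in units of k_B) of the macrostate with
    n_left of n_total particles in the left half of the box."""
    return math.log(math.comb(n_total, n_left))

random.seed(0)
N = 100                  # particles in an idealized two-sided box
n_left = N               # low-entropy start: every particle on the left

history = []
for _ in range(20000):
    # Ehrenfest urn dynamics: pick a particle uniformly at random
    # and move it to the other side of the box.
    if random.random() < n_left / N:
        n_left -= 1
    else:
        n_left += 1
    history.append(entropy(n_left, N))

assert history[0] < history[-1]   # entropy rose from the ordered start
late = history[10000:]
assert min(late) < max(late)      # but it still fluctuates at equilibrium
```

Small dips below the maximum happen constantly in this model; a dip deep enough to recreate the all-left starting state would require waiting an astronomically long time—the same logic that, scaled up, produces the Boltzmann brain scenario.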

Why does the argument suggest it’s more likely to be a fluctuation-produced brain than a brain in a long-lived universe?

The scenario treats the universe as existing for an extremely long time, so fluctuations happen repeatedly. Smaller fluctuations occur far more frequently than larger ones, so the statistics overwhelmingly favor minimal configurations. Because a self-aware brain only needs to appear as a large enough fluctuation (not to be built through a long evolutionary history), the statistics can make brief, fluctuation-produced brains dominate over observers produced by ordinary cosmic development.
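The size–frequency trade-off can be made concrete with Boltzmann’s counting: a spontaneous fluctuation that lowers entropy by ΔS (in units of k_B) has relative probability suppressed by e^(−ΔS). The numbers below are made up purely for illustration, but they show how quickly deeper fluctuations become rarer:

```python
import math

def relative_prob(delta_s):
    """Relative probability of a fluctuation that lowers entropy
    by delta_s (in units of k_B), suppressed as exp(-delta_s)."""
    return math.exp(-delta_s)

# Illustrative (made-up) entropy dips for a small and a large fluctuation.
small_dip, large_dip = 10.0, 20.0
ratio = relative_prob(small_dip) / relative_prob(large_dip)
# The ratio is exp(10), roughly 2.2e4: the shallower dip is
# tens of thousands of times more likely than one twice as deep.
```

Real entropy differences between a lone brain and a full low-entropy universe are vastly larger than 10 k_B, which is why the suppression makes self-contained fluctuation brains dominate over observers with a long cosmic history.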

How does the new paper try to address Boltzmann’s original assumptions using established physics?

Boltzmann’s early reasoning relied on an idealized picture in which particles behave like a random gas, doing little more than “bumping” into one another. The new work argues that real particles don’t follow that simple behavior: Standard Model constituents cluster into bound states (quarks appear in groups such as protons and neutrons), and gravity promotes large-scale clumping. However, even after accounting for these effects, the authors conclude there is still no fully rigorous argument—based only on established physics—that dispels the Boltzmann brain hypothesis.

What is the alleged circularity behind the second law of thermodynamics?

The second law says entropy can’t decrease (it may stay constant, but typically rises). The usual reasoning is: entropy rises, so records of the past are trustworthy. But why trust the records? Because they align with the idea that entropy was lower in the past. That alignment is exactly what the second law is supposed to justify, creating a loop: low-entropy past → reliable past records → expectation of entropy increase → support for low-entropy past.

What is the “1000 CE hypothesis,” and why is it presented as logically comparable to the big-bang assumption?

Instead of assuming entropy was minimal at the big bang, the hypothesis assumes entropy was minimal 1,000 years ago. Under that boundary condition, entropy would have been higher before that time, implying time effectively ran in reverse until the minimum, then switched direction. The paper argues this kind of alternative is logically as sound as the standard low-entropy-at-the-big-bang assumption, which undermines the idea that the second law’s arrow of time is uniquely justified by logic.

Review Questions

  1. How does the presence of spontaneous fluctuations in high-entropy states lead to the possibility of Boltzmann brains?
  2. What role do Standard Model clustering and gravity play in attempts to rule out Boltzmann brains?
  3. Why does the paper claim the justification for the second law becomes circular when past records are used to support entropy’s increase?

Key Points

  1. Boltzmann brains arise from the idea that, even at very high entropy, spontaneous fluctuations can temporarily form large, structured assemblies like brains.

  2. The Boltzmann brain scenario can make fluctuation-produced observers more probable than observers formed through ordinary cosmic history.

  3. A new analysis argues that incorporating Standard Model bound-state behavior and gravity does not yet yield a fully rigorous proof that Boltzmann brains are impossible.

  4. The second law’s usual justification is framed as circular: entropy is expected to rise because the past is assumed low-entropy, but that assumption is supported by records whose reliability depends on the same entropy direction.

  5. Alternative low-entropy boundary conditions—such as minimal entropy 1,000 years ago—are argued to be logically comparable to the big-bang low-entropy assumption, weakening claims of a uniquely determined time arrow.

Highlights

Even with realistic particle physics and gravity, there’s still no fully rigorous established-physics argument that rules out Boltzmann brains.
The second law’s arrow of time is challenged as a circular inference built on trusting records of a low-entropy past.
A proposed alternative boundary condition (“1000 CE hypothesis”) suggests time-direction reversal could be logically on par with the standard big-bang story.
