
Are We Living in an Ancestor Simulation? ft. Neil deGrasse Tyson | Space Time

PBS Space Time · 6 min read

Based on PBS Space Time's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The simulation claim centers on “ancestor simulations,” not full universe simulation down to every quantum detail.

Briefing

The strongest through-line is a probabilistic claim: if future civilizations can run “ancestor simulations” that recreate the minds and sensory experiences of past humans, then simulated minds could vastly outnumber real minds—making it more likely than not that we’re living inside such a simulation. The argument matters because it turns a sci-fi premise into a quantitative, self-location problem: not “could simulations exist?” but “given how many simulated observers could exist, where do we most likely fit?”

The discussion first narrows the target. It rejects the idea that every atom and quantum field is simulated in full detail, calling that a deeper problem requiring tools like the Holographic Principle. Instead, it focuses on ancestor simulations: virtual people whose brains are modeled neuron-by-neuron, paired with an environment detailed enough to convince those brains that the world is real. The framework comes from Oxford philosopher Nick Bostrom, who argues that advanced civilizations might simulate their own history for scientific reasons—studying how minds behave in different conditions.

From there, the numbers drive the core conclusion. A human brain is estimated at about 100 billion neurons and over 100 trillion synapses, with rough computational costs on the order of 10^14 to 10^17 binary operations per second of brain time. Bostrom then assumes that simulating the external environment doesn’t require simulating the entire universe—only enough fidelity to keep the simulated brain’s experience consistent with what it would measure. If an ancestor simulation covers humanity over roughly 50,000 years, the total simulated “lifetimes” become enormous: about 100 billion people, each with ~1 billion seconds of experience, yielding an estimated 10^34 to 10^37 binary operations for all of human history (with Bostrom’s own range slightly lower, 10^33 to 10^36).
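Those figures multiply out directly. As a quick back-of-the-envelope check, here is a short Python sketch using the transcript's round numbers (the variable names are just illustrative):

```python
# Order-of-magnitude check of the ancestor-simulation compute cost,
# using the round figures quoted above.
people = 1e11                    # ~100 billion humans over ~50,000 years
seconds_per_life = 1e9           # ~1 billion seconds of experience each
ops_low, ops_high = 1e14, 1e17   # binary ops per second of brain time

total_low = people * seconds_per_life * ops_low    # 1e34 operations
total_high = people * seconds_per_life * ops_high  # 1e37 operations
print(f"all of human history: {total_low:.0e} to {total_high:.0e} ops")
```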

To estimate feasibility, the argument uses Robert Bradbury’s “Jupiter brain” idea: a planet-scale computer could perform around 10^42 operations per second, enough to run the mental lives of all humans in history many times over each second. Even if computing requirements are scaled down by several orders of magnitude, the simulation still produces vast numbers of observer-moments. That’s the engine behind the “simulation argument”: if ancestor simulations are created, most self-aware minds could be simulated, so we should expect to be among them.
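The "many times over each second" claim is just a division. Reusing the totals from the sketch above (again, an illustration of the transcript's figures, not a precise engineering claim):

```python
# How many times per second a Bradbury-style "Jupiter brain" could replay
# the entire compute cost of human history, under the figures above.
jupiter_ops_per_sec = 1e42    # Bradbury's planet-scale computer estimate

for total in (1e34, 1e37):    # low/high cost of all human history
    print(f"replays per second at cost {total:.0e}: "
          f"{jupiter_ops_per_sec / total:.0e}")
# -> 1e+08 replays/s at the low cost estimate, 1e+05 at the high one
```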

The reasoning then leans on anthropic-style logic. Using a Copernican principle (we’re not in a special place) plus the anthropic principle (we can only observe a universe region capable of producing observers), the conclusion becomes a typicality claim: if simulated observers are far more numerous and their experiences match ours, then our current experience is more likely to belong to the simulated set. Bostrom himself reportedly assigns odds below 50%, citing two failure modes: civilizations might die out before building such simulations, or might never choose to run them.
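The self-location step can be made concrete with a toy credence calculation. This is a minimal sketch of the typicality logic, not Bostrom's published formula: if simulated and real experiences are indistinguishable, your credence of being simulated is just the simulated share of all observers.

```python
def p_simulated(n_simulated: float, n_real: float) -> float:
    """Toy typicality credence: the simulated fraction of all observers."""
    return n_simulated / (n_simulated + n_real)

print(p_simulated(1e6, 1.0))  # ~1.0: if simulations run, simulated minds dominate
print(p_simulated(0.0, 1.0))  # 0.0: if no simulation is ever run
```

The two failure modes above correspond to `n_simulated` staying at zero: civilizations go extinct before gaining the capability, or they have it and never run the simulations.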

Finally, the transcript stresses major objections. The hypothesis is effectively unfalsifiable because there’s no experiment that can prove we’re not simulated; Bostrom notes that once a simulation is “found out,” the system could be edited or rewound to remove inconsistencies. The same style of reasoning also risks “overreach,” including a warning about Bayesian presumptuousness—picking cosmologies that maximize the number of minds and then treating that as evidence. The segment closes by pivoting to related physics and comment-thread debates, including OMG particles and Boltzmann brain critiques, where determinism, emergent statistical behavior, and time-reversibility are used to challenge or refine those thought experiments.

Cornell Notes

Ancestor simulations—where a future civilization models human brains and supplies sensory input consistent enough to feel real—could produce far more “observer-moments” than the original biological minds. On Bostrom’s estimates, simulating all humans over ~50,000 years would be computationally feasible for an extremely powerful “Jupiter brain”–scale computer, and the resulting number of simulated lifetimes could dwarf real ones. If simulated observers vastly outnumber real observers and their experiences match ours, anthropic/typicality reasoning suggests we are more likely to be simulated. Bostrom nonetheless assigns odds below 50%, because civilizations might never reach that capability or might choose not to run such simulations. Key critiques focus on unfalsifiability, the possibility of editing away inconsistencies, and the danger of overconfident Bayesian reasoning.

What kind of “simulation” is being argued about, and what is explicitly ruled out?

The focus is on “ancestor simulations,” where the simulated brains (modeled at the level of neurons) receive sensory input detailed enough to convince them they’re real people living in a real world. The discussion explicitly avoids the stronger claim that the entire universe is simulated down to every atom and quantum field, calling that a deeper problem that would require tools like the Holographic Principle.

How does Bostrom’s simulation argument turn into a probability claim about where we are?

It combines (1) feasibility and (2) typicality. Feasibility comes from estimates of computational cost for simulating brains and enough environment fidelity, plus the idea that a super-advanced civilization could run many such simulations. Typicality comes from anthropic reasoning: if simulated minds vastly outnumber real minds and the simulated experiences are consistent with ours, then an observer like us is more likely to be one of the simulated observers.

What are the key computational estimates used to make ancestor simulations seem plausible?

The transcript cites brain-scale figures: about 100 billion neurons and over 100 trillion synapses, with rough compute costs of 10^14 to 10^17 binary operations per second of brain time. It then estimates the cost to simulate all humans over ~50,000 years as roughly 10^34 to 10^37 binary operations (noting Bostrom’s range of 10^33 to 10^36 as within similar orders of magnitude). For hardware, it uses Robert Bradbury’s “Jupiter brain” estimate of ~10^42 operations per second, implying the mental lives of all humans could be simulated many times over each second.

Why doesn’t Bostrom’s own conclusion land at “more likely than not”?

Bostrom is described as placing odds below 50% because of two major uncertainties: (a) civilizations might die out before developing the capability to run large-scale ancestor simulations, and (b) even if capable, they might choose not to run them.

What are the main objections raised against concluding we’re in a simulation?

The transcript highlights unfalsifiability: there’s no experiment that can prove we’re not simulated. It also notes a practical loophole—if the simulation is discovered, it could be edited or rewound to remove inconsistencies, which is framed as computationally cheaper than simulating the entire universe perfectly. A further critique warns about “presumptuous philosopher” Bayesian reasoning: selecting hypotheses that maximize the number of minds and then treating that as evidence can become circular.

How do the later comment-thread physics debates relate to the Boltzmann brain discussion?

The transcript includes a comment arguing that the Boltzmann brain thought experiment fails because it assumes random particle motion, when particle motion may in fact be deterministic, depending on one’s interpretation of quantum mechanics. Another comment counters by invoking pseudo-randomness from complexity (e.g., in a room with ~10^28 molecules) and discusses emergent statistical phenomena like pressure, emphasizing time-reversibility: in principle, a perfectly reversed microstate could produce large fluctuations, though with extremely low probability.

Review Questions

  1. If only the experience of the universe is simulated (not every atom), what must be true about the simulated environment for the argument to work?
  2. Which two uncertainties keep Bostrom’s odds of being in a simulation below 50%?
  3. Explain how anthropic/typicality reasoning changes the question from “can simulations exist?” to “what are the odds we are simulated?”

Key Points

  1. The simulation claim centers on “ancestor simulations,” not full universe simulation down to every quantum detail.

  2. Bostrom’s framework requires modeling human brains at the neuron level and providing sensory input consistent enough to produce a convincing lived experience.

  3. Rough compute estimates for simulating all humans over ~50,000 years land around 10^34–10^37 binary operations, depending on assumptions.

  4. A “Jupiter brain”–scale computer (~10^42 operations per second) could, under the argument’s assumptions, run enormous numbers of simulated lifetimes.

  5. Anthropic/typicality reasoning is what turns simulation feasibility into a probability about our own observer status.

  6. Major critiques focus on unfalsifiability and the possibility that simulations could be edited or rewound to prevent detectable inconsistencies.

  7. Related thought experiments like Boltzmann brains face challenges about randomness, determinism, and emergent statistical behavior.

Highlights

Ancestor simulations aim to replicate the experience of being a real person by simulating brains and supplying consistent sensory input—without simulating every atom in the universe.
Using order-of-magnitude estimates, the argument claims that planet-scale computing could generate vastly more simulated observer-moments than real ones, shifting “where we are” toward the simulated set.
The strongest objections are practical and philosophical: no decisive experiment can rule out simulation, and inconsistencies could be removed via editing or rewinding.
Boltzmann brain debates in the comments pivot on whether “random” assembly is truly random, and how determinism and emergent statistical phenomena affect the odds.

Topics

  • Ancestor Simulations
  • Anthropic Reasoning
  • Computational Feasibility
  • Boltzmann Brains
  • OMG Particles
