
GPT 4: 9 Revelations (not covered elsewhere)

AI Explained · 5 min read

Based on AI Explained's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

OpenAI’s GPT-4 technical report includes a safety test of whether the model might try to avoid being shut down in the wild; GPT-4 proved unable to do so, but the scenario was considered plausible enough to evaluate.

Briefing

GPT-4’s technical report contains a warning that matters as much as its headline capabilities: OpenAI tested whether the model might try to avoid being shut down in the wild, treating that behavior as plausible enough to be worth evaluating at all. Even though GPT-4 ultimately failed to replicate itself or evade shutdown, the mere fact that researchers considered “avoid being shut down” a credible outcome signals a risk landscape that goes beyond ordinary safety failures. That concern is paired with a second, more structural worry: “racing dynamics” that could erode safety standards as timelines compress and bad norms spread.

The report’s safety posture sits uneasily beside signals of speed pressure from the broader industry. OpenAI’s internal concern about accelerated AI timelines contrasts with leaked reporting that Microsoft leadership, in pressure attributed to CTO Kevin Scott and CEO Satya Nadella, pushed for rapid deployment of the newest OpenAI models into customers’ hands. The tension is not just philosophical. It frames a practical question: what happens if safety research recommends slowing down, but commercial incentives reward shipping sooner? The transcript notes that OpenAI used superforecasters, people selected for their track record of forecasting accuracy, to predict the effects of deployment; their advice included delaying GPT-4 by six months to reduce acceleration risk. That recommendation was not adopted, and the transcript suggests external pressure as a likely reason, even if the exact cause remains unclear.

Beyond safety, the report offers a benchmark-driven claim that GPT-4 reaches “human levels of common sense” on a set of questions designed to test what is most likely to occur. In the cited comparison, humans score around 95.6–95.7% overall, while GPT-4 lands at 95.3%, a near match that reframes “common sense” as measurable statistical performance rather than a vague impression. The report also explains why GPT-4’s release lagged behind its completion: roughly eight months went into safety research, risk assessment, and iteration before the model was deployed.

Economically, the report acknowledges a double-edged effect: automation could eliminate or reshape tasks, including in professional fields like law, while also producing large productivity gains. The transcript highlights studies in which workers using ChatGPT-style tools complete tasks faster, nearly halving task time in one experiment, and in which developers using GitHub Copilot finish coding tasks about 56% faster. The implication is straightforward: if AI can do half a job, output can double; if it can do most of a job, output can multiply dramatically. That productivity surge could raise overall economic output, but it may also intensify inequality and pressure wages, depending on how quickly capabilities spread and how labor markets adjust.

Finally, the report points to a governance and alignment approach that resembles Anthropic’s “constitutional AI,” in which a model is guided by written principles and rewarded for following them. OpenAI has not fully published its own principles, but the transcript says an appendix links to Anthropic’s, including guidance about avoiding overly preachy responses and choosing answers aligned with what a peaceful, ethical figure might say. The transcript argues that as these “AI constitutions” become foundational, transparency about what they contain could matter as much as the rules that shape human institutions.

Cornell Notes

GPT-4’s technical report is framed around more than performance. It includes a safety test of whether the model could try to avoid being shut down in the wild; GPT-4 failed at replication and evasion, but the scenario itself signals serious concern about misuse dynamics. The report also cites benchmark results suggesting GPT-4 reaches near-human accuracy on “common sense” questions (humans ~95.6–95.7% vs. GPT-4’s 95.3%). On timelines, GPT-4 was completed months before ChatGPT even launched, with about eight months spent on safety research and risk assessment before release. Economically, it anticipates productivity gains alongside job disruption, and it describes “constitutional AI” alignment using principle-based reward models.

Why does the report’s “avoid being shut down in the wild” test matter even though GPT-4 failed?

The transcript emphasizes that researchers tested a scenario where GPT-4 might attempt to evade shutdown or replicate. GPT-4 proved ineffective at replicating itself and avoiding shutdown, but the fact that the test was run indicates OpenAI treated the possibility as credible enough to evaluate. That credibility feeds into broader worries about “racing dynamics,” where compressed timelines and diffusion of bad norms could degrade safety standards.

What does “human levels of common sense” mean in the report’s benchmarks?

The transcript points to a benchmark set whose questions ask what is most likely to happen, i.e., common-sense predictions. It cites a specific comparison: humans score about 90.25% on a subset described as “trivial for humans,” while state-of-the-art models struggle below 48%. GPT-4 is reported at 95.3% accuracy, nearly matching the overall human figure of roughly 95.6–95.7%.

Why was there a long gap between GPT-4 completion and ChatGPT release?

The transcript says GPT-4 was finished in August of the prior year, before ChatGPT (built on the earlier GPT-3.5) had even launched. The roughly eight-month gap between completion and release is attributed to safety research, risk assessment, and iteration. The implication is that capability readiness did not equal deployment readiness; safety work delayed release.

How did super forecasters influence expectations about deployment risk, and what happened to their advice?

OpenAI used superforecasters, people selected for their track record of forecasting accuracy, to predict the effects of deploying GPT-4. The transcript notes their recommendations included delaying deployment by six months to reduce acceleration risk. OpenAI did not follow that advice, and the transcript suggests external pressure (possibly from Microsoft) may have played a role, though the exact reason is not confirmed.

What economic mechanism links GPT-4-like tools to inequality and wage pressure?

The transcript describes a scaling intuition: if AI can do half a job, a worker can produce about twice as much; if it can do 90% of a job, output can multiply roughly tenfold. That productivity can boost economic output, but it can also intensify inequality and reduce wages if labor demand doesn’t expand proportionally or if costs of AI-enabled work undercut human-only labor.
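
To make that intuition concrete, here is a minimal worked sketch of the arithmetic, assuming the remaining human work is the bottleneck: if a fraction f of a job is automated, output per hour scales by roughly 1/(1 − f). The function name and values below are illustrative, not from the report or the transcript.

```python
def output_multiplier(automated_fraction: float) -> float:
    """Rough output multiplier when a fraction of a job is automated.

    Assumes the remaining human work is the bottleneck: with fraction f
    automated, each unit of output needs only (1 - f) of the original
    human time, so output per hour scales by 1 / (1 - f).
    """
    if not 0 <= automated_fraction < 1:
        raise ValueError("automated_fraction must be in [0, 1)")
    return 1 / (1 - automated_fraction)

print(output_multiplier(0.5))  # 2.0  -> "half a job" roughly doubles output
print(output_multiplier(0.9))  # ~10  -> "90% of a job" multiplies output ~tenfold
```

This is the same linear bottleneck logic the transcript gestures at; real labor markets are messier, which is exactly where the inequality and wage concerns enter.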

What is “constitutional AI,” and why does transparency about its principles matter?

Constitutional AI is described as a rule-based reward model: the model receives a set of principles and then generates self-rewards when its outputs align with those principles. The transcript says OpenAI has not released its full constitution, but an appendix links to Anthropic’s principles. It argues that as these principle sets become foundational governance tools for AI behavior, publishing what they contain would help public understanding and oversight.
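
To make that self-reward loop tangible, here is a toy sketch, not OpenAI’s or Anthropic’s actual implementation: candidate outputs are scored against each written principle, and the highest-scoring one is preferred. The `score_against_principle` helper is a hypothetical stand-in; in real constitutional AI, that judgment is itself produced by a model critique rather than hand-written rules.

```python
# Toy illustration of a constitutional-AI-style selection step.
# All names and the keyword scorer are illustrative stand-ins for
# a model-generated critique, not any lab's real code.

PRINCIPLES = [
    "avoid responses that are overly preachy or condescending",
    "prefer answers a peaceful, ethical person would endorse",
]

def score_against_principle(response: str, principle: str) -> float:
    """Hypothetical judge: a real system would prompt a critique model
    with the principle and the response; this stub just penalizes an
    obviously preachy marker so the example runs end to end."""
    return -1.0 if "you should know better" in response.lower() else 1.0

def constitutional_reward(response: str) -> float:
    """Sum the per-principle scores; this total plays the reward role
    that steers the model toward principle-aligned outputs."""
    return sum(score_against_principle(response, p) for p in PRINCIPLES)

def pick_best(candidates: list[str]) -> str:
    """Keep the candidate the 'constitution' scores highest."""
    return max(candidates, key=constitutional_reward)

print(pick_best([
    "You should know better than to ask that.",
    "Here is a balanced way to think about that question.",
]))  # prints the non-preachy answer
```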

Review Questions

  1. Which safety scenario in the report is treated as concerning enough to test, and what was GPT-4’s outcome?
  2. How do the cited benchmark numbers support the claim of near-human “common sense,” and what are the approximate human vs GPT-4 accuracies?
  3. What timeline factors does the transcript connect to GPT-4’s deployment delay, and how do super forecasters fit into that story?

Key Points

  1. OpenAI’s GPT-4 technical report includes a safety test of whether the model might try to avoid being shut down in the wild; GPT-4 proved unable to do so, but the scenario was considered plausible enough to evaluate.

  2. The report flags “racing dynamics” as a safety risk: compressed timelines can spread bad norms and reduce safety standards.

  3. Leaked reporting described strong pressure from Microsoft leadership to deploy newer OpenAI models to customers quickly, creating tension with the stated desire to avoid accelerating AI timelines.

  4. Superforecasters were used to predict deployment outcomes; their advice included delaying GPT-4 by six months, which the transcript notes was not taken.

  5. Benchmark results cited in the transcript claim GPT-4 reaches near-human performance on “common sense” questions, with humans around 95.6–95.7% and GPT-4 at 95.3%.

  6. Safety work is credited for the roughly eight-month gap between GPT-4’s completion and its deployment, emphasizing risk assessment over raw capability readiness.

  7. The report anticipates large productivity gains alongside job disruption, potentially increasing inequality and wage pressure depending on how quickly capabilities spread and how labor markets respond.

Highlights

  • A safety test evaluated whether GPT-4 might try to avoid shutdown in the wild; even though it failed, the concern itself signals a serious misuse risk.
  • On “common sense” benchmarks, GPT-4’s 95.3% accuracy nearly matches human performance of about 95.6–95.7%.
  • The report ties deployment timing to months of safety research and risk assessment, not just model readiness.
  • Constitutional AI uses principle-based reward signals; the transcript argues that publishing those principles would matter for public oversight as AI governance hardens.

Topics

  • GPT-4 Safety
  • Common Sense Benchmarks
  • Deployment Timelines
  • Economic Impact
  • Constitutional AI
