
Why People Are So Confident When They're Wrong

Veritasium · 6 min read

Based on Veritasium's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Confidence and accuracy often diverge; high stated certainty can correspond to much lower correctness, indicating poor calibration.

Briefing

Overconfidence isn’t just a personality flaw—it’s a predictable mismatch between how certain people feel and how often they’re actually right, and it becomes dangerous when feedback is noisy or delayed. Experiments on calibration repeatedly find that high confidence tracks poorly with accuracy: when people claim around 90% certainty, they’re correct only about 75% of the time, and in a Veritasium community replication, respondents who said they were 91–100% sure answered correctly only 51% of the time. The gap persists across domains, from general knowledge to motor skills, and even shows up among experts and professional forecasters, who tend to be too sure about outcomes like inflation.

That calibration problem turns into a catastrophe when people can’t reliably “check” whether they’re wrong. The transcript illustrates this with Nick Leeson and Barings Bank, where a junior trader’s costly futures mistake was concealed in an obscure error account (88888). Leeson’s confidence grew as the concealment went unnoticed, and he doubled down by betting that the Japanese market would rebound—specifically by going long on the Nikkei 225. The market kept falling, but the strategy relied on the idea that a future win would erase past losses. Instead, losses ballooned from roughly $40,000 to around $3 million, then to tens of millions as Leeson kept shifting losses between accounts to preserve the appearance of recovery.

The pattern worsened because the environment provided misleading feedback. In a noisy market, bad decisions can be temporarily masked by later wins, making it hard to tell whether confidence is earned. Leeson’s apparent success also fed institutional overconfidence: Barings management kept granting requests for more capital, even as the account’s losses grew to hundreds of millions and the bank’s capital base was stretched beyond what should have been feasible. When the Great Hanshin Earthquake hit Japan in January 1995, the Nikkei plunged and Leeson’s riskier bets failed, culminating in losses estimated at about $2.8 billion, a run, and Barings’ collapse.

The transcript then connects these real-world failures to cognitive and psychological mechanisms. Overconfidence can stem from motivated self-image (wanting to feel informed) or from "stupidity" in the sense of not knowing enough to evaluate one's own performance. The classic Dunning–Kruger curve is discussed as a meme that gets simplified online, while the underlying research shows that people who perform worse tend to show the largest confidence-performance mismatch. Another driver is cognitive load: judging one's own accuracy is mentally taxing, and studies linking short-term memory capacity to confidence suggest that when working memory is strained, confidence judgments become less accurate and the ranges people give for their estimates become too narrow.

Finally, the transcript argues that overconfidence can be socially rewarded. Confident advice activates reward-related brain activity, and audiences—whether in interviews, politics, or markets—often place more faith in certainty even when it’s unwarranted. In complex systems with delayed, incomplete, or contradictory information, that incentive structure can amplify error.

As for fixes, the transcript points to practical calibration habits: track predictions and outcomes (“keep track and keep score”), seek feedback, and listen to people who disagree. The most reliable path to accuracy isn’t maximal certainty, but knowing the limits of one’s knowledge—because in environments where feedback is unreliable, confidence without calibration can become a costly gamble.

Cornell Notes

The core finding is that people’s confidence often runs ahead of their accuracy, creating a calibration gap that can be amplified by noisy feedback. Experiments show that when people report high certainty (e.g., around 90%), they are correct substantially less often (about 75% in one study), and a Veritasium community test found extreme overconfidence among those claiming 91–100% certainty. Overconfidence appears across knowledge and skills, including among experts and professional forecasters, and it can be worsened by cognitive limits and information overload. Real-world consequences follow when decision-makers can’t reliably verify whether they’re wrong—such as in financial markets or political forecasting—making confidence a dangerous substitute for evidence.

What does “calibration” mean, and why does it matter for decision-making?

Calibration is the match between stated confidence and actual correctness. If someone is perfectly calibrated and says they’re 80% sure, they should be right about 80% of the time. The transcript highlights repeated evidence that most people are not well calibrated: one study found that 90% confidence corresponds to only ~75% accuracy, and a Veritasium community replication found that people claiming 91–100% certainty were correct only 51% of the time. In high-stakes settings, this mismatch turns “confidence” into a misleading proxy for truth.
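To make the idea concrete, here is a minimal sketch of a calibration check in Python (the data are invented for illustration, not taken from the transcript): group answers by stated confidence, then compare each group's stated confidence with its actual hit rate.

```python
from collections import defaultdict

# Hypothetical quiz results: (stated confidence, answered correctly).
# These numbers are invented for illustration only.
responses = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.9, False), (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

# Group answers by stated confidence, then compare that stated
# confidence with the observed hit rate inside each group.
buckets = defaultdict(list)
for confidence, correct in responses:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {accuracy:.0%} "
          f"over {len(outcomes)} answers")
```

If the stated and actual percentages track each other, the respondent is well calibrated; a row like "stated 90% -> actual 75%" reproduces the kind of gap described above.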

How did Nick Leeson’s overconfidence interact with Barings Bank’s institutional decisions?

Leeson hid losses in an obscure error account (88888), and the lack of immediate detection boosted his confidence that he could recover. He then doubled down by going long on the Nikkei 225, an index of 225 leading Japanese companies, betting on a rebound after the market had already fallen from about 38,000 to 16,000. Temporary wins and the market's noise made it hard to judge whether his decisions were improving or just coincidentally profitable. Barings management reinforced the illusion by repeatedly approving large requests for additional capital and dismissing doubts from other traders, until losses reached hundreds of millions and the bank collapsed after the Great Hanshin Earthquake.

Why isn’t overconfidence always explained by arrogance or wanting to feel good?

The transcript offers multiple mechanisms. One is motivated egocentrism: people like to believe they know and may pretend certainty when they don’t. Another is cognitive limitation: assessing accuracy is mentally taxing, and studies link short-term memory capacity to overconfidence—people with worse memory were more often wrong and more often overconfident, and increasing memory load made confidence estimates less accurate. A third is information processing shortcuts, where brains substitute easier related questions for harder ones, producing systematic errors.

What’s the relationship between the Dunning–Kruger effect and the “Mount Stupid Curve” meme?

The transcript distinguishes the meme from the original research. The "Mount Stupid Curve" is described as a simplified graphic that became conflated with the Dunning–Kruger effect. In the original studies, Dunning and Kruger tested tasks like grammar, logic, and humor, then asked participants to estimate how well they had performed. Those who did worst showed the largest confidence-performance mismatch (overconfidence), while the best performers were slightly underconfident. This supports the idea that overconfidence tends to be greatest among those who know the least, though the transcript also notes that confidence estimates may be constrained by how people express them, often clustering around middle-of-the-road values.
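To see the shape of that quartile analysis, here is a minimal Python sketch with invented numbers that loosely echo the reported pattern (an illustration, not the original data): sort participants by actual score, split them into quartiles, and compare each quartile's average self-estimate with its average actual percentile.

```python
# Hypothetical (actual percentile, self-estimated percentile) pairs,
# loosely echoing the pattern Kruger and Dunning reported; the
# numbers are invented, not the original data.
participants = [
    (10, 58), (18, 60),   # worst performers: large overestimates
    (32, 55), (45, 57),
    (58, 62), (70, 66),
    (84, 76), (95, 82),   # best performers: slight underestimates
]

participants.sort()                 # order by actual performance
n = len(participants) // 4          # participants per quartile
for q in range(4):
    group = participants[q * n:(q + 1) * n]
    actual = sum(a for a, _ in group) / n
    guessed = sum(g for _, g in group) / n
    print(f"quartile {q + 1}: actual {actual:.0f}th pct, "
          f"self-estimate {guessed:.0f}th pct, gap {guessed - actual:+.0f}")
```

The printed gaps shrink from strongly positive in the bottom quartile to slightly negative at the top, mirroring the mismatch pattern described above.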

How can noisy environments amplify overconfidence?

In controlled settings with reliable feedback—like chess—people can learn from clear outcomes and calibrate better. In noisy environments, consequences are inconsistent or delayed, so it’s harder to tell whether a decision was truly good. For Leeson, some bad trades were followed by wins that temporarily masked errors, clouding judgment. For political forecasting, the transcript notes that disinformation and uncertainty can make it difficult to discern key signals, reducing the reliability of feedback and increasing the risk of confident misprediction.

What practical steps are suggested to reduce overconfidence?

The transcript emphasizes calibration through tracking and feedback: keep a record of predictions and outcomes (“keep track and keep score”), and avoid promising certainty without estimating probabilities. It also recommends intellectual humility and using feedback rather than just information. Finally, it suggests leveraging the wisdom of crowds by listening to people who disagree, especially those who can surface information or arguments the confident person lacks.
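As one concrete way to "keep track and keep score" (the transcript does not prescribe a specific metric; the Brier score used here is a standard scoring rule for probabilistic forecasts): log each prediction as a probability, record the outcome, and average the squared errors.

```python
def brier_score(predictions):
    """Average squared gap between forecast probability and outcome
    (1 if the event happened, 0 if not). Lower is better: 0.0 is
    perfect, and always answering 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical prediction log: (stated probability, what happened).
log = [
    (0.90, 1),   # "90% sure" and right
    (0.90, 0),   # "90% sure" and wrong
    (0.60, 1),
    (0.30, 0),
]

print(f"Brier score: {brier_score(log):.3f}")
```

Reviewed over time, and broken out by confidence level, such a score turns a vague sense of being "usually right" into checkable feedback.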

Review Questions

  1. What evidence in the transcript supports the claim that confidence often overstates accuracy, and how large is the typical mismatch?
  2. Explain how noisy feedback can turn temporary success into long-term overconfidence, using the Nikkei 225 and account-88888 story as an example.
  3. Which factors—cognitive load, information shortcuts, social incentives, or motivated self-image—does the transcript treat as contributors to overconfidence, and how do they differ?

Key Points

  1. Confidence and accuracy often diverge; high stated certainty can correspond to much lower correctness, indicating poor calibration.
  2. Overconfidence becomes especially dangerous when feedback is delayed, incomplete, or noisy, making it hard to detect mistakes.
  3. Leeson’s doubling-down strategy relied on the expectation that future wins would erase past losses, but market noise and misleading interim results amplified error.
  4. Institutional overconfidence can mirror individual overconfidence: Barings management repeatedly funded Leeson despite doubts, reinforcing the illusion of competence.
  5. Cognitive limits matter: judging one’s own accuracy is mentally taxing, and higher memory load can worsen confidence accuracy.
  6. Overconfidence can be socially rewarded because confident messaging activates reward-related brain responses and earns trust even when certainty is unwarranted.
  7. Reducing overconfidence requires calibration practices: track predictions, seek feedback, and actively listen to credible disagreement.

Highlights

  • People who claimed 91–100% certainty on science questions were correct only 51% of the time in a Veritasium community test, underscoring a large calibration gap.
  • Leeson’s hidden losses in account 88888 and his belief in “getting it back” show how overconfidence plus delayed detection can snowball into catastrophic risk.
  • The transcript argues that noisy environments make feedback unreliable, so temporary wins can mask bad decisions and inflate confidence.
  • The “Mount Stupid Curve” is described as a meme that doesn’t perfectly match the original Dunning–Kruger research, which found the biggest confidence-performance mismatch among the worst performers.
  • A key takeaway is practical: calibration improves when people keep score, use feedback, and listen to those who disagree.

Topics

Mentioned

  • Nick Leeson
  • Derek
  • Franz Reichelt
  • Allan Lichtman
  • Daniel Kahneman
  • Hansson
  • Juslin
  • Windmann
  • Conte
  • O-rings
  • NASA
  • fMRI
  • TNT