Why People Are So Confident When They're Wrong
Based on Veritasium's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Overconfidence isn’t just a personality flaw—it’s a predictable mismatch between how certain people feel and how often they’re actually right, and it becomes dangerous when feedback is noisy or delayed. Experiments on calibration repeatedly find that high confidence tracks poorly with accuracy: when people claim around 90% certainty, they’re correct only about 75% of the time, and in a Veritasium community replication, respondents who said they were 91–100% sure answered correctly only 51% of the time. The gap persists across domains, from general knowledge to motor skills, and even shows up among experts and professional forecasters, who tend to be too sure about outcomes like inflation.
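The calibration gap described above can be made concrete with a toy computation: given a log of (stated confidence, was-correct) pairs, group answers by confidence decile and compare each bin's average stated confidence with its empirical hit rate. This is a minimal sketch; the sample data below is invented to mirror the ~90%-stated / ~75%-correct mismatch and is not the study's actual data.

```python
# Toy illustration of a calibration gap: stated confidence vs. empirical accuracy.
# The sample records are made up for demonstration; they are not study data.

def calibration_by_decile(records):
    """Group (confidence, correct) pairs by confidence decile (0-9),
    folding confidence 1.0 into the top decile; return a dict of
    decile -> (mean stated confidence, fraction correct, count)."""
    bins = {}
    for conf, correct in records:
        decile = min(int(conf * 10), 9)
        bins.setdefault(decile, []).append((conf, correct))
    return {
        d: (
            sum(c for c, _ in xs) / len(xs),   # mean stated confidence
            sum(ok for _, ok in xs) / len(xs), # fraction actually correct
            len(xs),
        )
        for d, xs in sorted(bins.items())
    }

# Eight answers claimed at ~90% confidence, but only six are correct (75%).
records = [(0.9, 1)] * 6 + [(0.9, 0)] * 2
mean_conf, hit_rate, n = calibration_by_decile(records)[9]
print(f"stated ~{mean_conf:.0%}, correct {hit_rate:.0%} (n={n})")
# → stated ~90%, correct 75% (n=8)
```

Well-calibrated answers would put each bin's hit rate close to its mean stated confidence; the gap between the two columns is the overconfidence the experiments measure.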
That calibration problem turns into a catastrophe when people can’t reliably “check” whether they’re wrong. The transcript illustrates this with Nick Leeson and Barings Bank, where a junior trader’s costly futures mistake was concealed in an obscure error account (88888). Leeson’s confidence grew as the concealment went unnoticed, and he doubled down by betting that the Japanese market would rebound—specifically by going long on the Nikkei 225. The market kept falling, but the strategy relied on the idea that a future win would erase past losses. Instead, losses ballooned from roughly $40,000 to around $3 million, then to tens of millions as Leeson kept shifting losses between accounts to preserve the appearance of recovery.
The pattern worsened because the environment provided misleading feedback. In a noisy market, bad decisions can be temporarily masked by later wins, making it hard to tell whether confidence is earned. Leeson’s apparent success also fed institutional overconfidence: Barings management kept granting requests for more capital, even as the account’s losses grew to hundreds of millions and the bank’s capital base was stretched beyond what should have been feasible. When the Great Hanshin Earthquake hit Japan in January 1995, the Nikkei plunged and Leeson’s riskier bets failed, culminating in losses estimated at about $2.8 billion, a run, and Barings’ collapse.
The transcript then connects these real-world failures to cognitive and psychological mechanisms. Overconfidence can stem from motivated self-image—wanting to feel informed—or from “stupidity” in the sense of not knowing enough to evaluate one’s own performance. The classic Dunning–Kruger curve is discussed as a meme that gets simplified online, while the underlying research shows that people who perform worse tend to show the largest confidence-performance mismatch. Another driver is cognitive load: judging accuracy is mentally taxing, and studies linking short-term memory capacity to confidence suggest that when working memory is strained, confidence judgments become less accurate and people’s estimated ranges (their stated confidence intervals) become too narrow.
Finally, the transcript argues that overconfidence can be socially rewarded. Confident advice activates reward-related brain activity, and audiences—whether in interviews, politics, or markets—often place more faith in certainty even when it’s unwarranted. In complex systems with delayed, incomplete, or contradictory information, that incentive structure can amplify error.
As for fixes, the transcript points to practical calibration habits: track predictions and outcomes (“keep track and keep score”), seek feedback, and listen to people who disagree. The most reliable path to accuracy isn’t maximal certainty, but knowing the limits of one’s knowledge—because in environments where feedback is unreliable, confidence without calibration can become a costly gamble.
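The “keep track and keep score” habit can be sketched as a small prediction log. This is an illustrative design, not something from the transcript: the `PredictionLog` class and the sample claims are hypothetical, and the scoring rule used here is the Brier score (mean squared gap between stated probability and outcome), a standard way to reward calibrated probabilities over confident misses.

```python
# A minimal "keep track and keep score" prediction log (illustrative sketch).
# Each entry pairs a stated probability with the eventual yes/no outcome;
# the Brier score is the mean squared gap between probability and outcome
# (0.0 is perfect; always saying 50% earns 0.25).

class PredictionLog:
    def __init__(self):
        self.entries = []  # each entry: [claim, probability, outcome or None]

    def predict(self, claim, probability):
        """Record a forecast before the outcome is known."""
        self.entries.append([claim, probability, None])

    def resolve(self, claim, outcome):
        """Mark the first open prediction for this claim as true/false."""
        for entry in self.entries:
            if entry[0] == claim and entry[2] is None:
                entry[2] = 1.0 if outcome else 0.0
                return
        raise KeyError(f"no open prediction for {claim!r}")

    def brier_score(self):
        """Mean squared error over resolved predictions; None if none resolved."""
        resolved = [(p, o) for _, p, o in self.entries if o is not None]
        if not resolved:
            return None
        return sum((p - o) ** 2 for p, o in resolved) / len(resolved)

log = PredictionLog()
log.predict("market rebounds this quarter", 0.9)   # high confidence...
log.predict("rate cut announced", 0.6)
log.resolve("market rebounds this quarter", False) # ...but wrong
log.resolve("rate cut announced", True)
print(f"Brier score: {log.brier_score():.3f}")
# → Brier score: 0.485
```

Reviewing the log periodically is the point: a confident miss (0.9 on a false claim contributes 0.81) costs far more than a hedged one, which is exactly the feedback that noisy environments fail to provide on their own.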
Cornell Notes
The core finding is that people’s confidence often runs ahead of their accuracy, creating a calibration gap that can be amplified by noisy feedback. Experiments show that when people report high certainty (e.g., around 90%), they are correct substantially less often (about 75% in one study), and a Veritasium community test found extreme overconfidence among those claiming 91–100% certainty. Overconfidence appears across knowledge and skills, including among experts and professional forecasters, and it can be worsened by cognitive limits and information overload. Real-world consequences follow when decision-makers can’t reliably verify whether they’re wrong—such as in financial markets or political forecasting—making confidence a dangerous substitute for evidence.
- What does “calibration” mean, and why does it matter for decision-making?
- How did Nick Leeson’s overconfidence interact with Barings Bank’s institutional decisions?
- Why isn’t overconfidence always explained by arrogance or wanting to feel good?
- What’s the relationship between the Dunning–Kruger effect and the “Mount Stupid Curve” meme?
- How can noisy environments amplify overconfidence?
- What practical steps are suggested to reduce overconfidence?
Review Questions
- What evidence in the transcript supports the claim that confidence often overstates accuracy, and how large is the typical mismatch?
- Explain how noisy feedback can turn temporary success into long-term overconfidence, using the Nikkei 225 and account-88888 story as an example.
- Which factors—cognitive load, information shortcuts, social incentives, or motivated self-image—does the transcript treat as contributors to overconfidence, and how do they differ?
Key Points
1. Confidence and accuracy often diverge; high stated certainty can correspond to much lower correctness, indicating poor calibration.
2. Overconfidence becomes especially dangerous when feedback is delayed, incomplete, or noisy, making it hard to detect mistakes.
3. Leeson’s doubling-down strategy relied on the expectation that future wins would erase past losses, but market noise and misleading interim results amplified error.
4. Institutional overconfidence can mirror individual overconfidence: Barings management repeatedly funded Leeson despite doubts, reinforcing the illusion of competence.
5. Cognitive limits matter: judging one’s own accuracy is mentally taxing, and higher memory load can make confidence judgments less accurate.
6. Overconfidence can be socially rewarded because confident messaging activates reward-related brain responses and earns trust even when certainty is unwarranted.
7. Reducing overconfidence requires calibration practices—track predictions, seek feedback, and actively listen to credible disagreement.