
The Most Common Cognitive Bias

Veritasium · 4 min read

Based on Veritasium's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

People often form an early hypothesis and then generate only tests that keep that hypothesis plausible, even when the hypothesis is wrong.

Briefing

A simple three-number puzzle exposes a common cognitive bias: people latch onto an early guess and then keep generating answers that confirm it, even when the goal is to discover the real rule. Given the sequence 2, 4, 8, the challenge is to infer the hidden rule by proposing other three-number sets and receiving only “yes” or “no.” Many participants quickly settle on an obvious pattern—most often “multiply by 2”—and then keep testing variations that preserve that assumption. The feedback system does not reward their creativity; it rewards their willingness to be wrong.

As the game continues, “yes” responses appear for many sequences that fit the multiplication idea—3, 6, 12; 5, 10, 20; 100, 200, 400—yet doubling is not the actual rule. The early hypothesis keeps pulling attention toward familiar-looking answers. The turning point comes when the puzzle master deliberately offers sequences that should feel “wrong” under common assumptions—like 2, 4, 7—and watches whether participants can abandon their first model. Eventually, the correct rule emerges: the only requirement is that the numbers are in increasing order. That means many seemingly unrelated sets—1, 2, 3; 7, 8, 9; 8, 16, 39; 1, 7, 13—receive “yes,” while a set like 10, 9, 8 receives “no.” The participants realize they were “on the right track” only in the sense that they were generating plausible answers, not in the sense that they were identifying the true constraint.
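The hidden rule can be sketched as a tiny oracle function (a minimal sketch, assuming the rule described above — strictly increasing order; the function name is illustrative, not from the video):

```python
def fits_rule(a, b, c):
    """Hypothetical puzzle oracle: answer 'yes' iff the three numbers
    are in strictly increasing order (the assumed hidden rule)."""
    return a < b < c

# Doubling sequences pass, but only because they also increase:
print(fits_rule(3, 6, 12))   # True: fits doubling AND increasing order
print(fits_rule(2, 4, 7))    # True: breaks doubling, still increasing
print(fits_rule(10, 9, 8))   # False: decreasing
```

Because every doubling sequence is also increasing, doubling-based guesses can never get a “no” from this oracle — which is exactly why they fail to discriminate between the two rules.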

The discussion then broadens from the puzzle to how people reason in real life. The creator ties the behavior to Nassim Taleb’s “The Black Swan,” using the metaphor that unknown or unexpected cases can break the comfort of a tidy theory. In the old belief that all swans were white, each white swan reinforced confidence—until black swans appeared. The same pattern shows up in everyday thinking: once people form a rule early, they tend to search for supporting examples and ignore disconfirming ones.

The practical takeaway is methodological rather than philosophical. Instead of asking only questions that are likely to yield “yes,” the scientific method aims to disprove. Setting out to find “no” is more informative than collecting confirmations, because failure to disprove a claim is what gradually increases confidence. Applied broadly, the advice is to treat any belief as something to stress-test: try hard to break it. Only then can people avoid self-deception and move closer to what is actually true about reality.
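The stress-testing strategy can be made concrete: pick a test case where the candidate hypothesis and the true rule disagree, so the answer must refute one of them. This is a sketch under the summary's assumptions (an increasing-order oracle and a doubling hypothesis, both defined here for illustration):

```python
def oracle(a, b, c):
    # Assumed hidden rule from the summary: strictly increasing order.
    return a < b < c

def doubling_hypothesis(a, b, c):
    # Candidate rule: each number doubles the previous one.
    return b == 2 * a and c == 2 * b

# Confirming test: both agree, so the answer teaches nothing.
assert oracle(3, 6, 12) == doubling_hypothesis(3, 6, 12)

# Disconfirming test: the hypothesis predicts "no" for 2, 4, 7,
# but the oracle says "yes" -- a single answer refutes the hypothesis.
assert oracle(2, 4, 7) and not doubling_hypothesis(2, 4, 7)
```

The informative move is choosing 2, 4, 7 rather than another doubling sequence: only a case where the prediction can fail carries new information.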

Cornell Notes

A three-number “yes/no” puzzle reveals a bias toward confirmation. After seeing 2, 4, 8, many people assume a familiar rule like “multiply by 2,” then keep proposing sequences that match that guess. The feedback eventually shows the real rule is simpler and less intuitive: any three numbers in strictly increasing order get “yes,” while decreasing order gets “no.” The lesson connects to Taleb’s “Black Swan” idea: people overfit early theories and miss surprising counterexamples. The cure is to actively seek disconfirming evidence, aligning with the scientific method’s goal of trying to disprove beliefs.

Why do many proposed rules (like “multiply by 2”) keep getting “yes” responses even when they’re wrong?

Because early hypotheses shape what people choose to test. If someone assumes the rule is “multiply by 2,” they’ll generate sequences such as 3, 6, 12; 5, 10, 20; or 100, 200, 400. Those sequences are increasing, so they still satisfy the actual hidden rule (increasing order), producing “yes” and reinforcing the mistaken belief.

What is the hidden rule that ultimately fits both the “yes” and “no” examples?

The rule is that the three numbers must be in increasing order. Sets like 1, 2, 3; 7, 8, 9; 8, 16, 39; and 1, 7, 13 all get “yes.” A set like 10, 9, 8 gets “no” because it is not increasing.

How does the puzzle illustrate the difference between getting information from “yes” versus “no”?

“Yes” results often confirm what people already expect, especially when their guesses drive their test cases. “No” results are more diagnostic because they directly contradict the assumed rule. The puzzle’s design pushes participants to learn from contradictions rather than accumulate confirmations.

How does the “Black Swan” metaphor connect to the puzzle’s reasoning pattern?

Taleb’s metaphor highlights how repeated observations can create a false sense of certainty—like believing all swans are white because only white swans were seen. The puzzle mirrors that: once a simple rule is guessed, people keep sampling examples that fit it, making it easy to overlook the disconfirming cases that would reveal the true rule.

What does the scientific method recommendation add to the puzzle’s lesson?

It reframes belief-testing: instead of trying to verify a theory by collecting “yes” cases, the scientific method tries to disprove it. Confidence grows when a claim survives attempts to break it, not when it keeps producing expected confirmations.

Review Questions

  1. What kinds of test sequences did participants initially generate, and how did that choice bias the feedback they received?
  2. How would you design a new three-number puzzle to maximize the chance of producing informative “no” answers?
  3. Why is increasing order a “harder” rule to guess than multiplication, and what does that say about human pattern-seeking?

Key Points

  1. People often form an early hypothesis and then generate only tests that keep that hypothesis plausible, even when the hypothesis is wrong.

  2. In the puzzle, many “multiply by 2” examples still returned “yes” because they were also strictly increasing, masking the true rule.

  3. The most informative feedback tends to be “no,” because it directly contradicts an assumption rather than confirming it.

  4. Taleb’s “Black Swan” idea explains how repeated confirming observations can create overconfidence while disconfirming cases remain unseen.

  5. The scientific method’s strength is its focus on trying to disprove beliefs, not just collect supporting examples.

  6. A practical rule for everyday thinking: stress-test claims by actively searching for counterexamples.

Highlights

  • The hidden rule turned out to be simple: any three numbers in increasing order get “yes,” while decreasing order gets “no.”
  • Many participants kept proposing “multiply by 2” sequences and still got “yes,” showing how confirmation bias can be reinforced by the structure of what gets tested.
  • The puzzle reframes learning as a search for contradictions—“no” answers carry more information than “yes” answers.
  • Taleb’s “Black Swan” metaphor is used to argue that unknown counterexamples can overturn confident theories built from familiar cases.
  • The scientific method is presented as a discipline of attempted refutation: try to break a belief to get closer to truth.

Topics

Mentioned

  • Nassim Taleb