
But what are Hamming codes? The origin of error correction

3Blue1Brown · 5 min read

Based on 3Blue1Brown's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Parity bits enforce even parity over selected groups, letting receivers detect that an error occurred but not where it happened.

Briefing

Scratches, noise, and transmission glitches can flip 1s and 0s—yet many storage and communication systems still recover the original data exactly. The core idea behind that resilience is to add carefully chosen redundancy so that a receiver can both detect an error and, for the common case of a single flipped bit, determine exactly which bit went wrong. Hamming codes are an early, mathematically efficient example: they use a small number of parity bits to turn “something is wrong” into “this specific position is wrong,” without needing to store full extra copies of the data.

The starting point is parity checking, a simple mechanism that detects odd numbers of bit flips. A sender sets one parity bit so that the total number of 1s in a group is even. If the receiver later finds odd parity, it knows at least one bit changed—though it cannot locate which one. That limitation is fundamental: parity alone can’t distinguish between “one error” and “three errors,” and even an even parity result could still hide two or more flips. The breakthrough comes from applying multiple parity checks, but not to the whole block. Instead, each parity bit covers a carefully selected subset of positions.
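As a minimal sketch of that mechanism (a hypothetical Python illustration, not code from the video), a sender can append one parity bit and a receiver can check it:

```python
def add_parity(bits):
    """Append a parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(block):
    """Even parity passes; an odd number of flipped bits fails."""
    return sum(block) % 2 == 0

message = [1, 0, 1, 1, 0, 0, 1]
block = add_parity(message)
assert parity_ok(block)          # transmitted cleanly: check passes
block[3] ^= 1                    # a single bit flip...
assert not parity_ok(block)      # ...is detected, but its position is unknown
```

Flipping any single position produces the same "check failed" outcome, which is exactly why one parity bit cannot localize the error.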

In the classic Hamming-code construction, positions are indexed within a block, and certain indices (the powers of 2) are reserved for parity. Using a 16-bit layout as an illustrative example, four parity positions (1, 2, 4, and 8) are used to locate a single-bit error. Each parity bit checks the parity of a carefully chosen subset of positions; taken together, the pattern of which parity checks fail acts like a compact "address" for the error location. Conceptually, it's like playing 20 questions: each parity check is a yes/no query that halves the remaining possibilities. With four parity checks, the receiver can pinpoint any of the 15 positions indexed 1 through 15; the only ambiguous outcome is the "no parity checks fail" case, which is resolved by excluding position 0 from the correction scheme.
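One compact way to realize that "address" in code (a standard formulation consistent with this construction, though not spelled out in this summary) is to XOR together the indices of every position holding a 1: for a valid block the result is 0, and a single flip at position p changes the result to exactly p.

```python
from functools import reduce
from operator import xor

def error_address(block):
    """XOR the indices of all 1-bits; 0 means every parity check passes,
    and after one bit flip the result names the flipped position."""
    return reduce(xor, (i for i, b in enumerate(block) if b), 0)

block = [0] * 16
for i in (3, 5, 6, 10, 12, 15):    # arbitrary data bits for the demo
    block[i] = 1
s = error_address(block)
for p in (1, 2, 4, 8):             # set the parity bits so the address reads 0
    if s & p:
        block[p] ^= 1
assert error_address(block) == 0   # a valid block: all four checks pass
block[10] ^= 1                     # simulate a single-bit error
assert error_address(block) == 10  # the failed checks spell out position 10
```

Reading the result in binary shows which checks failed: 10 is 1010, so the checks tied to parity positions 8 and 2 fail while the other two pass.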

That yields the familiar 15-11 Hamming code: 11 data bits plus 4 redundancy bits. The redundancy bits are not simple copies; they’re deterministic functions of the data that create a map from parity outcomes to bit positions. If exactly one bit flips, the receiver can identify the faulty position and correct it. If two bits flip, the receiver can detect that something went wrong but generally cannot correct it.
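A sketch of that encoding step (hypothetical helper names; position 0 is simply left unused here): place the 11 data bits in the non-power-of-2 slots, then set the 4 parity bits, as deterministic functions of the data, so every check passes.

```python
from functools import reduce
from operator import xor

DATA_POS = [i for i in range(1, 16) if i not in (1, 2, 4, 8)]  # 11 slots

def encode_15_11(data11):
    """Fill the data slots, then choose parity bits 1, 2, 4, 8 so the
    XOR of all 1-bit indices (i.e. every parity check) comes out to 0."""
    block = [0] * 16
    for pos, bit in zip(DATA_POS, data11):
        block[pos] = bit
    s = reduce(xor, (i for i, b in enumerate(block) if b), 0)
    for p in (1, 2, 4, 8):
        if s & p:
            block[p] = 1
    return block

codeword = encode_15_11([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
assert reduce(xor, (i for i, b in enumerate(codeword) if b), 0) == 0
```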

To detect double-bit errors as well, the construction can be extended by reintroducing position 0 as an overall parity bit across the entire block. With this “extended Hamming code,” a single-bit error makes the overall parity odd, while two-bit errors keep the overall parity even—but still disturb at least one of the four correction-related parity checks. The result: single-bit errors are correctable, and two-bit errors become detectable.
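A sketch of the extended decoder's decision logic (hypothetical function names; block position 0 holds the overall parity bit):

```python
from functools import reduce
from operator import xor

def classify(block):
    """Combine the overall parity with the four-check error address to
    separate 'no error', 'single error', and 'double error' outcomes."""
    s = reduce(xor, (i for i, b in enumerate(block) if b), 0)
    overall_odd = sum(block) % 2 == 1
    if overall_odd:                        # one flip (or an odd number)
        return f"single error at position {s}"
    if s != 0:                             # overall parity even, yet checks fail
        return "double error detected (uncorrectable)"
    return "no error"

# build a valid extended block: data, then parity bits, then overall parity
block = [0] * 16
for i in (3, 6, 7, 12):
    block[i] = 1
s = reduce(xor, (i for i, b in enumerate(block) if b), 0)
for p in (1, 2, 4, 8):
    if s & p:
        block[p] ^= 1
block[0] = sum(block) % 2
assert classify(block) == "no error"
block[5] ^= 1
assert classify(block) == "single error at position 5"
block[9] ^= 1
assert classify(block) == "double error detected (uncorrectable)"
```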

The transcript walks through a full worked example of encoding an 11-bit message into a 16-bit block by setting parity bits so that each checked subset has even parity, then demonstrates decoding by recomputing parity outcomes after flipping one or two bits. It ends by hinting at a more elegant, scalable implementation—compressing the multi-parity logic into a systematic computation suitable for code—setting up a follow-up that turns the hand-calculation method into an algorithm.

Cornell Notes

Hamming codes add a small number of parity bits to data so that a receiver can correct any single-bit error and often detect multi-bit errors. The method starts with parity checking, which can detect that “something changed” but not where. By running several parity checks on carefully chosen subsets of positions, the pattern of which checks fail encodes the error’s location—like a yes/no “address” for the flipped bit. A standard 15-11 Hamming code uses 4 parity bits to correct one error among 15 positions, while an extended version adds an overall parity bit to detect two-bit errors. This matters because it turns noisy reads (like scratched disks) into reliable recovery without storing full redundant copies.

Why does a single parity bit only detect errors but not locate them?

A parity bit enforces that the total number of 1s in a group is even. If a bit flips, the number of 1s changes parity (even ↔ odd), so the receiver can tell an error occurred. But the parity result doesn’t identify which position changed; many different single-bit flips produce the same parity outcome. Also, odd parity could come from 1, 3, 5, etc. flips, while even parity could still hide 2, 4, 6, etc. flips.

How do multiple parity checks let a receiver pinpoint a single flipped bit?

Each parity bit checks a different subset of positions. For a 16-bit example, parity bits sit at indices 1, 2, 4, and 8. If the receiver recomputes each subset’s parity, the set of checks that come out wrong narrows the error to a specific column and row in the subset structure—equivalently, to a specific bit index. The yes/no outcomes across the parity checks act like binary queries that identify one location among the non-reserved positions.
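The subset membership has a clean binary description (a standard property of this indexing, assuming 0-indexed positions): the parity bit at index 2**k covers exactly the positions whose index has binary bit k set.

```python
def covered_by(k):
    """Positions in a 16-bit block checked by the parity bit at index 2**k."""
    return [i for i in range(16) if (i >> k) & 1]

# parity bit at index 1 covers every odd position (alternating columns);
# parity bit at index 8 covers the bottom half of a 4x4 grid
assert covered_by(0) == [1, 3, 5, 7, 9, 11, 13, 15]
assert covered_by(3) == [8, 9, 10, 11, 12, 13, 14, 15]
```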

What is the “15-11” structure, and why does it exclude position 0 from correction?

With four parity checks, there are 16 possible outcome patterns, but one pattern must be reserved for "no error," leaving 15 patterns to name error positions. The outcome "no parity checks fail" could mean either "no error" or "error at position 0," so position 0 is excluded from the correction scheme. That leaves a 15-position block in which 4 positions (the powers of 2) hold redundancy and the remaining 11 carry data, yielding the 15-11 Hamming code.

How does the extended Hamming code detect two-bit errors?

It adds an overall parity bit (position 0) across the entire block. A single-bit error flips the overall parity to odd, and the subset parity checks also indicate an error location. Two-bit errors flip two bits, so the overall parity toggles twice and returns to even; however, at least one of the subset parity checks will still fail. So overall-even plus nonzero subset failures signals “two errors detected,” even though correction isn’t possible.

In the worked decoding example, how does the parity pattern determine the exact bit to flip back?

After the receiver checks the parity of each of the four special subsets, each check being even or odd narrows the candidate location. In the example, the parity outcomes correspond to an error in the bottom portion, ultimately identifying position 10. The overall parity being odd provides confidence it was a single-bit flip rather than two or more. After correcting position 10, the 11 data bits extracted from the non-parity positions match the original message.
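The decode-and-recover loop described above can be sketched end to end (a hypothetical Python illustration mirroring the summary's position-10 example):

```python
from functools import reduce
from operator import xor

DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15]  # the 11 data slots

def correct_single(block):
    """Recompute the four parity checks; if any fail, flip back the bit
    at the position their yes/no pattern spells out."""
    s = reduce(xor, (i for i, b in enumerate(block) if b), 0)
    if s:
        block[s] ^= 1
    return block

message = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
block = [0] * 16
for pos, bit in zip(DATA_POS, message):
    block[pos] = bit
s = reduce(xor, (i for i, b in enumerate(block) if b), 0)
for p in (1, 2, 4, 8):             # encode: zero out all four checks
    if s & p:
        block[p] ^= 1
block[10] ^= 1                     # the single-bit error from the example
fixed = correct_single(block)
assert [fixed[pos] for pos in DATA_POS] == message  # message recovered
```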

Review Questions

  1. What information does parity provide, and what ambiguity remains after a single parity check?
  2. How do the four parity checks in a Hamming code combine to identify a specific bit position for a single-bit error?
  3. Why does adding an overall parity bit enable detection (but not correction) of two-bit errors in an extended Hamming code?

Key Points

  1. Parity bits enforce even parity over selected groups, letting receivers detect that an error occurred but not where it happened.

  2. Hamming codes locate single-bit errors by running multiple parity checks over carefully chosen subsets of positions.

  3. The pattern of which parity checks fail functions like a compact binary address for the error location.

  4. A standard 15-11 Hamming code uses 4 redundancy bits to correct any single-bit error among 15 positions, avoiding ambiguity with the “no failures” outcome.

  5. An extended Hamming code adds an overall parity bit so that two-bit errors can be detected: overall parity stays even while subset checks still show inconsistencies.

  6. Encoding and decoding can be done by recomputing parity conditions and mapping the resulting yes/no pattern to a specific bit index.

Highlights

Scratched disks still play because error correction can recover the original bit pattern even when individual bits flip during reading.
Parity checking detects errors by tracking even vs odd counts of 1s, but it cannot locate the flipped bit.
Hamming codes turn several parity checks into a yes/no “addressing” system that pinpoints a single-bit error position.
Extended Hamming codes add one more parity constraint to detect two-bit errors by separating “overall parity” from “subset parity” outcomes.

Topics

  • Error Correction Codes
  • Parity Checks
  • Hamming Codes
  • Extended Hamming Codes
  • Single-Bit Correction
