
Computer Color is Broken

minutephysics · 5 min read

Based on minutephysics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Blurry dark boundaries between bright colors come from averaging pixel values in a non-linear brightness encoding rather than in linear physical light space.

Briefing

Blurring colorful images on computers often produces a dark, muddy boundary between bright adjacent colors, an artifact that doesn't happen in real life. The root cause isn't a mysterious graphics "glitch" but a mismatch between how humans perceive brightness and how digital systems store and blend brightness values. Human vision responds to relative changes in light in a roughly logarithmic way: the same physical increase in brightness feels like a much bigger perceptual jump in a dark scene than near the highlights. Computers and sensors, by contrast, measure brightness in a linear, photon-counting way, and then store values in a compressed form tuned for human-friendly viewing and efficient storage.

That compression is the key. Digital imaging typically stores brightness values such that a value of 0.5 doesn’t mean “half as bright” in physical photons. Instead, it’s closer to one-fifth the photon level of full white (and 0.25 is even lower—about one-twentieth of white). Cameras often achieve this by storing the square root of the measured brightness rather than the brightness itself, because human vision is more sensitive to small differences in dark regions and less sensitive in bright regions. When displaying the image, software “undoes” this by squaring the stored values back into physical brightness.
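The numbers above can be checked with a small sketch. The exact transfer curve here is an assumption: the summary describes a simple square-root encoding, while the "0.5 stores about one-fifth of white" figure matches a gamma of roughly 2.2, close to what real sRGB pipelines use.

```python
# Decode/encode a stored [0, 1] brightness value under an assumed gamma-2.2
# transfer curve (a stand-in for the real sRGB curve; a pure square root
# would use gamma = 2.0).
GAMMA = 2.2

def decode(stored: float) -> float:
    """Stored value -> linear (photon-count) brightness."""
    return stored ** GAMMA

def encode(linear: float) -> float:
    """Linear brightness -> stored encoding."""
    return linear ** (1.0 / GAMMA)

print(decode(0.5))   # ~0.218: about one-fifth of full white
print(decode(0.25))  # ~0.047: about one-twentieth of full white
```

Under this curve a stored 0.5 really does correspond to roughly a fifth of white's photons, and 0.25 to roughly a twentieth, matching the figures in the text.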

Problems start when software edits images—especially when it performs operations like blur or blends pixels with transparency. Blurring is essentially a local averaging: replace each pixel with the average of nearby pixels. But if the image values are still in the camera’s square-rooted (perceptual) space, then averaging those stored values is mathematically wrong for physical light blending. The average of two square roots is always less than the square root of the average, which systematically biases the result downward in brightness. That’s why a red-to-green boundary that should look like a clean halfway mix instead turns into a dark brownish sludge.
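The downward bias is easy to see numerically. A minimal sketch, assuming a pure square-root encoding, blending a full-bright pixel with a black one:

```python
import math

# Two physical brightnesses (linear photon counts): full white and black.
a, b = 1.0, 0.0

# Stored (square-rooted) values, as a camera might record them.
sa, sb = math.sqrt(a), math.sqrt(b)

# Wrong: average the stored values, then decode the result to linear.
wrong = ((sa + sb) / 2) ** 2

# Right: decode each value to linear first, then average.
right = (a + b) / 2

print(wrong, right)  # 0.25 vs 0.5: the naive blend has half the photons
```

The naive blend lands at a quarter of white's brightness when the physically correct mix is half, which is exactly the "too dark" bias described above.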

The fix is straightforward in principle: before averaging, convert the stored values back to linear brightness by squaring them; average in that linear space; then square-root the result again to return to the stored encoding. When this “linearize → blend → re-encode” workflow is followed, the transition between colors looks smooth and natural, matching real-world out-of-focus blending.
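The "linearize → blend → re-encode" workflow can be sketched as a one-dimensional box blur. This is an illustrative toy, assuming a simple square-root encoding; real pipelines use the sRGB curve, but the principle is identical.

```python
def blur_naive(stored):
    """Average neighbouring stored values directly (the wrong way)."""
    return [(stored[i] + stored[i + 1]) / 2 for i in range(len(stored) - 1)]

def blur_linear(stored):
    """Square to linearize, average in linear space, square-root to re-encode."""
    linear = [v ** 2 for v in stored]
    blended = [(linear[i] + linear[i + 1]) / 2 for i in range(len(linear) - 1)]
    return [v ** 0.5 for v in blended]

edge = [1.0, 1.0, 0.0, 0.0]   # a hard bright-to-dark edge, in stored values
print(blur_naive(edge))   # [1.0, 0.5, 0.0]: the midpoint is too dark
print(blur_linear(edge))  # [1.0, 0.707..., 0.0]: physically correct midpoint
```

The only difference between the two functions is the space in which the average happens, yet the naive version darkens every transition.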

Despite the simplicity of the correction, most common software pipelines apply the lazy approach—averaging the square-rooted values directly. The result shows up across everyday tools, from iOS and Instagram to default settings in Adobe Photoshop. Advanced options can enable correct, physically based blending, but the central complaint is that accurate brightness handling shouldn’t require digging through settings. Beauty, in this case, is a matter of doing the math in the right space by default.

Cornell Notes

Computer blurring often creates dark, muddy boundaries between bright colors because software averages pixel values in the wrong brightness space. Human vision is roughly logarithmic in how it perceives brightness, so cameras store brightness using a non-linear encoding—commonly the square root of physical brightness. When editing tools blur or blend transparency by averaging the stored (square-rooted) values directly, the result is biased too dark because averaging square roots is not the same as averaging linear brightness. The correct workflow is to square values to linearize, average in linear space, then square-root again to return to the stored encoding. Using this “linearize → blend → re-encode” approach produces smooth, natural color transitions.

Why does a blur between bright colors look dark on computers even when the colors are vivid?

Because most software averages pixel values in the camera’s stored encoding rather than in linear physical brightness. Cameras typically store the square root of measured brightness to match human sensitivity and save data. Averaging those square-rooted values makes the blend too dark: the average of two square roots is always less than the square root of the average. That mathematical bias shows up as a brownish sludge at boundaries like red-to-green.

What does it mean that 0.5 isn’t “half as bright” in digital images?

In the common encoding, stored values are not linear in photons. A value of 1 corresponds to full brightness, but 0.5 can look halfway between black and white while representing far fewer photons—about one-fifth of white in physical brightness. Similarly, 0.25 corresponds to roughly one-twentieth of white’s photons. The numbers are tuned for perception, not direct physical light measurement.

How does camera storage relate to human perception?

Human vision detects relative brightness changes in a roughly logarithmic way, so the same physical light change feels different depending on whether the scene is dark or bright. Digital imaging exploits this by storing brightness in a non-linear way—often square-rooted—so dark regions get more effective gradation detail and bright regions get fewer, matching what humans can reliably distinguish.

What is the correct method to blur or blend colors without the dark boundary?

Undo the camera’s non-linear encoding before blending. Square each stored brightness value to convert back to linear physical brightness, average those linear values (for blur or transparency blending), then square-root the result to re-encode it in the original stored space. This preserves physically plausible light mixing and yields smooth transitions.
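Applied per channel, the same recipe explains the red-to-green example. A minimal sketch, again assuming square-root-encoded channel values rather than the full sRGB curve:

```python
import math

red   = (1.0, 0.0, 0.0)   # stored RGB values
green = (0.0, 1.0, 0.0)

def blend_naive(p, q):
    """50/50 blend directly in the stored space (the wrong way)."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def blend_linear(p, q):
    """Square -> average -> square-root, channel by channel."""
    return tuple(math.sqrt((a ** 2 + b ** 2) / 2) for a, b in zip(p, q))

print(blend_naive(red, green))   # (0.5, 0.5, 0.0): a dark, muddy mix
print(blend_linear(red, green))  # (0.707..., 0.707..., 0.0): a brighter yellow
```

The naive blend's channels sit at 0.5, which (as noted above) encodes far less than half of white's light, while the linear blend keeps the mixed color as bright as the physics says it should be.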

Why do common apps like Instagram or default Photoshop settings often get it wrong?

Most pipelines use the “lazy” approach: they average pixel values directly in the stored (square-rooted/perceptual) space. That shortcut ignores that the stored values aren’t linear in photons, so the blend is systematically too dark. Correct blending requires linearizing first, which many defaults don’t do unless advanced options are enabled.

Review Questions

  1. When blurring two adjacent colors, what mathematical mismatch occurs if you average square-root-encoded brightness values directly?
  2. Why does linearizing (squaring) before averaging produce a more natural red-to-green transition?
  3. How does the human visual system’s roughly logarithmic brightness perception influence camera encoding choices?

Key Points

  1. Blurry dark boundaries between bright colors come from averaging pixel values in a non-linear brightness encoding rather than in linear physical light space.

  2. Human brightness perception is roughly logarithmic, so digital imaging stores brightness values in a way that better matches what people can distinguish.

  3. A stored value like 0.5 is not physically “half the photons” of white; it corresponds to far fewer photons (about one-fifth).

  4. Cameras often store square roots of measured brightness; correct display requires squaring back to linear brightness.

  5. Correct blur and transparency blending requires squaring (linearize) → averaging in linear space → square-rooting (re-encode).

  6. Most everyday software uses direct averaging in stored space, producing systematically too-dark blends and muddy color transitions.

  7. Professional tools may offer correct blending options, but accurate brightness handling should be the default rather than a hidden setting.

Highlights

The dark sludge at color boundaries is a predictable math error: averaging square roots makes results too dark.
A stored brightness value of 0.5 can represent only about one-fifth the photons of white, because the encoding is tuned to perception.
The fix is simple: linearize by squaring, blend in linear space, then re-encode by square-rooting.
Most common apps and default Photoshop settings blur in the wrong space, so the artifact shows up widely.

Topics

  • Color Blending
  • Image Encoding
  • Perceptual Brightness
  • Gamma/Square-Root
  • Blur Artifacts