Computer Color is Broken
Based on minutephysics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Blurry dark boundaries between bright colors come from averaging pixel values in a non-linear brightness encoding rather than in linear physical light space.
Briefing
Blurring colorful images on computers often produces a dark, muddy boundary between bright adjacent colors, an artifact that doesn’t happen in real life. The root cause isn’t a mysterious graphics “glitch” but a mismatch between how humans perceive brightness and how digital systems store and blend brightness values. Human vision responds to relative, roughly logarithmic changes in light: a fixed physical increase in light feels like a much bigger perceptual jump in a dark scene than the same increase does near the highlights. Computers and sensors, by contrast, measure brightness linearly, essentially counting photons, and then store the values in a compressed form that suits human viewing and efficient storage.
That compression is the key. Digital imaging typically stores brightness values such that a value of 0.5 doesn’t mean “half as bright” in physical photons. Instead, it’s closer to one-fifth the photon level of full white (and 0.25 is even lower—about one-twentieth of white). Cameras often achieve this by storing the square root of the measured brightness rather than the brightness itself, because human vision is more sensitive to small differences in dark regions and less sensitive in bright regions. When displaying the image, software “undoes” this by squaring the stored values back into physical brightness.
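As a rough sketch of this scheme (using an exact square root for simplicity; real formats such as sRGB use a transfer curve closer to gamma 2.2, which is why a stored 0.5 decodes to about one-fifth of white rather than exactly one-quarter):

```python
def encode(linear_brightness):
    # Camera side: store the square root of the measured (linear) brightness,
    # devoting more of the value range to dark tones, where vision is most sensitive.
    return linear_brightness ** 0.5

def decode(stored_value):
    # Display side: square the stored value to recover physical brightness.
    return stored_value ** 2

# With a pure square-root curve, a stored 0.5 decodes to 0.25 of full white
# (with sRGB's ~2.2 gamma, the same stored value lands closer to one-fifth),
# and a stored 0.25 decodes to 0.0625 (vs. roughly one-twentieth under sRGB).
print(decode(0.5))   # 0.25
print(decode(0.25))  # 0.0625
```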
Problems start when software edits images, especially when it blurs them or blends pixels with transparency. Blurring is essentially local averaging: replace each pixel with the average of nearby pixels. But if the image values are still in the camera’s square-rooted (perceptual) space, averaging those stored values is the wrong operation for blending physical light. The average of the square roots of two different values is always less than the square root of their average, so the result is systematically biased toward darkness. That’s why a red-to-green boundary that should look like a clean halfway mix instead turns into a dark brownish sludge.
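The downward bias can be seen with just two pixels, one bright and one dark (a minimal sketch, again assuming the simplified square-root encoding):

```python
# Two adjacent pixels: linear brightness 1.0 (full white) and 0.0 (black).
# Their stored, square-rooted values are also 1.0 and 0.0.
stored = [1.0, 0.0]

# Naive blur: average the stored values directly, then display (square).
naive_displayed = (sum(stored) / len(stored)) ** 2   # (0.5)^2 = 0.25

# Physically correct: the blended light is the average of the LINEAR values.
correct_displayed = sum(v ** 2 for v in stored) / len(stored)  # 0.5

# The naive result shows only 25% brightness where 50% belongs.
print(naive_displayed, correct_displayed)  # 0.25 0.5
```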
The fix is straightforward in principle: before averaging, convert the stored values back to linear brightness by squaring them; average in that linear space; then square-root the result again to return to the stored encoding. When this “linearize → blend → re-encode” workflow is followed, the transition between colors looks smooth and natural, matching real-world out-of-focus blending.
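The “linearize → blend → re-encode” workflow above can be sketched per color channel (the function names are illustrative, and a real pipeline would use the sRGB transfer curve rather than a pure square root):

```python
def blend_naive(a, b):
    # Wrong: average the stored (square-rooted) values directly.
    return tuple((x + y) / 2 for x, y in zip(a, b))

def blend_linear(a, b):
    # Right: square to linearize, average the physical light, then
    # square-root to return to the stored encoding.
    return tuple(((x ** 2 + y ** 2) / 2) ** 0.5 for x, y in zip(a, b))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)

print(blend_naive(red, green))   # (0.5, 0.5, 0.0): displays dark and muddy
print(blend_linear(red, green))  # (~0.707, ~0.707, 0.0): a brighter, cleaner mix
```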
Despite the simplicity of the correction, most common software pipelines take the lazy approach and average the square-rooted values directly. The result shows up across everyday tools, from iOS and Instagram to the default settings in Adobe Photoshop. Advanced options can enable correct, physically based blending, but the central complaint is that accurate brightness handling shouldn’t require digging through settings. Beauty, in this case, is a matter of doing the math in the right space by default.
Cornell Notes
Computer blurring often creates dark, muddy boundaries between bright colors because software averages pixel values in the wrong brightness space. Human vision is roughly logarithmic in how it perceives brightness, so cameras store brightness using a non-linear encoding—commonly the square root of physical brightness. When editing tools blur or blend transparency by averaging the stored (square-rooted) values directly, the result is biased too dark because averaging square roots is not the same as averaging linear brightness. The correct workflow is to square values to linearize, average in linear space, then square-root again to return to the stored encoding. Using this “linearize → blend → re-encode” approach produces smooth, natural color transitions.
- Why does a blur between bright colors look dark on computers even when the colors are vivid?
- What does it mean that 0.5 isn’t “half as bright” in digital images?
- How does camera storage relate to human perception?
- What is the correct method to blur or blend colors without the dark boundary?
- Why do common apps like Instagram or default Photoshop settings often get it wrong?
Review Questions
- When blurring two adjacent colors, what mathematical mismatch occurs if you average square-root-encoded brightness values directly?
- Why does linearizing (squaring) before averaging produce a more natural red-to-green transition?
- How does the human visual system’s roughly logarithmic brightness perception influence camera encoding choices?
Key Points
1. Blurry dark boundaries between bright colors come from averaging pixel values in a non-linear brightness encoding rather than in linear physical light space.
2. Human brightness perception is roughly logarithmic, so digital imaging stores brightness values in a way that better matches what people can distinguish.
3. A stored value like 0.5 is not physically “half the photons” of white; it corresponds to far fewer photons (about one-fifth).
4. Cameras often store square roots of measured brightness; correct display requires squaring back to linear brightness.
5. Correct blur and transparency blending requires squaring (linearize) → averaging in linear space → square-rooting (re-encode).
6. Most everyday software uses direct averaging in stored space, producing systematically too-dark blends and muddy color transitions.
7. Professional tools may offer correct blending options, but accurate brightness handling should be the default rather than a hidden setting.