So I Tried To Learn Shaders...

The PrimeTime · 5 min read

Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Shaders compute per-pixel colors in parallel, so effects are expressed as math over coordinates and uniforms rather than sequential state updates.

Briefing

Shaders become understandable once they’re treated as massively parallel “pixel programs”: a fragment shader runs for every pixel on the screen, using the pixel’s screen coordinates to compute a color. The core learning arc here is moving from confusion about how pixels get their inputs (and how “blur” works) to a working mental model of GPU execution: thousands to millions of tiny tasks run in parallel, each one largely independent, which is why shader code is written as functions of position and time rather than step-by-step logic.

The transcript starts with a personal history of failing to grasp shader math from a book example years earlier—light swinging to create shadows and penumbras. That earlier failure becomes the motivation for a gentler rebuild: what a shader is, how it differs from CPU-style sequential programs, and why GPUs are built for parallel throughput. A CPU metaphor of “pipes” and “threads” leads into the practical reality: every frame requires computing color for hundreds of thousands of pixels (e.g., 800×600 is 480,000 pixels per frame), and high resolutions multiply that workload dramatically. This is the reason shaders feel “fast” when they’re written in the right style: the GPU architecture is designed to keep many small operations busy at once.

From there, the learning shifts into concrete WebGL/GLSL mechanics. The transcript introduces GLSL basics (vertex vs. fragment shaders), built-in variables like gl_FragColor and gl_FragCoord, and the importance of precision qualifiers (e.g., `precision mediump float;`). It also emphasizes that shader debugging is awkward: there is no normal console logging, and mistakes often show up as a white screen or compilation errors. The practical workaround is iterative experimentation: changing values, forcing strong colors, and reading shader compile logs when available.
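As a sketch of those basics, a minimal WebGL 1-style fragment shader looks like this (the built-in names are standard GLSL; the color choice is illustrative):

```glsl
// Required in WebGL 1 fragment shaders: a default float precision.
precision mediump float;

void main() {
    // gl_FragCoord.xy holds this fragment's window-space pixel position.
    // Forcing a loud, constant color is the usual "is it even running?" test.
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
}
```

Because there is no console logging inside GLSL, writing a strong constant color like this is often the first debugging move the transcript describes.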

A major milestone is getting a “hello world” style output: cornflower blue via a minimal WebGL pipeline (create context, compile shaders, link a program, draw). Then comes the real conceptual payoff: uniforms. Uniforms are read-only inputs set by the host (JavaScript) that remain constant across all fragment invocations during a draw call. The transcript uses uniforms like u_time, u_resolution, and u_mouse to drive animation and interaction, explaining how the same shader code can produce different results depending on those global inputs.
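A hedged sketch of the shader side of that setup, using the transcript's u_* naming convention (the exact color and pulse math here are illustrative, not the transcript's code):

```glsl
precision mediump float;

// Read-only inputs, set once per draw call by the JavaScript host.
uniform float u_time;       // seconds since start
uniform vec2  u_resolution; // canvas size in pixels
uniform vec2  u_mouse;      // mouse position in pixels

void main() {
    // Cornflower blue (roughly RGB 100, 149, 237 scaled to 0..1),
    // pulsed over time so the uniform-driven animation is visible.
    vec3 base = vec3(0.392, 0.584, 0.929);
    float pulse = 0.5 + 0.5 * sin(u_time);
    gl_FragColor = vec4(base * pulse, 1.0);
}
```

On the host side these uniforms would be set with calls like gl.uniform1f and gl.uniform2f before each draw; every fragment invocation then sees identical values.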

The transcript then dives into coordinate spaces and why results look “wrong” until the right space is used. It distinguishes vertex coordinates, fragment coordinates, and normalized device coordinates, and repeatedly corrects assumptions about where (0,0) lies and what range gl_FragCoord uses. Once coordinates are correct, the shader becomes a playground for math-driven visuals: distance fields (distance from the mouse), gradients, and time-based sine waves.
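The coordinate fix typically looks like the following sketch: divide pixel coordinates by the resolution so both the fragment and the mouse live in the same 0..1 space (uniform names follow the transcript's convention):

```glsl
precision mediump float;

uniform vec2 u_resolution; // canvas size in pixels
uniform vec2 u_mouse;      // mouse position in pixels

void main() {
    // Map window-space pixels into 0..1 so results are resolution-independent.
    vec2 uv    = gl_FragCoord.xy / u_resolution;
    vec2 mouse = u_mouse / u_resolution;

    // Distance field: brightness falls off with distance from the mouse.
    float d = distance(uv, mouse);
    gl_FragColor = vec4(vec3(1.0 - d), 1.0);
}
```

One caveat: browser mouse events use a top-left origin while gl_FragCoord uses bottom-left, so the y component may need flipping; that mismatch is exactly the kind of "looks wrong" bug the transcript keeps correcting.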

Finally, the transcript explores shaping functions—especially smoothstep—as a way to draw crisp lines and controlled transitions without branching. The “fence” exercise demonstrates how smoothstep creates an S-curve transition around a threshold, producing a thin green line where a condition like y≈x holds, while the rest of the screen stays mostly unchanged. The session ends with the broader takeaway: shaders are less about “debugging a program” and more about mastering math-to-image transformations, using parallelism-friendly functions of position, time, and uniforms to build increasingly complex visuals.
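A hedged reconstruction of the fence-style exercise (the 0.02 thickness and the y = x condition are illustrative; the transcript uses its own values):

```glsl
precision mediump float;

uniform vec2 u_resolution;

// Returns ~1.0 when 'value' is within a narrow band around 0, else ~0.0.
// smoothstep's S-curve softens the band's edges without any branching.
float plot(float value, float thickness) {
    return 1.0 - smoothstep(0.0, thickness, abs(value));
}

void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution;

    // The "fence" condition: how far is this fragment from the line y = x?
    float line = plot(uv.y - uv.x, 0.02);

    // Green only where the condition holds; dark gray everywhere else.
    vec3 color = mix(vec3(0.1), vec3(0.0, 1.0, 0.0), line);
    gl_FragColor = vec4(color, 1.0);
}
```

Note that smoothstep is applied to the absolute distance from the line, which is why only a thin strip of fragments lights up while the rest of the screen stays near zero.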

Cornell Notes

The transcript builds a working understanding of shaders by treating them as parallel programs that compute a color for every pixel. It contrasts CPU-style sequential thinking with GPU execution, where each fragment shader invocation is independent and driven by inputs like position, time, and resolution. The session focuses on WebGL/GLSL fundamentals: vertex vs. fragment shaders, built-in variables (like gl_FragCoord), and especially uniforms, which are read-only values set by JavaScript and constant across a draw call. It also highlights shader debugging realities (white screens, compilation errors, limited logging) and uses iterative experiments to learn. Finally, it demonstrates how smoothstep and other shaping functions turn simple math (distance, thresholds, sine waves) into crisp lines and smooth transitions.

Why does shader code often look like “a function of position” rather than step-by-step logic?

Because fragment shaders run once per pixel, in parallel. Each invocation computes a color using inputs available to that pixel (e.g., gl_FragCoord) plus global read-only uniforms (e.g., u_time, u_resolution). GPU threads can’t reliably communicate with each other during the draw, so the computation must be independent per pixel. That’s why effects like blur, gradients, and distance-based shading are expressed as math over coordinates rather than sequential state updates.

What are uniforms in GLSL, and why are they crucial for interaction and animation?

Uniforms are global variables set by the host application (JavaScript in WebGL) and read by both vertex and fragment shaders. Their key property is constancy during a draw call: every fragment invocation sees the same uniform values. In the transcript, u_time drives animation, u_resolution enables coordinate normalization, and u_mouse provides interaction. This lets the same shader program produce different frames without rewriting shader code.

How do coordinate spaces cause “it looks wrong” moments in shader work?

The transcript repeatedly corrects assumptions about where (0,0) is and what range fragment coordinates occupy. gl_FragCoord provides window-space/pixel coordinates for the current fragment, while other spaces (like normalized device coordinates) use different ranges (often -1 to 1). If distance or gradients are computed in the wrong space, the output can appear flipped, stretched, or shifted. The fix is to normalize using u_resolution and use the correct coordinate interpretation.

What does smoothstep do, and why does it create a thin line instead of coloring the whole screen?

smoothstep(edge0, edge1, x) performs a smooth Hermite interpolation from 0 to 1 as x moves between edge0 and edge1. In the “fence” example, the shader computes how close the current fragment is to a line condition (like y≈x). Only fragments whose distance falls within a narrow threshold range (e.g., around 0.2) get values near 1; most fragments are outside the threshold and remain near 0. That’s why the green line appears only where the condition is met, while the rest of the screen stays mostly black/gray.

Why is shader debugging harder than typical CPU debugging?

The transcript emphasizes that GLSL/WebGL debugging lacks normal tools like console logging. Errors often show up as compilation failures or a white screen. Even when it runs, visual output is the primary feedback mechanism. The practical approach is iterative experimentation: change one variable at a time, force strong colors, and interpret errors like “no frag output” or vertex attribute mismatches to correct the pipeline.

How do time and trig functions (sin/cos) become animation tools in shaders?

The transcript uses u_time with sine waves to animate patterns. Since sin and cos vary smoothly between -1 and 1, they can modulate brightness, positions, or thresholds over time. Scaling time (e.g., multiplying by π or by a speed constant) controls the animation's speed and frequency. Combining these with shaping functions like smoothstep yields crisp, controllable motion rather than raw oscillations.
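A minimal illustration of that pattern (u_* names follow the transcript's convention; the constants are arbitrary):

```glsl
precision mediump float;

uniform float u_time;
uniform vec2  u_resolution;

void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution;

    // sin() returns -1..1; remap to 0..1 so it can drive brightness.
    // Scaling u_time changes speed; scaling uv.x changes spatial frequency.
    float wave = 0.5 + 0.5 * sin(uv.x * 10.0 + u_time * 2.0);

    // Sharpen the raw oscillation into crisp bands with smoothstep.
    float band = smoothstep(0.4, 0.6, wave);

    gl_FragColor = vec4(vec3(band), 1.0);
}
```

The result is a set of vertical bands drifting across the screen: the sine supplies the motion, and smoothstep turns its soft oscillation into controlled, crisp stripes.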

Review Questions

  1. When would you use a uniform instead of a varying, and what does each imply about whether values can differ per fragment?
  2. Explain how smoothstep(edge0, edge1, x) determines whether a fragment is “on” or “off” in the fence/line example.
  3. If gl_FragCoord is in pixel/window coordinates, what steps are needed to compute a distance to the mouse in a normalized coordinate system?

Key Points

  1. Shaders compute per-pixel colors in parallel, so effects are expressed as math over coordinates and uniforms rather than sequential state updates.
  2. GPUs are optimized for many independent threads; fragment shader invocations generally can’t depend on each other during a draw call.
  3. WebGL/GLSL learning is easiest by building a minimal pipeline first (compile/link shaders, draw), then iteratively adding uniforms and math.
  4. Uniforms (e.g., u_time, u_resolution, u_mouse) are read-only values set by the host and constant across all fragments in a draw call.
  5. Correct coordinate-space handling is essential: gl_FragCoord is window/pixel space, so normalization with u_resolution is often required for consistent gradients and distance fields.
  6. Shader debugging is mostly visual and compile-error driven; forcing strong colors and changing one variable at a time is a practical strategy.
  7. Shaping functions like smoothstep turn threshold-like conditions into smooth, controllable transitions, enabling thin lines and crisp bands without branching.

Highlights

A shader’s “unit of work” is the fragment: each pixel runs the fragment shader independently, producing a color from inputs like gl_FragCoord plus uniforms.
Uniforms are the bridge between CPU and GPU: set once per draw call, read by every fragment invocation, enabling animation (u_time) and interaction (u_mouse).
smoothstep is the key to drawing thin lines: only fragments whose computed distance falls within a narrow edge range get values near 1; everything else stays near 0.
The transcript repeatedly shows that most “wrong output” bugs come from mixing coordinate spaces (pixel vs. normalized vs. NDC) and assuming (0,0) is where it isn’t.

Topics

  • Shaders as Parallel Programs
  • WebGL GLSL Basics
  • Uniforms and Interactivity
  • Coordinate Spaces
  • smoothstep Shaping Functions
  • Time-Based Trig Animation

Mentioned

  • GPU
  • CPU
  • GLSL
  • HLSL
  • HSL
  • WebGL
  • WebGPU
  • HTML canvas
  • NDC
  • SIMD
  • UI
  • gl_FragColor
  • gl_FragCoord
  • SDF
  • PI