So I Tried To Learn Shaders...
Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Shaders become understandable once they’re treated as massively parallel “pixel programs”: a fragment shader runs for every pixel on the screen, using the pixel’s screen coordinates to compute a color. The core learning arc here is moving from confusion about how pixels get their inputs (and how “blur” works) to a working mental model of GPU execution: thousands to millions of tiny tasks run in parallel, each one largely independent, which is why shader code is written as functions of position and time rather than step-by-step logic.
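That mental model can be sketched outside of GLSL. The following is a minimal Python sketch (the function names `fragment_shader` and `render` are illustrative, not part of any API): one pure function of pixel coordinates, invoked independently for every pixel, with no shared mutable state between invocations.

```python
def fragment_shader(x, y, width, height):
    """Return an (r, g, b) color for the pixel at (x, y).

    Like a GLSL fragment shader, this depends only on the pixel's
    coordinates and constants -- never on neighboring pixels.
    """
    r = x / (width - 1)   # red grows left-to-right
    g = y / (height - 1)  # green grows bottom-to-top
    b = 0.5               # constant blue channel
    return (r, g, b)

def render(width, height):
    # A real GPU runs these invocations in parallel; a loop is the
    # sequential stand-in, and the results are identical because no
    # invocation reads another's output.
    return [[fragment_shader(x, y, width, height) for x in range(width)]
            for y in range(height)]

image = render(4, 3)
```

Because each call is independent, the loop order doesn't matter, which is exactly the property that lets a GPU execute the real thing in parallel.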
The transcript starts with a personal history of failing to grasp shader math from a book example years earlier—light swinging to create shadows and penumbras. That earlier failure becomes the motivation for a gentler rebuild: what a shader is, how it differs from CPU-style sequential programs, and why GPUs are built for parallel throughput. A CPU metaphor of “pipes” and “threads” leads into the practical reality: every frame requires computing color for hundreds of thousands of pixels (e.g., 800×600 is 480,000 pixels per frame), and high resolutions multiply that workload dramatically. This is the reason shaders feel “fast” when they’re written in the right style: the GPU architecture is designed to keep many small operations busy at once.
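The workload arithmetic from the 800×600 example is worth making explicit; at a typical 60 fps, the per-frame count multiplies into tens of millions of fragment invocations per second:

```python
# Back-of-the-envelope fragment workload from the 800x600 example.
width, height = 800, 600
pixels_per_frame = width * height            # 480,000 invocations per frame
fps = 60
pixels_per_second = pixels_per_frame * fps   # 28,800,000 invocations per second
```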
From there, the learning shifts into concrete WebGL/GLSL mechanics. The transcript introduces GLSL basics (vertex vs. fragment shaders), built-in variables like gl_FragColor and gl_FragCoord, and the importance of precision qualifiers (e.g., `precision mediump float;`). It also emphasizes that shader debugging is awkward: there is no ordinary console logging, and mistakes often surface as a white screen or a compilation error. The practical workaround is iterative experimentation—changing one value at a time, forcing strong colors, and reading the shader info log when available.
A major milestone is getting a “hello world” style output: cornflower blue via a minimal WebGL pipeline (create context, compile shaders, link a program, draw). Then comes the real conceptual payoff: uniforms. Uniforms are read-only inputs set by the host (JavaScript) that remain constant across all fragment invocations during a draw call. The transcript uses uniforms like u_time, u_resolution, and u_mouse to drive animation and interaction, explaining how the same shader code can produce different results depending on those global inputs.
The transcript then dives into coordinate spaces and why results look “wrong” until the right space is used. It distinguishes vertex coordinates, fragment coordinates, and normalized device coordinates, and repeatedly corrects assumptions about where (0,0) lies and what range gl_FragCoord uses. Once coordinates are correct, the shader becomes a playground for math-driven visuals: distance fields (distance from the mouse), gradients, and time-based sine waves.
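Assuming the usual convention that both gl_FragCoord and the mouse uniform arrive in window/pixel coordinates, the normalization step looks like this Python sketch (`normalize` and `distance_to_mouse` are hypothetical helper names):

```python
import math

def normalize(coord, resolution):
    """Map window/pixel coordinates to the [0, 1] range.
    (In WebGL, gl_FragCoord's origin is the bottom-left corner.)"""
    return (coord[0] / resolution[0], coord[1] / resolution[1])

def distance_to_mouse(frag_coord, u_mouse, u_resolution):
    # Normalize both points first so the distance field looks the
    # same regardless of the canvas resolution.
    fx, fy = normalize(frag_coord, u_resolution)
    mx, my = normalize(u_mouse, u_resolution)
    return math.hypot(fx - mx, fy - my)

d = distance_to_mouse((800, 600), (400, 300), (800, 600))
```

Skipping the normalization is a classic source of "it looks wrong": distances in raw pixel units make the effect change size whenever the canvas does.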
Finally, the transcript explores shaping functions—especially smoothstep—as a way to draw crisp lines and controlled transitions without branching. The “fence” exercise demonstrates how smoothstep creates an S-curve transition around a threshold, producing a thin green line where a condition like y≈x holds, while the rest of the screen stays mostly unchanged. The session ends with the broader takeaway: shaders are less about “debugging a program” and more about mastering math-to-image transformations, using parallelism-friendly functions of position, time, and uniforms to build increasingly complex visuals.
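The fence trick can be reproduced in Python with a GLSL-style smoothstep (the `line_mask` helper and its `thickness` parameter are illustrative, not from the transcript): two smoothsteps offset around the threshold subtract to a thin band where y≈x and cancel to zero everywhere else.

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, and the
    Hermite S-curve 3t^2 - 2t^3 in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def line_mask(x, y, thickness=0.01):
    # "Fence" trick: the first smoothstep turns on just below y = x,
    # the second turns on just above it; their difference is nonzero
    # only in a band of width ~2*thickness around the line.
    return smoothstep(x - thickness, x, y) - smoothstep(x, x + thickness, y)

on_line = line_mask(0.5, 0.5)    # fragment on the y = x line
off_line = line_mask(0.5, 0.9)   # fragment far from the line
```

In the shader, `line_mask` would multiply the green channel, producing the thin green line while leaving the rest of the screen untouched, with no `if` in sight.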
Cornell Notes
The transcript builds a working understanding of shaders by treating them as parallel programs that compute a color for every pixel. It contrasts CPU-style sequential thinking with GPU execution, where each fragment shader invocation is independent and driven by inputs like position, time, and resolution. The session focuses on WebGL/GLSL fundamentals: vertex vs. fragment shaders, built-in variables (like gl_FragCoord), and especially uniforms, which are read-only values set by JavaScript and constant across a draw call. It also highlights shader debugging realities (white screens, compilation errors, limited logging) and uses iterative experiments to learn. Finally, it demonstrates how smoothstep and other shaping functions turn simple math (distance, thresholds, sine waves) into crisp lines and smooth transitions.
- Why does shader code often look like "a function of position" rather than step-by-step logic?
- What are uniforms in GLSL, and why are they crucial for interaction and animation?
- How do coordinate spaces cause "it looks wrong" moments in shader work?
- What does smoothstep do, and why does it create a thin line instead of coloring the whole screen?
- Why is shader debugging harder than typical CPU debugging?
- How do time and trig functions (sin/cos) become animation tools in shaders?
Review Questions
- When would you use a uniform instead of a varying, and what does each imply about whether values can differ per fragment?
- Explain how smoothstep(edge0, edge1, x) determines whether a fragment is “on” or “off” in the fence/line example.
- If gl_FragCoord is in pixel/window coordinates, what steps are needed to compute a distance to the mouse in a normalized coordinate system?
Key Points
1. Shaders compute per-pixel colors in parallel, so effects are expressed as math over coordinates and uniforms rather than sequential state updates.
2. GPUs are optimized for many independent threads; fragment shader invocations generally can’t depend on each other during a draw call.
3. WebGL/GLSL learning is easiest by building a minimal pipeline first (compile/link shaders, draw), then iteratively adding uniforms and math.
4. Uniforms (e.g., u_time, u_resolution, u_mouse) are read-only values set by the host and constant across all fragments in a draw call.
5. Correct coordinate-space handling is essential: gl_FragCoord is window/pixel space, so normalization with u_resolution is often required for consistent gradients and distance fields.
6. Shader debugging is mostly visual and compile-error driven; forcing strong colors and changing one variable at a time is a practical strategy.
7. Shaping functions like smoothstep turn threshold-like conditions into smooth, controllable transitions, enabling thin lines and crisp bands without branching.