Don't Clean Code w/ Creator of HTMX
Based on ThePrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Carson Gross argues that clean-code rules—especially “always keep methods small”—are often treated as universal laws without strong empirical support.
Briefing
Carson Gross’s “Coding Dirty” pitch challenges the software industry’s default worship of “clean code” rules—especially the idea that small functions, heavy unit testing, and aggressive abstraction are always superior. Gross argues that many clean-code prescriptions are treated like universal laws without solid empirical support, and that real-world maintainability often improves when developers allow larger functions, test at the right level, and avoid abstraction that adds cognitive cost without delivering concrete benefits.
Gross’s first major target is the “keep methods tiny” doctrine. He points to research summarized in “Coding Dirty” and in later follow-up work suggesting that longer methods can correlate with better quality metrics—such as fewer bugs per line of code—and that the commonly cited “short is better” guidance often lacks rigorous, example-driven justification. He also offers practical reasons: large functions can be easier to read top to bottom, easier to debug because the full context stays in one place, and sometimes even safer to change because there’s less risk of “leaking” behavior through many small helper calls. In his view, splitting logic into dozens of micro-functions can create an “inversion of control” maze—strategy patterns and indirection that force developers to hop across implementations, losing the original context and making step debugging feel like whack-a-mole.
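The readability trade-off can be sketched with a small hypothetical example (the pricing logic and all names here are illustrative, not from the episode): the fragmented version scatters a three-step flow across four definitions, while the inline version keeps the whole flow in one frame that reads top to bottom.

```python
# Hypothetical order-pricing logic, written two ways.

# Fragmented version: each step hides behind a helper, so following the
# flow means hopping across four definitions.
def _base(items):
    return sum(price for _, price in items)

def _discount(total, code):
    return total * 0.9 if code == "SAVE10" else total

def _tax(total):
    return total * 1.08

def price_fragmented(items, code):
    return _tax(_discount(_base(items), code))

# Single-function version: the same flow reads top to bottom, and a
# step debugger stays in one stack frame the whole time.
def price_inline(items, code):
    total = sum(price for _, price in items)  # base price
    if code == "SAVE10":                      # promotional discount
        total *= 0.9
    total *= 1.08                             # sales tax
    return total

items = [("widget", 100.0), ("gadget", 50.0)]
assert price_fragmented(items, "SAVE10") == price_inline(items, "SAVE10")
```

Neither shape is wrong; the point is that the decomposition on top has a real reading cost, which only pays off if the helpers are reused or genuinely tame complexity.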
That doesn’t mean decomposition is useless. Gross draws a line between extracting code for reuse or to manage complexity, and fragmenting code for its own sake. Names and abstractions help humans manage complexity, but they don’t magically preserve truth. A function name inevitably compresses details—edge cases, omitted behavior, and hidden assumptions—so developers can end up with a false sense of what the code actually does. The remedy isn’t to ban small functions or classes; it’s to treat them as tools with costs, not as free correctness.
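The name-compression point can be illustrated with a hypothetical helper (`parse_age` is invented for this sketch, not taken from the episode): the name reads as total and safe at the call site, yet it hides two behaviors a caller could easily assume away.

```python
def parse_age(text):
    """Reads as obvious at the call site, but the name can't encode
    two hidden rules: bad input becomes None, and negatives are rejected."""
    try:
        age = int(text.strip())
    except ValueError:
        return None                      # hidden behavior: no exception, just None
    return age if age >= 0 else None     # hidden rule: negative ages rejected

assert parse_age(" 42 ") == 42
assert parse_age("-5") is None   # a surprise for anyone trusting only the name
assert parse_age("old") is None
```

A caller who trusts the name and skips the body may handle an `int` that never arrives, which is exactly the “false sense of what the code actually does” Gross describes.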
Testing becomes the second battleground. Gross says he’s not anti-testing, but skeptical of unit tests as a development-driving ideology in the original TDD sense. His preference is situational: during early feature work, developers often don’t know the right internal structure yet, so exhaustive testing should focus on the API-level “cut point” and integration behavior rather than micro-testing every internal function. Unit tests can still make sense when extending an existing system that already has a test harness. He also warns against test-suite bloat: too many end-to-end tests become non-deterministic, get ignored, and can slow refactors. The goal is a cost-benefit balance—tests are only valuable if they reduce production bugs, prevent catastrophic failures, or speed development more than they consume engineering time.
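Testing at the API-level “cut point” can be sketched as follows (the `slugify` function is a hypothetical stand-in, not from the episode): assertions target the public function’s observable behavior, so the internals remain free to change while the structure is still settling.

```python
def slugify(title):
    # Internal steps may be restructured freely: split into helpers,
    # inlined again, rewritten with a regex - the test below won't care.
    words = [w.lower() for w in title.split() if w.isalnum()]
    return "-".join(words)

# API-level test: pinned to behavior, so it survives internal refactors.
assert slugify("Hello Brave World") == "hello-brave-world"

# By contrast, a micro-test pinned to an internal helper would break the
# moment that helper is renamed or inlined, even with behavior unchanged.
```

This is the cost-benefit framing in miniature: the behavioral test keeps paying during refactors, while micro-tests of internals would have to be rewritten alongside every structural change.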
The conversation closes by reinforcing the theme: coding guidance should be applied with judgment, not ideology. Gross frames “Coding Dirty” as a counterweight to clean-code dogma—arguing for “beautiful at any size,” for abstraction when it earns its keep, and for testing that tracks real business outcomes rather than brittle internal state. Alongside the technical debate, the episode also turns into a meme-and-standup detour, including a discussion of why a carefully crafted meme about “getting lit on a Monday night” failed to land—an echo of the episode’s broader message that rules and assumptions don’t always survive contact with reality.
Cornell Notes
Carson Gross’s “Coding Dirty” challenges clean-code commandments by arguing that many rules (like “always use small functions” and “drive development with unit tests”) are applied too ideologically. He cites research suggesting longer methods can be higher quality and explains why big functions can be easier to read, debug, and change—especially when small-function decomposition creates indirection and context loss. On testing, he favors exhaustive checks at the right level: API/integration tests during early design, and targeted unit tests when extending well-covered systems. The throughline is cost-benefit thinking: abstraction and tests help only when they reduce real risk or speed change, not when they add cognitive load or lock teams into brittle expectations.
Why does Gross defend larger functions when “clean code” often demands short methods?
What’s the risk of small functions and heavy decomposition, according to Gross?
How does Gross distinguish when unit tests are useful versus when they’re counterproductive?
What does Gross mean by “abstraction isn’t cost-free”?
Why does Gross say test assertions can become brittle when they focus on internal state?
Review Questions
- Which quality metrics and study findings does Gross cite to challenge the “short methods are always better” rule?
- How does Gross’s argument about function names relate to his critique of abstraction and decomposition?
- What criteria should determine whether to add unit tests, integration tests, or end-to-end tests in a refactor-heavy project?
Key Points
1. Carson Gross argues that clean-code rules—especially “always keep methods small”—are often treated as universal laws without strong empirical support.
2. Research cited in “Coding Dirty” and related work suggests longer methods can correlate with higher quality metrics, including fewer bugs per line of code.
3. Large functions can improve readability and debugging by preserving full context, and they may reduce change risk by limiting indirection and “leakage” across many call sites.
4. Function and class names compress information; even well-chosen names can mislead because they can’t encode every edge case and omitted behavior.
5. Gross favors situational testing: exhaustively test API/integration behavior early, use unit tests when extending well-instrumented systems, and avoid test-suite bloat.
6. End-to-end tests must be maintained and focused; too many become non-deterministic, get ignored, and slow refactors.
7. The guiding principle is cost-benefit: abstraction and tests are only “good engineering” when they reduce real risk or speed change more than they consume time.