
how to never write bug

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Most bugs fall into a predictable blame ladder: your code first, then teammates, then third-party libraries, then language/compiler edge cases, and occasionally hardware quirks.

Briefing

The fastest way to “never write bug” is to stop treating bugs like personal failures and instead follow a repeatable workflow: read the evidence, narrow the cause, and use tools that reduce guesswork. Most failures trace back to your own code, but the transcript lays out a realistic blame ladder—team mistakes, shady third-party npm libraries, the programming language or compiler, and even hardware edge cases—so debugging becomes a process of elimination rather than rage.

The first universal habit is to slow down and read. Programmers often assume what a function does from its name, then get surprised when it behaves differently; the fix is to consult documentation before changing code. Equally important is learning to interpret error messages instead of “tweaking until it works.” The transcript recommends using stack traces to identify where things broke, then searching with Google in a targeted way: strip out project-specific noise, keep the core keywords, and optionally prepend the language or framework name to narrow results. When search and reading hit a wall, the practical fallback is logging—printing variables, counters, timings, and structured data—using the full logger/console API rather than spamming raw output.
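The "full console API" the transcript alludes to goes well beyond bare `console.log`. As a minimal sketch (the data and function here are our own illustration, not from the video), counters, timers, and structured views can be combined like this:

```typescript
// Structured logging with the console API, beyond bare console.log.
const users = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Linus" },
];

function findUser(name: string) {
  console.count("findUser calls");   // running counter of how often this runs
  console.time("findUser");          // start a labeled timer
  const user = users.find((u) => u.name === name);
  console.timeEnd("findUser");       // print elapsed time for the lookup
  return user;
}

console.table(users);                          // tabular view of structured data
console.dir(findUser("Ada"), { depth: null }); // hierarchical object view
```

Each of these prints evidence (counts, timings, tables) rather than raw values, which makes logs far easier to read back later.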

As projects grow, manual logging becomes harder to manage, so the workflow shifts to debuggers. The transcript highlights VS Code’s debugger capabilities: stepping through call stacks to pinpoint the exact moment a bug occurs, pausing execution with breakpoints (without modifying code), and using log points to avoid cluttering the codebase with permanent console statements.
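For Node projects, attaching the VS Code debugger only needs a small launch configuration. A minimal sketch (field names follow VS Code's documented `launch.json` schema; the program path is a placeholder for your own entry point):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug app",
      "program": "${workspaceFolder}/src/index.js"
    }
  ]
}
```

With this in `.vscode/launch.json`, breakpoints and log points are set from the editor gutter, so the source itself stays untouched.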

Some bugs resist direct fixes, so the next step is reproduction. The advice is to reduce the problem to a minimal failing example—sometimes by creating a fresh project whose only goal is to trigger the bug—and to reproduce it in a shareable environment like StackBlitz. Reliable reproduction speeds up diagnosis and creates a concrete artifact other developers can use.
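A minimal reproduction can be a single file whose only job is to trigger the failure. As an illustrative sketch (this particular bug is our example, not one from the video), consider a classic JavaScript surprise, numbers sorted in the wrong order:

```typescript
// repro.ts — a fresh, minimal project whose only goal is to trigger the bug.
// Suspected bug: numbers come back in the wrong order after sorting.
const input = [10, 1, 3];

// Reproduces: Array.prototype.sort compares elements as strings by default,
// so [10, 1, 3] sorts to [1, 10, 3].
const broken = [...input].sort();

// Root cause confirmed; the fix is an explicit numeric comparator.
const fixed = [...input].sort((a, b) => a - b);

console.log({ broken, fixed });
```

Because the file is self-contained, it can be pasted into a StackBlitz project and run by anyone instantly.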

When a bug appears after everything used to work, it’s a regression, and the transcript argues that automated testing is the antidote. Test-driven development helps by forcing explicit expectations, teaching what the code actually does, and preventing the same bug from reappearing after future changes. For catching issues before runtime, static analysis is recommended: TypeScript adds a type system that blocks many categories of errors at compile time, while ESLint can deliver similar quality checks with less investment.
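A regression test pins the expected behavior down so an old bug cannot silently return. As a minimal sketch (`sortNumbers` and `assertEqual` are hypothetical helpers standing in for your fixed code and your test framework's assertion):

```typescript
// The code under test: a fix for a numeric-sort bug.
function sortNumbers(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b);
}

// A tiny stand-in for a test framework's assertion.
function assertEqual<T>(actual: T, expected: T, label: string): void {
  if (JSON.stringify(actual) !== JSON.stringify(expected)) {
    throw new Error(`${label}: got ${JSON.stringify(actual)}`);
  }
}

// Explicit expectations double as documentation of what the code does,
// and they fail loudly if a future change reintroduces the bug.
assertEqual(sortNumbers([10, 1, 3]), [1, 3, 10], "numeric order");
assertEqual(sortNumbers([]), [], "empty input");
```

Writing the expectation first, TDD-style, also exposes mistaken assumptions before any fix is attempted.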

Finally, the transcript offers a human reset: if debugging stalls, take a break. The underlying message is that “never write bug” is impossible, but disciplined debugging—reading, searching, logging, debugging, reproducing, testing, and static analysis—turns failures into manageable, solvable problems.

Cornell Notes

Bugs are unavoidable, but the transcript argues they can be handled systematically by narrowing causes and reducing guesswork. It lays out a practical debugging ladder: start with documentation and error messages, then use targeted Google searches, logging, and IDE debuggers (notably VS Code) to locate the failure precisely. For hard-to-fix issues, it recommends creating a minimal, reliably reproducible example—ideally shareable via StackBlitz. To prevent future regressions, it emphasizes automated testing (including test-driven development) and static analysis using TypeScript or ESLint. The workflow matters because it turns debugging from trial-and-error into evidence-driven engineering.

Why does the transcript insist that reading documentation and error messages comes before “changing code until it works”?

It points out that many failures come from incorrect assumptions—like treating a function name as a guarantee of behavior. Reading documentation prevents surprises such as a “get random number” function returning a predetermined value. For errors, the transcript warns that cryptic messages shouldn’t be ignored; instead, developers should learn to use stack traces to identify the exact location and call path where things broke, which removes guesswork.
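Stack traces name the exact function and call path where things broke, which is what removes the guesswork. A minimal sketch (the functions are our own illustration):

```typescript
// Reading the stack trace instead of guessing: the top frames name the
// exact function and call path where the failure happened.
function loadConfig(): never {
  throw new Error("config file missing");
}

function startApp(): void {
  loadConfig();
}

try {
  startApp();
} catch (e) {
  const err = e as Error;
  // err.stack lists frames innermost-first, roughly:
  //   Error: config file missing
  //       at loadConfig (...)
  //       at startApp (...)
  console.error(err.stack);
}
```

Reading top-down: the message says what went wrong, the first frame says where, and the frames below it say how execution got there.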

How should someone use Google effectively when debugging an error?

The advice is to avoid pasting the entire error plus full stack trace verbatim. Instead, parse the error and remove project-specific details so search results match the underlying issue. If the remaining keywords are generic, prepend the language or framework name to narrow results and reduce irrelevant matches.
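That cleanup step can itself be mechanized. A hypothetical helper (`toSearchQuery` is our own sketch, not something from the video) that keeps the core error message, drops the project-specific call site, and optionally prepends a framework keyword:

```typescript
// Hypothetical helper: turn a raw error line into a cleaner search query.
function toSearchQuery(rawError: string, framework?: string): string {
  const message = rawError
    .split("\n")[0]      // drop the stack frames below the message
    .split(" at ")[0];   // drop the project-specific call site
  const cleaned = message.replace(/\s+/g, " ").trim();
  // Prepend a language/framework keyword to narrow generic results.
  return framework ? `${framework} ${cleaned}` : cleaned;
}

const raw =
  "TypeError: Cannot read properties of undefined (reading 'map') at src/app/page.ts:42:13";
console.log(toSearchQuery(raw, "react"));
```

The surviving keywords describe the underlying issue, so results match other people's reports rather than your project's file paths.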

What’s the difference between logging and using a debugger, and when should each be used?

Logging is a manual inspection technique: printing variables, counters (e.g., `console.count`), timings (e.g., `console.time`), and formatted structures (e.g., tables via `console.table` or hierarchical views via `console.dir`). It works, but it becomes harder to manage as codebases grow. A debugger scales better because it lets developers pause execution and step through frames in the call stack, pinpointing the moment the bug occurs. In VS Code, breakpoints and log points can be added without permanently polluting the code with console statements.

Why does the transcript recommend building a minimal reproduction, and what makes it especially valuable?

Some bugs are difficult to fix directly, so the goal becomes reproducing the failure with the smallest possible setup. Creating a minimal project that triggers the bug increases the chance of finding the root cause and produces a shareable artifact. If the issue can be reproduced in a browser environment like StackBlitz, other developers can run it instantly and help more effectively.

How do automated testing and static analysis prevent bugs from recurring?

For regressions (bugs that surface in code that previously worked), automated testing provides confidence that changes don’t break existing behavior. Test-driven development helps by forcing explicit expectations and often reveals misunderstandings about what the code actually does. Static analysis adds another layer: TypeScript blocks many type-related mistakes before runtime, while ESLint can improve code quality with less overhead.

What does “Heisenbug” mean in the context of debugging, and why does it matter?

The transcript uses “Heisenbug” (a play on the Heisenberg uncertainty principle) to describe bugs that can’t be reliably reproduced: issues that seem to change when observed, or that only appear under rare conditions. It cites a Crash Bandicoot story where a load/save timeout was eventually reproduced by wiggling a PS1 controller, illustrating how hardware or timing quirks can be the real cause when reproduction is otherwise impossible.

Review Questions

  1. What specific steps should be taken to turn an error message into a targeted search query rather than a full paste into Google?
  2. How do breakpoints and log points in VS Code reduce the need for permanent console logging?
  3. What combination of TDD and static analysis (TypeScript or ESLint) best addresses regressions and type-related mistakes, and why?

Key Points

  1. Most bugs fall into a predictable blame ladder: your code first, then teammates, then third-party libraries, then language/compiler edge cases, and occasionally hardware quirks.
  2. Start debugging by reading documentation and interpreting stack traces instead of guessing based on function names or “tweaking until it works.”
  3. Search errors with Google by stripping project-specific details and adding language/framework keywords when results are too broad.
  4. Use logging for quick evidence (counters, timers, structured output), but switch to debuggers as complexity grows to pinpoint the exact failing moment.
  5. When a bug is hard to fix, build a minimal, reliably reproducible example and share it (e.g., via StackBlitz) to speed up collaboration.
  6. Prevent regressions with automated testing and test-driven development so changes don’t silently reintroduce old failures.
  7. Catch categories of mistakes early with static analysis: TypeScript for type safety or ESLint for lightweight code-quality enforcement.

Highlights

A practical debugging workflow starts with evidence: read docs, parse error messages, and use stack traces before changing code blindly.
Google searches work best when the query is cleaned—remove unique project noise and keep the core keywords, optionally adding the language/framework name.
VS Code debugging can replace “console everywhere” by using breakpoints and log points to inspect call stacks without code clutter.
Minimal reproduction plus shareable environments like StackBlitz can turn an unsolved bug into a solvable, collaborative one.
Automated testing (especially TDD) and static analysis (TypeScript or ESLint) shift debugging from reactive firefighting to prevention.
