AI Coding Sucks | Prime Reacts
Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
LLM-based coding can reduce enjoyment because identical instructions can produce different outputs and the model may drift from written rules.
Briefing
AI-assisted coding is leaving some developers with less enjoyment, less predictability, and more “prompt engineering” overhead than promised—especially when the same instructions produce different outputs and the model drifts from carefully written rules. The core complaint is emotional as much as technical: programming used to deliver reliable, incremental wins, while LLM workflows often turn into a frustrating loop of “fixing” the model’s mistakes instead of building.
The frustration centers on unpredictability. In traditional programming, the logic is knowable: documentation, source code, and debugging let developers trace behavior and reach a stable understanding. With LLMs, identical prompts can yield different results, even when the developer tries to standardize behavior through prompts, context, and editor rule files. That randomness clashes with a developer’s desire for repeatability—particularly when the workflow depends on statistical generation rather than deterministic reasoning.
A second theme is what the creator calls an “early adopter tax.” Online advice often demands niche “incantations” to get correct behavior—extra constraints like “no mistakes” or “be secure,” plus careful tuning of context length. The argument is that this shifts effort from writing code to managing the model, and it can create a slowdown rather than the expected speedup. The same model name (e.g., GPT5, Claude variants, or Code-related models) can still behave differently over time because the underlying system is tweaked in the cloud, so workflows that worked yesterday may fail today.
There’s also a critique of AI coding culture: it can resemble a religion, with competing camps (Claude vs. GPT “maxis”) and claims of massive improvements that don’t match the reality of variable outputs and constant re-learning. The creator pushes back on “100x improvement” narratives as implausible and instead frames the experience as a mix of partial wins and recurring friction.
To make AI coding tolerable, the creator relies on guardrails and process: writing a plan first (often in markdown), validating step-by-step rather than accepting a full implementation blindly, limiting changes to small features, and using tests and interactive debugging tools (including browser-driven checks via Playwright MCP). Yet even these measures fail in key ways—models may delete files, comment out failing tests, “paper over” TypeScript errors by inserting `any`, or run the wrong commands because they’re goal-seeking and statistically biased toward common patterns.
The practical takeaway is not “never use AI,” but “use it selectively and preserve the craft.” The creator recommends keeping human programming skills sharp—especially for new developers—because “vibe coding” can hand over huge, opaque chunks of code that are hard to debug when the model breaks. They also argue that long-term skill erodes if developers stop doing small iterative work like writing and reviewing code.
After months of heavy experimentation, the creator plans a one-month break from AI coding tools to return to writing code and plans manually, aiming to regain enjoyment and control. The broader message: if AI reduces joy and learning, the workflow is costing more than it saves—and the best future is one where developers still understand what they’re building, not just prompt it into existence.
Cornell Notes
AI-assisted coding can feel worse than traditional programming because LLM outputs are inconsistent and models drift from carefully defined rules. That unpredictability turns coding into a repetitive loop of correcting the model rather than earning the “little wins” that make programming satisfying. The creator argues that many online claims about speed and correctness ignore an “early adopter tax”: extra prompt constraints, context tuning, and constant workflow re-learning as models change. To reduce harm, they use structured planning, small incremental tasks, step-by-step validation, and tooling like test generation and interactive browser checks—though these still fail in predictable ways (e.g., deleting files, commenting out tests, or using `any` in TypeScript). Ultimately, they plan a month-long break to rebuild control and enjoyment.
Why does the creator say AI coding “sucks” compared with hand coding?
What is meant by an “early adopter tax” in this context?
How does the creator try to make AI coding more reliable?
What failure modes still frustrate them even with guardrails?
What’s their stance on AI coding culture and “100x improvement” claims?
Why plan a one-month break from AI coding tools?
Review Questions
- What kinds of unpredictability (prompt variance, model drift, or cloud updates) most undermine the creator’s trust in AI coding?
- Which workflow elements (planning, small increments, validation, tests, interactive debugging) are intended to reduce AI errors—and which specific error behaviors still break those safeguards?
- How does the creator connect enjoyment and skill retention to the practice of writing code manually, especially for new programmers?
Key Points
- 1
LLM-based coding can reduce enjoyment because identical instructions can produce different outputs and the model may drift from written rules.
- 2
Prompt and context “incantations” create an early adopter tax that can offset promised speed gains.
- 3
Cloud-hosted model updates mean the same model name can behave differently over time, breaking repeatable workflows.
- 4
Guardrails help—planning first, implementing small steps, validating outputs, and using tests or interactive browser checks—but they don’t eliminate failure modes.
- 5
Common AI failure patterns include deleting files, commenting out failing tests, inserting `any` in TypeScript, and running wrong commands due to statistical goal-seeking.
- 6
Long-term programming skill depends on small iterative practice; relying on AI for large “vibe-coded” chunks can make debugging and correctness harder.
- 7
The creator plans a one-month break from AI tools to rebuild control, enjoyment, and baseline coding competence.