90% Of My Code Is Generated By LLMs
Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
LLMs can generate a very large share of code, but the main danger is reduced practice in debugging and system understanding.
Briefing
Large language models can generate the bulk of a developer’s output—sometimes “up to 90%” of code—but the real risk isn’t whether AI is smart enough. The risk is whether developers stop practicing engineering fundamentals and turn into people who mainly “duct tape” together working snippets instead of learning how systems behave when things break.
The discussion starts with a practical claim: after using tools like GitHub Copilot and experimenting with other AI-assisted workflows, the author reports that most of their project code now comes from LLMs. That shift changes how software gets built, especially when an LLM is integrated into everyday work—through automation workflows, custom backend apps, or AI chat interfaces that can execute tasks across a laptop and phone. The emphasis is on availability: LLMs need to be reachable at the moment of work, not locked behind a browser tab.
From there, the argument pivots to limits and workflow design. LLMs have restricted reasoning, incomplete or outdated knowledge, and can’t reliably handle tasks that humans find obvious. Even so, the speaker treats the intelligence debate—whether LLMs are “intelligent” or only have “primitive reasoning”—as less important than job performance: if the tool helps ship software, that’s what matters. But when failures happen—like a confusing React bug or a multi-layer abstraction stack collapsing—developers eventually care about why things went wrong, not just that a patch appeared.
A major theme is that customization and testing matter. Off-the-shelf generations often miss requirements, so the workflow should include system instructions, tailored behaviors, and prompt testing. Tools such as Promptfoo are cited as a way to validate prompts and configurations. Editor experiences are also compared: Cursor’s natural-language code editing is criticized as too vague for precise engineering changes, whereas approaches that specify “what to change” and let the system identify the affected files (like Aider) are framed as more reliable.
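To make the customization-and-testing point concrete, here is a minimal sketch in Python of the underlying idea: a fixed set of prompt cases run against a system instruction, with simple assertions on each output. This is not Promptfoo’s actual API or configuration format; `call_llm`, the example cases, and the checks are hypothetical stand-ins you would replace with your own provider client and requirements.

```python
# Minimal prompt/config regression-test sketch (in the spirit of tools like
# Promptfoo, but NOT its API). `call_llm` is a hypothetical stand-in.

from dataclasses import dataclass, field
from typing import Callable

# Example system instructions you would tailor to your project.
SYSTEM_INSTRUCTIONS = (
    "You are a code assistant for this repository. "
    "Prefer small diffs, name the files you touch, and never invent APIs."
)

@dataclass
class PromptCase:
    name: str
    user_prompt: str
    # Each check takes the model output and returns True on pass.
    checks: list[Callable[[str], bool]] = field(default_factory=list)

def call_llm(system: str, user: str) -> str:
    """Hypothetical LLM call; wire this to whatever client your stack uses."""
    raise NotImplementedError("replace with your provider's client")

# Hypothetical example cases; real ones would encode your actual requirements.
CASES = [
    PromptCase(
        name="does not invent CLI flags",
        user_prompt="Add a --turbo flag to our CLI config.",
        checks=[lambda out: "not sure" in out.lower() or "--turbo" not in out],
    ),
    PromptCase(
        name="names the affected files",
        user_prompt="Rename the retry helper used by the HTTP client.",
        checks=[lambda out: ".py" in out or ".ts" in out],
    ),
]

def run_suite(cases: list[PromptCase]) -> bool:
    all_passed = True
    for case in cases:
        output = call_llm(SYSTEM_INSTRUCTIONS, case.user_prompt)
        passed = all(check(output) for check in case.checks)
        all_passed = all_passed and passed
        print(f"[{'PASS' if passed else 'FAIL'}] {case.name}")
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if run_suite(CASES) else 1)
```

Run as part of CI or before changing system instructions, so a prompt or configuration tweak that regresses expected behavior fails loudly instead of slipping into generated code.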
The most pointed warning is about learning and career direction. If AI handles most routine coding, developers may stop building the intuition that comes from debugging, reading documentation deeply, and understanding tradeoffs. The speaker argues that “10x” productivity can become a trap: it feels like progress while skills atrophy. Instead of speculating about whether AI will replace programmers, the advice is to focus on what can be controlled—using AI to accelerate toward the work a developer actually wants to do, while still studying core areas (the example given is compilers) and maintaining the discipline of becoming a real engineer.
The closing message is blunt: don’t outsource thinking to the point of waiting on tools. When a simple loop or fix is within reach, write it yourself rather than staring at an AI cursor hoping for a response. AI should amplify engineering practice, not replace it.
Cornell Notes
The core idea is that LLMs can generate most of a developer’s code output, but the bigger danger is skill erosion: relying on AI too heavily can reduce debugging practice and understanding of how systems work. The speaker treats debates about whether LLMs are “intelligent” as secondary to whether they help ship working software and handle failures. Effective use depends on workflow choices—keeping LLMs available during work, customizing system instructions, and testing prompts/configurations rather than accepting generic outputs. The advice is to use AI to accelerate learning and pursue long-term engineering goals, while still writing and reasoning through problems directly when tools stall or answers are too vague.
- Why does the “90% of code from LLMs” claim matter beyond productivity bragging rights?
- What limits of LLMs are treated as practical constraints in day-to-day coding?
- How does the workflow shift from “use an LLM” to “engineer with an LLM”?
- Why is natural-language editing viewed as risky compared with more constrained change descriptions?
- What’s the stance on whether AI will take jobs or whether LLMs are a scam?
- What does “don’t let AI lead your career” look like in concrete behavior?
Review Questions
- If an LLM generates most of your code, what specific failure scenarios make understanding the underlying system unavoidable?
- Which parts of an AI-assisted workflow should be customized and tested, and why does generic output often fall short?
- How would you apply the advice “don’t let AI lead your career” to a personal learning plan over the next 6–12 months?
Key Points
1. LLMs can generate a very large share of code, but the main danger is reduced practice in debugging and system understanding.
2. LLM intelligence debates are treated as less important than whether the tool reliably helps complete real work and recover from failures.
3. Effective AI coding requires customization (system instructions/behaviors) and validation (prompt/config testing), not blind acceptance of generated text.
4. Natural-language code editing can be too vague for precise engineering changes; constrained change descriptions with file-level edits are often more dependable.
5. Developers should focus on controllable actions—how they use AI and what they learn next—rather than predicting whether AI will replace jobs.
6. AI productivity gains can become a trap if they replace learning; developers should keep pursuing deeper engineering goals (e.g., compilers).
7. When AI stalls on simple tasks, writing the code directly prevents passive dependence and preserves problem-solving skills.