I’m serious.
Based on Theo (t3.gg)'s video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
Closed-source software is increasingly “slopifying” the tools people rely on—breaking performance, removing control, and accelerating regressions—so open-source access is becoming a practical necessity rather than an ideology. The core complaint isn’t just that source code is unavailable; it’s that modern AI-era development cycles let vendors change things faster, meaning users can’t patch problems themselves and can’t reliably predict whether updates will improve or degrade their workflows.
A key turning point comes from an intern, Yash, who built new features for T3 Chat despite T3 Chat being closed source. Instead of waiting for upstream changes, he reverse-engineered the bundled JavaScript, injected additional packages, and used patch-style workflows to extend functionality—such as adding server-side AI SDK components to enable local model image generation. The most consequential lesson wasn’t the hack itself; it was Yash’s mindset. He didn’t treat boundaries between “where code lives” as sacred. When something didn’t work, he fixed it—often by patching dependencies or filing pull requests across projects—rather than building workarounds that would compound long-term friction.
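The patch-style workflow described above can be sketched with plain `diff` and `patch`, which is essentially what `patch-package` automates for `node_modules` (edit the installed copy, capture the edit as a patch file, reapply it after every reinstall). The package name `some-sdk` and the file contents here are illustrative assumptions, not details from the video.

```shell
set -e
mkdir -p node_modules/some-sdk patches

# Vendored original, as a fresh install would lay it down.
printf 'export const maxImages = 1;\n' > node_modules/some-sdk/index.js
cp node_modules/some-sdk/index.js /tmp/index.js.orig   # pristine copy for the diff

# Local fix applied directly to the installed dependency.
printf 'export const maxImages = 4;\n' > node_modules/some-sdk/index.js

# Capture the change as a patch file (patch-package stores these under patches/).
diff -u /tmp/index.js.orig node_modules/some-sdk/index.js > patches/some-sdk.patch || true

# Simulate a fresh install wiping the fix, then reapply the saved patch.
cp /tmp/index.js.orig node_modules/some-sdk/index.js
patch node_modules/some-sdk/index.js patches/some-sdk.patch
cat node_modules/some-sdk/index.js   # → export const maxImages = 4;
```

With the real tool, the capture step is `npx patch-package some-sdk` and a `"postinstall": "patch-package"` script in `package.json` handles the reapply step automatically.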
That experience reframed the narrator’s stance: digging into internals is not only possible, it’s often easier now because AI accelerates implementation. The result is a desire to “muck around” more in the systems people depend on—not to annoy maintainers, but to learn, improve reliability, and keep control when upstream quality slips.
The rant then pivots to a broader pattern of degradation across popular developer tools. Code editors and AI coding assistants—Codex, Cursor (including its “Glass” UI), and Claude Code—are described as repeatedly regressing in performance and stability after updates. Even when teams claim they’re rebuilding from scratch, the narrator reports persistent lag, freezes, crashes, and inconsistent behavior across real-world codebases. Cursor’s response, as relayed, is that usefulness and basic functionality come first, leaving performance as a later priority—an approach the narrator treats as unacceptable given how central these tools are to daily work.
From there comes the thesis: closed-source developers can’t be trusted with AI-enabled leverage. AI makes it easier to ship changes quickly, but that speed also magnifies mistakes. When performance and correctness degrade, users can’t fix the underlying code; they can only endure, switch, or fork.
That’s why the narrator points to open-source as a control mechanism. T3 Code is built with Electron for rendering performance, and key infrastructure is designed so others can fork and maintain the bar if quality drops. The project’s fork activity—tens of thousands of users and thousands of forks—is presented as proof that open sourcing creates accountability and optionality. Similar dynamics show up in other open-source releases like Lawn, where creators and teams fork to add integrations and workflow features.
The argument culminates in a call to keep software customizable by users. Closed-source projects can change terms, performance, and even availability faster than before, while also raising the risk of sudden lock-in. The narrator ends by advocating open source not only as a moral stance but as a business strategy, while urging direct support for underfunded open-source maintainers to prevent the "enshittification" of the software ecosystem.
Cornell Notes
The central claim is that closed-source software is becoming unreliable in the AI era: updates arrive faster, performance regresses more often, and users lose the ability to fix issues themselves. A pivotal example is an intern who added major features to closed-source T3 Chat by reverse-engineering bundles and patching dependencies, demonstrating that "workarounds" are often a mindset problem, not a technical necessity. The narrator then connects that lesson to repeated experiences of slopified performance in tools like Codex, Cursor, and Claude Code, where "rebuilds" still fail to deliver stability. In response, the narrator emphasizes open source as a reliability and accountability mechanism, especially when projects are forkable and maintain performance through shared infrastructure.
Why does the narrator treat closed-source software as riskier specifically in the AI era?
What did Yash’s work on T3 Chat demonstrate beyond the technical hack?
How does the narrator connect dependency patching (patch-package) to the broader open-source thesis?
What pattern of failures does the narrator report with AI coding tools like Codex and Cursor?
How does open-sourcing T3 Code function as an accountability mechanism?
Why does the narrator criticize Claude Code’s closed-source approach even though it’s a terminal app?
Review Questions
- What specific mechanism does the narrator claim makes closed-source failures worse in the AI era—speed of change, lack of user control, or something else?
- How did Yash’s patching and PR behavior change the narrator’s view of workarounds and code ownership boundaries?
- What role do forks play in the narrator’s argument, and how does T3 Code’s architecture support that forkability?
Key Points
1. Closed-source tools are increasingly viewed as unreliable because AI-era iteration cycles can degrade performance faster than users can respond.
2. A major mindset shift comes from Yash's approach: when software doesn't meet needs, patch or fix it directly instead of building approval-heavy workarounds.
3. Dependency-patching workflows (like patch-package) illustrate how developers can regain control over third-party behavior, including core features like image generation.
4. Repeated experiences with Codex, Cursor, and Claude Code are used to argue that closed development priorities can trade stability for "usefulness first," harming daily productivity.
5. Open source is framed as an accountability system: if quality drops, forks can immediately preserve or improve the baseline.
6. T3 Code's move to Electron is presented as a performance strategy, paired with architectural choices that make alternative forks feasible.
7. The narrator argues that the ecosystem should remain customizable by users, because closed vendors can change terms, performance, and availability much faster than before.