
AI has rewired my brain

Theo - t3.gg · 6 min read

Based on Theo - t3.gg's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.

TL;DR

DeepSeek’s open-source ecosystem helped shift AI from occasional use to a core daily workflow layer, changing how work gets done end-to-end.

Briefing

DeepSeek’s arrival pushed the shift from “AI as a novelty” to “AI as a daily workflow layer,” and that change has rewired how one developer searches, trusts information, chooses technologies, and even rebuilds software. The core claim isn’t that AI makes people smarter; it’s that AI makes them faster at implementation and iteration—so much faster that old bottlenecks (typing, editor friction, slow research, slow refactors) matter less, while system design and judgment matter more.

On the practical side, search habits have changed. The developer now uses AI for quick, specific queries—like turning a highly detailed ffmpeg command into something they can retrieve instantly—while still relying on traditional search skills when problems get complex, niche, or tied to undocumented edge cases (browser quirks, obscure error-code documentation). The key balancing skill is “AI-aware search”: knowing when a narrow question will get a good AI answer, and when to fall back on traditional “Google-fu” by stripping queries down to the most general useful terms.

AI has also changed how information is evaluated. Instead of trusting claims based on wording, the developer emphasizes trusting “who” more than “what.” That means checking post histories, looking for consistent expertise, and using mental “red flags” to dismiss bad-faith or careless explanations. A detailed example is Apple’s “battery gate,” where the developer argues the reality was battery-health preservation via throttling behavior (slower ramp to peak performance) rather than intentional planned obsolescence. The broader point: AI can make misinformation sound more authoritative by supplying the right vocabulary, so skepticism and domain knowledge still have to do the heavy lifting.

In development, the biggest rewiring is technology selection and building strategy. The developer predicts there won’t be another “T3 stack”-style era-defining framework because today’s tools are trained on the same ecosystems (React, Python, etc.), and AI accelerates iteration so much that small UX or performance gains can’t justify large accuracy losses in AI-assisted tooling. That pushes decisions toward popularity and ecosystem gravity.

More importantly, AI changes how work gets done: it makes “simple, wrong” prototypes cheap to build and cheap to replace. The developer describes a repeatable loop—ship the easy version, hit the wall quickly, then swap in the harder solution once limitations appear. This shows up in architecture choices too: microservices can be beneficial when communication boundaries are clean (e.g., a URL-triggered, Cloudflare Workers-style service for image background removal), while huge, deeply connected codebases remain painful because AI struggles to hold their relationships in context.

Finally, AI shifts attention from code-writing to system design and DX. With AI handling boilerplate and tedious transformations, the developer spends more time on orchestration, type safety, and integration boundaries. The result is a stronger preference for patterns that are easy for models to follow—functional pipelines, clear inputs/outputs, and TypeScript type feedback—plus a more aggressive willingness to “sledgehammer” broken or overly complex systems because code is now comparatively cheap and engineers are expensive. The overall takeaway: AI doesn’t remove expertise; it raises the value of judgment, architecture, and the ability to model a system in one’s head while moving fast enough to test ideas before they calcify into expensive mistakes.

Cornell Notes

DeepSeek and open-source AI tools accelerated a shift from occasional AI use to a full workflow change: faster search for specific tasks, more skepticism about online claims, and a new engineering loop built around cheap iteration. The developer argues that AI handles simple, specific requests well but struggles with complex, niche, or relationship-heavy problems—so traditional search skills and domain judgment still matter. In software building, AI makes “simple-first” prototypes practical because rebuilding and replacing wrong approaches is faster than before. That pushes architecture toward clear boundaries, smaller contexts (avoiding monorepo sprawl), and system design over boilerplate. The payoff is more time spent on orchestration, DX, and integration correctness—especially with type safety and editor feedback.

Why does AI change search habits without eliminating the need for traditional research skills?

AI performs best on “simple but specific” queries—like retrieving an extremely detailed ffmpeg command—where a narrow prompt can directly yield the needed output. But when issues become complex, niche, or tied to undocumented behavior (for example, obscure error-code documentation or weird Safari behaviors), AI becomes slow and unreliable. The developer keeps “Google-fu” as a skill: start with the error code, then progressively remove specifics if results fail, learning how to craft queries that are general enough to surface the right documentation.

What does “trust the who, not the what” look like in an AI-heavy information environment?

The developer treats credibility as something you verify through signals, not phrasing. On Reddit, two replies can sound equally confident; the difference is often in post history and demonstrated expertise aligned with the reader’s context. The “who” matters because AI can help people sound smarter by using correct vocabulary—even when their underlying understanding is wrong. A concrete example is “battery gate”: the developer distinguishes throttling behavior meant to preserve battery health from the “planned obsolescence” framing, arguing that misinformation spreads when people weaponize partial truths.

Why predict there won’t be another “T3 stack” era-defining framework?

The developer links staying power to training data and ecosystem gravity. Current mainstream tools (React, Python, and their surrounding reference material) are deeply embedded in what AI systems learn from. React’s ecosystem and the volume of open-source work make it unlikely to be displaced, and AI-assisted development raises the cost of switching if it reduces AI accuracy. The claim isn’t that React is perfect; it’s that AI makes small gains insufficient when they come with large accuracy tradeoffs.

How does AI change the cost-benefit of building “simple, wrong” solutions?

AI reduces the time to implement and test, so the developer can iterate through wrong paths faster. The loop becomes: build the easy version, discover failures quickly, then replace with the harder correct solution. This is framed as cheaper than spending months perfecting architecture in advance. The developer also argues that many apps never reach large scale; having a simple solution that works for early users (often “you and your mom,” or a small dedicated base) is enough to validate the idea before scaling complexity.
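The “simple, wrong, replaceable” loop is easiest to run when the naive version hides behind the same interface as its eventual replacement. A minimal TypeScript sketch of that idea (the interface and class names are illustrative, not from the video):

```typescript
// A narrow interface lets the naive first version be swapped out
// later without touching any of its callers.
interface LinkStore {
  save(slug: string, url: string): void;
  resolve(slug: string): string | undefined;
}

// The simple, "wrong" first pass: an in-memory Map. Fine for the
// first handful of users; lost on restart, single-process only.
class MemoryLinkStore implements LinkStore {
  private links = new Map<string, string>();
  save(slug: string, url: string): void {
    this.links.set(slug, url);
  }
  resolve(slug: string): string | undefined {
    return this.links.get(slug);
  }
}

// Once the wall is hit (persistence, multiple instances), a
// database-backed class implementing LinkStore replaces this line,
// and the rest of the app is untouched.
const store: LinkStore = new MemoryLinkStore();
store.save("t3", "https://t3.gg");
console.log(store.resolve("t3")); // "https://t3.gg"
```

The interface is the cheap insurance: the prototype is disposable, but the seam it leaves behind is where the “harder correct solution” gets swapped in.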

Why are microservices sometimes better—and sometimes worse—under AI-assisted development?

Microservices are beneficial when boundaries are clean and communication is minimal. The developer’s image background removal example uses a URL-triggered service (image.engineering) that takes an image URL and returns a processed image, enabling horizontal scaling and simplifying the main app. Microservices become painful when the wrong cut point creates complex dependencies. The lesson: AI makes it easier to build and refactor, but architecture still determines whether the system stays understandable.
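A boundary that clean can be sketched as a single fetch-style handler: one image URL in, one processed image out. This is a hedged illustration of the pattern, not the actual image.engineering API—the endpoint shape, the `src` query parameter, and the `removeBackground` helper are all assumptions:

```typescript
// Cloudflare Workers-style handler. The only communication across
// the service boundary is "image URL in, processed image out",
// which keeps the cut point clean and the service easy to scale.
async function handleRequest(request: Request): Promise<Response> {
  const src = new URL(request.url).searchParams.get("src");
  if (!src) {
    return new Response("missing ?src= image URL", { status: 400 });
  }
  const upstream = await fetch(src);
  if (!upstream.ok) {
    return new Response("could not fetch source image", { status: 502 });
  }
  const input = await upstream.arrayBuffer();
  const output = await removeBackground(input);
  return new Response(output, {
    headers: { "Content-Type": "image/png" },
  });
}

// Stand-in for the real processing step; here it returns the
// input unchanged so the sketch stays self-contained.
async function removeBackground(image: ArrayBuffer): Promise<ArrayBuffer> {
  return image;
}
```

Because the contract is just a URL and a response body, the main app never imports this service's code—the boundary stays minimal no matter how the internals change.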

What role do type safety and functional structure play in making AI coding more reliable?

AI struggles with mapping complex relationships across codebases, especially when logic is deeply nested across many files. Patterns that are easy to read top-to-bottom—functional pipelines with clear inputs and outputs—are easier for models to follow. TypeScript type feedback and well-defined return types help the editor and AI catch mistakes earlier. The developer also emphasizes full-stack integration: keeping frontend and backend in the same repo can let type errors surface when API fields change, reducing breakage from AI-suggested refactors.
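Those preferences can be sketched as a small TypeScript pipeline: pure steps with explicit input and output types, composed top-to-bottom so both the compiler and a model can follow the data flow (the types and function names are illustrative):

```typescript
// Explicit types at each stage: if an upstream field changes,
// the compiler flags every downstream consumer immediately.
type RawUser = { id: number; name: string; email: string | null };
type ValidUser = { id: number; name: string; email: string };
type Greeting = { userId: number; message: string };

// Each step is a pure function: one input shape, one output shape.
const validate = (users: RawUser[]): ValidUser[] =>
  users.filter((u): u is ValidUser => u.email !== null);

const greet = (users: ValidUser[]): Greeting[] =>
  users.map((u) => ({ userId: u.id, message: `Hello, ${u.name}!` }));

// The pipeline reads top-to-bottom, the same order it executes.
const run = (input: RawUser[]): Greeting[] => greet(validate(input));

const out = run([
  { id: 1, name: "Ada", email: "ada@example.com" },
  { id: 2, name: "Bob", email: null },
]);
// out contains one greeting, for Ada; Bob is filtered out.
```

Renaming or removing a field on `RawUser` breaks `validate` and `greet` at compile time rather than at runtime—the kind of feedback loop the developer argues makes AI-suggested refactors safe.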

Review Questions

  1. Where should AI-assisted search be trusted, and where should it be treated as unreliable—according to the developer’s criteria?
  2. Explain the “simple-first, replace after hitting the wall” loop and why AI changes its economics.
  3. What architectural signals make a codebase easier for AI tools to work with, and what signals make them struggle?

Key Points

  1. DeepSeek’s open-source ecosystem helped shift AI from occasional use to a core daily workflow layer, changing how work gets done end-to-end.

  2. AI is strongest for simple, specific queries, but traditional search and domain knowledge remain essential for complex, niche, or undocumented problems.

  3. Online credibility increasingly depends on verifying the person (“who”) rather than trusting confident phrasing (“what”), because AI can make misinformation sound more authoritative.

  4. Technology choice is increasingly shaped by ecosystem gravity and AI training data; the developer doubts another era-defining “stack” will emerge like T3.

  5. AI makes “simple, wrong” prototypes cheap to build and cheap to replace, encouraging rapid iteration instead of long upfront perfection.

  6. Clean architectural boundaries matter more than ever: microservices can help when communication is minimal, while sprawling, deeply connected codebases can hinder AI due to context overload.

  7. Type safety and editor-integrated feedback become borderline essential for scaling AI-assisted development without breaking integrations.

Highlights

The developer now uses AI for highly specific retrieval tasks (like ffmpeg command generation) more than Google, but still falls back to traditional search for niche documentation and browser edge cases.
A major theme is skepticism: AI can dress up bad-faith or incorrect claims with the right vocabulary, so “who” and evidence matter more than confident wording.
AI changes engineering economics: rebuilding wrong approaches is faster, making “simple-first” iteration a practical default rather than a risky shortcut.
Microservices aren’t automatically better or worse; they work when the cut point creates minimal, well-defined communication—like a URL-based image-processing service.
With AI handling boilerplate, the developer’s attention shifts toward system design, DX, and type-safe integration boundaries rather than writing every detail by hand.

Topics

  • AI Workflow
  • Information Trust
  • Technology Selection
  • System Design
  • Iteration Strategy
