
Build Anything with Claude Code, Here’s How

David Ondrej · 5 min read

Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Create and maintain a single project description markdown file to anchor Claude Code’s understanding of goals, stack choices, and MVP scope.

Briefing

Claude Code can be used to build a working AI-powered CRM end-to-end—even for non-experts—if the workflow is treated like engineering: set up automation (hooks/commands/sub-agents), write a clear project spec, choose an efficient tech stack, and use disciplined prompting plus context engineering. The core takeaway is that speed comes from repeatable structure and “grounding” (documentation checks and targeted debugging), not from letting the AI freewheel.

The walkthrough starts with a fresh project and immediately creates a single “project description.md” file to anchor the agent’s understanding. That file defines the goal: a web-based CRM with an AI chatbot on the left and customer data management on the right. From there, Claude Code is switched into Plan mode to generate an initial stack and codebase structure. A key decision is documenting the chosen tech stack (Next.js, Tailwind, Supabase, an OpenAI model, and the Vercel AI SDK) and then refining the architecture to match an MVP timeline rather than a decades-long production system.
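As a concrete illustration, a starter spec might look like the sketch below; the headings and wording are hypothetical, not taken from the video, but the goal, stack, and MVP framing are the ones described above:

```markdown
# Project Description

## Goal
A web-based CRM: AI chatbot on the left, customer data management on the right.

## Tech Stack
- Next.js + Tailwind (front end)
- Supabase (database)
- OpenAI model via the Vercel AI SDK (chat + streaming)
- Deployed on Vercel

## Scope
MVP timeline, not a decades-long production system. Prefer the simplest
architecture that ships the chatbot and contact management.
```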

To make Claude Code reliable, the setup is optimized with custom hooks and prompt suffixes. The creator emphasizes that small acts of prompt discipline compound: adding “think hard” and “answer in short” increases reasoning effort while keeping outputs concise, and “ultra think” is reserved for the highest-stakes steps. Commands and hooks also reduce friction by automating recurring instructions, while “/clear” and “/compact” prevent context rot by resetting or summarizing conversation history.
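Claude Code hooks are defined in settings.json. A minimal sketch of the prompt-suffix idea, assuming a UserPromptSubmit hook (whose stdout Claude Code appends to the prompt as context); the hook schema is Claude Code’s, but the echoed wording is just an example mirroring the video’s suffixes:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Think hard. Answer in short.'"
          }
        ]
      }
    ]
  }
}
```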

A major portion of the build becomes a debugging and documentation exercise. The agent generates a Next.js app skeleton, then the process shifts to fixing errors caused by environment and configuration issues (like directory naming with spaces) and by model/version mismatches. The workflow repeatedly uses web grounding via Perplexity-style research to pull up-to-date documentation, because model knowledge can lag behind current library releases. When the Vercel AI SDK changes (notably moving toward V5), the build breaks in ways that require migration-specific fixes—such as message format differences between front-end “chat” and back-end “model message” expectations.
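That front-end/back-end split is concrete in AI SDK v5: the useChat client sends UIMessage objects, while the model layer expects ModelMessage objects, so the route handler has to convert between them. A minimal sketch of a v5-style route handler (the file path and model choice are assumptions, not from the video):

```ts
// app/api/chat/route.ts — minimal AI SDK v5 route handler sketch
import { openai } from '@ai-sdk/openai';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  // The front end (useChat) posts UIMessage objects...
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'), // assumed model; the video uses an OpenAI model
    // ...which must be converted to the ModelMessage format the model expects.
    messages: convertToModelMessages(messages),
  });

  // Stream the result back in the UI message format useChat can render.
  return result.toUIMessageStreamResponse();
}
```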

To accelerate troubleshooting, the creator adds tooling and delegation. A Playwright MCP server is used so Claude Code can drive the UI like an automated tester—clicking controls, inspecting network requests, and isolating why the AI isn’t responding or why streaming output isn’t rendering. Sub-agents are also introduced for specialized tasks like web research validation, allowing the main agent to delegate multi-source checks while continuing implementation.
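For reference, an MCP server like Playwright’s is typically registered in a project-level .mcp.json (or via `claude mcp add`); a minimal sketch, assuming the official @playwright/mcp package:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, the agent gains browser tools for navigating, clicking, typing, and reading network and console output, which it can call during debugging.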

The CRM eventually reaches a functional baseline: the UI contrast issues are corrected, the AI response streaming is fixed, and the system can create contacts in the CRM database. The final result is less a “one-shot build” and more a reusable engineering system: internal documentation files (including V4 vs V5 notes) grow over time, commands and hooks are refined, and the project is pushed to GitHub with safe handling of secrets. The broader message is that the real leverage comes from building an AI-first workflow that improves continuously—through debugging, documentation, and automation—until the agent can reliably ship features faster than traditional solo development.

Cornell Notes

Claude Code can build a full AI-powered CRM when the work is structured like software engineering: a clear project spec, an optimized Claude Code setup (hooks/commands/sub-agents), and disciplined context management. The build succeeds by repeatedly grounding decisions in current documentation using web research, then iterating through targeted debugging rather than one-shot fixes. Version mismatches—especially around the Vercel AI SDK moving toward V5—cause breakages that require migration-specific changes, including front-end vs back-end message format differences. Playwright MCP is used to automate UI testing and isolate why the AI isn’t responding or why streaming output fails to render. Over time, internal markdown docs accumulate so future errors can be resolved faster with less re-research.

What are the four “pillars” for using Claude Code effectively, and how do they show up in the CRM build?

The pillars are: (1) Claude Code setup (hooks, commands, prompts, sub-agents, and settings.json automation), (2) clear project intent (a concise spec of what the CRM should do), (3) developer skill (understanding the stack, debugging errors, and making architecture tradeoffs), and (4) prompting/context engineering knowledge (guiding the model with the right constraints and reasoning effort). In the build, the CRM goal is written into project description.md, Plan mode is used for early architecture decisions, custom hooks enforce “think hard / answer in short,” and debugging relies on reading errors and using Playwright MCP to test UI behavior.

Why does the creator insist on a main markdown file like project description.md, and what does it accomplish?

The main markdown file acts as a persistent “source of truth” for the agent. It lets Claude Code quickly understand the project’s purpose, core features, chosen tech stack, and time horizon (MVP vs long-term production). The build repeatedly updates this file—documenting stack decisions and later adding V4 vs V5 migration notes—so future runs start with better context and fewer repeated mistakes.

How do hooks and prompt suffixes change outcomes during implementation and debugging?

Hooks automate recurring instructions and improve consistency. Examples include appending “think hard, answer in short” to increase reasoning effort while keeping responses concise, and using “ultra think” for high-stakes steps. There are also hooks for error-trace explanations (so the agent can interpret stack traces faster) and for forcing specific reasoning-effort levels. The result is faster iteration: fewer manual prompt edits and more reliable debugging behavior.

What role does web grounding (Perplexity-style research) play when library versions change?

Model knowledge can lag behind current releases, so documentation checks prevent outdated guidance. The build repeatedly pulls the latest docs for the Anthropic API and for the Vercel AI SDK, then compares what the code is doing against official guidance. When Vercel AI SDK V5 message formats differ from what the agent expects, web research plus migration notes become essential to fix errors and restore correct streaming and message handling.

How does Playwright MCP help isolate front-end bugs compared with manual debugging?

Playwright MCP lets Claude Code operate the UI like an automated test: navigating to pages, clicking buttons, typing prompts, and inspecting network requests and console output. This is used to determine whether the AI request is being sent, whether responses arrive, and why streaming output isn’t rendering. Instead of guessing, the agent can observe concrete browser-side behavior and iterate on the smallest likely causes.

What specific kind of failure appears during the Vercel AI SDK migration, and how is it addressed?

A recurring failure is that the front-end chat expects one message format while the back end expects the “model message” format, leading to errors like “invalid prompt” or schema-validation failures. The fix strategy is to read the V5 migration docs, update internal context (CLAUDE.md and docs files), and adjust the code so the UI-to-back-end message conversion matches V5 expectations. The build also documents these differences so future fixes are faster.
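On the client side of that same split, v5’s useChat produces UIMessage objects whose content is a list of typed parts rather than a plain string, which is exactly where format assumptions break. A minimal sketch, with illustrative component markup:

```tsx
'use client';
// Minimal AI SDK v5 chat client sketch
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  // v5: messages are UIMessage objects; sendMessage posts to /api/chat
  const { messages, sendMessage, status } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}:{' '}
          {/* v5 UIMessages carry typed parts, not a plain content string */}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          disabled={status !== 'ready'}
        />
      </form>
    </div>
  );
}
```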

Review Questions

  1. When would you use Plan mode versus auto-accept mode in a Claude Code workflow, and why?
  2. How do “/clear” and “/compact” reduce context rot, and what kinds of changes trigger their use?
  3. What debugging signals would you look for first when the UI shows no AI response: network requests, console errors, or back-end logs—and why?

Key Points

  1. Create and maintain a single project description markdown file to anchor Claude Code’s understanding of goals, stack choices, and MVP scope.

  2. Optimize Claude Code with hooks, commands, and sub-agents so repeated instructions and debugging workflows happen automatically.

  3. Use Plan mode for architecture and major decisions; switch to cheaper/faster modes for implementation once the structure is stable.

  4. Ground critical implementation details in up-to-date documentation via web research, especially when libraries (like the Vercel AI SDK) change versions.

  5. Treat debugging as stepwise engineering: isolate the smallest failing component, add targeted logs, and verify behavior with automated UI testing when possible.

  6. Use Playwright MCP to reproduce user actions and inspect network/console behavior, turning “it doesn’t work” into observable evidence.

  7. Accumulate internal documentation (including V4 vs V5 migration notes) so future runs don’t repeat the same research and fixes.

Highlights

The build’s speed comes from repeatable structure: a persistent project spec plus custom hooks that enforce reasoning effort and concise outputs.
Version mismatches—especially Vercel AI SDK V5 message-format changes—break the app in non-obvious ways, requiring migration-specific fixes and documentation updates.
Playwright MCP turns Claude Code into an automated front-end debugger by clicking, typing, and inspecting network requests until the AI response pipeline works.
Internal markdown docs grow into a “memory layer,” letting future fixes start from known migration rules rather than re-researching.
