5 simple Claude Code workflows you must have - beginners guide

David Ondrej · 6 min read

Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use two root Markdown files—coms.md for collaboration rules and cloth.md for the main system prompt—to coordinate multiple Claude Code agents through a shared action log.

Briefing

Claude Code workflows are presented as a practical way to turn an LLM coding assistant into a repeatable engineering system—especially for beginners. The centerpiece is a multi-agent setup where three separate Claude Code instances coordinate through shared instructions and an action log, letting them build an interactive 3D “mountain terrain” game while each agent takes a defined role. The workflow hinges on two Markdown files in the project root: one describing how agents should collaborate (coms.md) and another acting as the main system prompt for the agents (cloth.md). A “cursor agent” oversees setup and helps tame the complexity of multi-agent behavior, while an action log records what each agent is doing and what comes next. Agents are then launched with a prompt that forces them to write down their next task, execute it, and then read coms.md again to continue—so work doesn’t stall or overlap.

After roles are assigned (agent one, agent two, agent three), the agents run in parallel but are kept aligned by the shared coms.md action log and the explicit instruction to keep going until completion. The guide emphasizes operational controls that make this feasible: running multiple terminals inside Cursor, switching each Claude Code instance to auto-accept, and setting the model (the transcript mentions “opus”). The result is a coordinated pipeline where agent one implements structure, agent two sets up a 3D scene (including Three.js library usage), and agent three joins to continue the build—demonstrating how multi-agent orchestration can reduce coordination overhead for complex tasks.

The second workflow shifts from building to quality control: using Claude Code to review pull requests in stages. A staged prompt first uses GitHub CLI commands to understand what a PR changes, then checks out the PR’s branch, runs the app to verify behavior, and finally reviews each changed file by name. The example PR adds a 1500-character limit for “ideas” in an app; Claude Code helps confirm the limit is enforced by testing input before and after the change. The staged approach is framed as a way to move faster without losing rigor—especially when PRs are large or subtle. The guide also argues for a hybrid review model: humans catch logic issues while LLMs can spot patterns and potential bugs humans miss.

Three more workflows follow. One uses Claudia, an open-source UI layer for managing multiple Claude Code agents without relying on terminal-heavy workflows; it requires installing Rust and has reportedly reached 8,000+ GitHub stars in three weeks. Another uses MCP (via /mcp in Claude Code) to connect external tools—specifically Vector tasks—by generating an API key and installing an MCP server using an “npm install -g remote” command, then wiring the JSON schema into an MCP client. The final workflow targets maintainability: refactoring large files with a “surgery” mindset. It’s organized into three phases—build a safety net with extensive tests on a separate branch, do surgical planning by identifying low-risk extraction targets (50–150-line blocks), and execute incrementally with tests after each extraction to avoid introducing regressions.

Across all five workflows, the throughline is control: shared documentation for multi-agent coordination, staged prompts for PR review, UI or MCP integrations for workflow comfort, and test-driven, incremental refactoring for long-term code health.

Cornell Notes

Claude Code workflows are presented as repeatable systems for beginners, with the strongest emphasis on coordination and safety. The multi-agent workflow uses two root Markdown files—one defining collaboration rules that doubles as a shared action log, and one serving as the main system prompt—so three Claude Code instances can build a project together without stepping on each other. Pull request review is handled in stages: first understand the PR via GitHub CLI, then verify behavior by running the app on the PR branch, and finally review each changed file using file-by-file instructions. For productivity, the guide also shows a UI-based approach (Claudia), MCP integration to connect task tools like Vector, and a refactoring method that treats large-file changes like surgery: tests first, careful planning, then incremental extraction with checks after each step.

How does the multi-agent Claude Code setup prevent agents from working at cross-purposes?

It relies on shared project-root documentation and an action log. coms.md defines how agents should collaborate, while cloth.md acts as the main system prompt. Each agent writes what it’s doing and what should happen next into the action log, then reads coms.md again before continuing. Agents are also explicitly assigned roles (agent one, agent two, agent three) and launched with a prompt that forces task-by-task execution: identify the agent, state the next task, complete it, then re-check coms.md to keep the workflow moving.
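The video doesn’t show the exact file contents, but based on the roles and logging behavior described, a hypothetical coms.md might be laid out like this (all names and entries are illustrative):

```markdown
# coms.md — agent collaboration rules and action log (hypothetical sketch)

## Roles
- Agent 1: project structure and core files
- Agent 2: 3D scene setup
- Agent 3: joins later to continue the build

## Rules
- Before starting, read this file top to bottom.
- After finishing a task, append a log entry: who you are, what you did, what comes next.
- Re-read this file before picking up your next task.

## Action log
- [Agent 1] DONE: created project skeleton. NEXT: Agent 2 sets up the scene.
```

The key design choice is that the log doubles as the coordination channel: agents never message each other directly, they only read and append.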

Why is staged PR review safer and faster than a single “review everything” prompt?

Staging breaks review into checkpoints that match real engineering risk. Stage one runs GitHub CLI commands to understand the PR’s intent and changes. Stage two checks out the PR branch and verifies behavior by running the app and testing key scenarios. The final stage then reviews changed files one by one using the filenames, which is especially helpful when PRs are complex or long, because it reduces the cognitive load and speeds up understanding of diffs.
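The transcript’s exact prompt isn’t reproduced here, but a staged review prompt of this shape (with a placeholder PR number) might read:

```markdown
## Stage 1: Understand
Run `gh pr view <PR-number>` and `gh pr diff <PR-number>` to summarize what the PR
changes and why.

## Stage 2: Verify behavior
Run `gh pr checkout <PR-number>`, start the app, and manually test the changed behavior.

## Final stage: File-by-file review
List the changed files (`gh pr diff <PR-number> --name-only`) and review each one by
name, flagging bugs, edge cases, and style issues.
```

Each stage produces output the next stage can rely on, which is what lets the review stay rigorous while moving faster than a single monolithic prompt.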

What concrete example is used to validate the PR review workflow?

A PR that adds a 1500-character limit for “ideas.” The workflow verifies the behavior by testing input before and after the change: after applying the PR, attempts to exceed the limit are cut off at 1500 characters, confirming the enforcement works as intended.
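The app’s actual code isn’t shown in the video, but the behavior being verified can be sketched as a minimal helper (function and constant names are hypothetical):

```python
MAX_IDEA_LENGTH = 1500  # character limit introduced by the example PR

def save_idea(text: str) -> str:
    """Truncate an idea to the limit before saving (hypothetical helper)."""
    return text[:MAX_IDEA_LENGTH]

# Mirror the review workflow: test input just under and just over the limit.
assert len(save_idea("a" * 1499)) == 1499  # under the limit: unchanged
assert len(save_idea("a" * 2000)) == 1500  # over the limit: cut off at 1500
```

This is exactly the kind of before/after boundary test the staged review runs against the live app rather than in isolation.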

How does the Claudia workflow change the experience of running multiple agents?

Claudia provides a UI for managing multiple Claude Code agents in parallel, reducing reliance on terminal-heavy setup. The tradeoff is that it requires installing Rust. The guide positions it as a good option for people who want multi-agent orchestration without living inside terminals and IDE panels.

What does MCP add to Claude Code workflows, and how is Vector integrated?

MCP lets Claude Code connect to external tools through standardized server configurations. The guide uses /mcp to manage MCP servers, then integrates Vector by generating an API key in Vector’s advanced settings and following a two-step setup: run an MCP server install command (shown as “npm install -g remote”) and copy a JSON schema into an MCP client configuration (Cursor or Claude Code’s MCP client). Once connected, Claude Code can run tasks stored in Vector instead of using separate task managers.
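MCP client configurations generally share a common JSON shape; the sketch below uses a hypothetical server name and package, since the install command quoted in the transcript (“npm install -g remote”) appears truncated:

```json
{
  "mcpServers": {
    "vector": {
      "command": "npx",
      "args": ["-y", "vector-mcp-server"],
      "env": {
        "VECTOR_API_KEY": "<key from Vector's advanced settings>"
      }
    }
  }
}
```

Once a block like this is in the MCP client config (Cursor or Claude Code), the agent can call the server’s tools the same way it calls built-in ones.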

What are the three phases of the large-file refactoring workflow, and why do they matter?

Phase 1 is a safety net: write tests covering close to 100% of behavior and refactor on a separate branch (or staged branches) to avoid breaking production. Phase 2 is surgical planning: identify complexity hotspots and choose the lowest-risk extraction targets first (the prompt suggests extracting the safest 50–150-line blocks). Phase 3 is incremental execution: extract in small chunks and run tests after each extraction, rather than attempting a one-shot refactor that can introduce regressions and become unmanageable.
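A prompt implementing the three phases might be structured like this (file and branch names are hypothetical):

```markdown
## Phase 1: Safety net
Create a branch refactor/split-big-module. Before changing any code, write tests that
cover close to 100% of the target file's current behavior.

## Phase 2: Surgical planning
Identify complexity hotspots in the target file. Rank candidate 50-150-line blocks by
extraction risk and pick the safest targets first.

## Phase 3: Incremental execution
Extract one block at a time into its own module. Run the full test suite after every
extraction; stop and revert immediately if any test fails.
```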

Review Questions

  1. In the multi-agent workflow, what specific information must each agent write before it continues to the next step?
  2. In staged PR review, which step is responsible for verifying behavior in the running application, and what commands or actions support that verification?
  3. Why does the refactoring workflow insist on incremental extraction with tests after each step instead of one large refactor pass?

Key Points

  1. Use two root Markdown files—coms.md for collaboration rules and cloth.md for the main system prompt—to coordinate multiple Claude Code agents through a shared action log.
  2. Assign explicit agent roles (agent one, agent two, agent three) and launch agents with a task-by-task prompt that forces them to re-read coms.md after each completed task.
  3. Review pull requests in stages: understand the PR via GitHub CLI, verify behavior by running the app on the PR branch, then review changed files individually by filename.
  4. Speed up PR comprehension by combining LLM review with human logic checks; LLMs can find issues humans miss, while humans catch reasoning and design problems.
  5. For comfort and scale, consider Claudia’s UI to manage parallel agents, but expect setup friction from installing Rust.
  6. Integrate external tools with MCP by generating API keys (e.g., in Vector) and wiring the MCP server and JSON schema into an MCP client configuration.
  7. Treat large-file refactors like surgery: build a strong test safety net, plan low-risk extraction targets first, and execute incrementally with tests after each extraction.

Highlights

  • Multi-agent coordination is driven by shared documentation: coms.md acts as an action log that agents must consult after every completed task.
  • Staged PR review includes both understanding (GitHub CLI) and verification (running the app on the PR branch), then file-by-file analysis using changed filenames.
  • Claudia offers a UI alternative for multi-agent management, requiring Rust but reducing terminal dependence.
  • MCP integration connects Claude Code to task systems like Vector by generating an API key and installing/configuring an MCP server.
  • Large-file refactoring is framed as a three-phase, test-first surgical process with incremental extraction to prevent regressions.

Topics

  • Multi-Agent Claude Code
  • Staged Pull Request Review
  • Claudia UI
  • MCP Integration
  • Large File Refactoring
