5 simple Claude Code workflows you must have – a beginner's guide
Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Claude Code workflows are presented as a practical way to turn an LLM coding assistant into a repeatable engineering system, especially for beginners. The centerpiece is a multi-agent setup in which three separate Claude Code instances coordinate through shared instructions and an action log, letting them build an interactive 3D “mountain terrain” game while each agent takes a defined role. The workflow hinges on two Markdown files in the project root: one describing how agents should collaborate (coms.md) and another acting as the main system prompt for the agents (claude.md). A “Cursor agent” oversees setup and helps tame the complexity of multi-agent behavior, while an action log records what each agent is doing and what comes next. Each agent is then launched with a prompt that forces it to write down its next task, execute it, and re-read coms.md before continuing, so work doesn’t stall or overlap.
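To make the coordination concrete, here is a minimal sketch of what coms.md might contain; the section names and task entries are illustrative, not taken from the video:

```markdown
# coms.md: multi-agent collaboration rules and action log

## Rules
- Before starting work, append your next task to the Action Log below.
- Complete one task at a time, then re-read this file before picking the next.
- Never edit a file another agent has claimed in an open task.

## Action Log
- [agent one] DONE: scaffold project structure (index.html, src/, assets/)
- [agent two] IN PROGRESS: set up Three.js scene with camera and lighting
- [agent three] NEXT: generate mountain terrain mesh and add player controls
```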
After roles are assigned (agent one, agent two, agent three), the agents run in parallel but are kept aligned by the shared coms.md action log and the explicit instruction to keep going until completion. The guide emphasizes the operational controls that make this feasible: running multiple terminals inside Cursor, switching each Claude Code instance to auto-accept, and setting the model (the transcript mentions Opus). The result is a coordinated pipeline in which agent one implements the project structure, agent two sets up the 3D scene (using the Three.js library), and agent three joins to continue the build, demonstrating how multi-agent orchestration can reduce coordination overhead on complex tasks.
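A sketch of the per-agent kickoff prompt, paraphrased from the loop the video describes (write the next task, execute it, re-read coms.md); the exact wording below is illustrative, not from the transcript:

```text
You are agent two of three. Read coms.md in the project root.
Write your next task into the Action Log, then complete it.
When done, mark the task DONE, re-read coms.md, and pick the next
unclaimed task. Keep going until the project is complete.
```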
The second workflow shifts from building to quality control: using Claude Code to review pull requests in stages. A staged prompt first uses GitHub CLI commands to understand what a PR changes, then checks out the PR’s branch, runs the app to verify behavior, and finally reviews each changed file by name. The example PR adds a 1500-character limit for “ideas” in an app; Claude Code helps confirm the limit is enforced by testing input before and after the change. The staged approach is framed as a way to move faster without losing rigor—especially when PRs are large or subtle. The guide also argues for a hybrid review model: humans catch logic issues while LLMs can spot patterns and potential bugs humans miss.
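As a rough sketch, the three stages map onto GitHub CLI commands like these; the PR number and the dev-server command are placeholders, not taken from the video:

```sh
# Stage 1: understand what the PR changes
gh pr view 123                 # title, description, discussion
gh pr diff 123                 # full diff of the changes

# Stage 2: check out the branch and verify behavior
gh pr checkout 123
npm run dev                    # placeholder: however this app is started

# Stage 3: review each changed file by name
gh pr diff 123 --name-only     # list changed files to review one by one
```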
Three more workflows follow. One uses Claudia, an open-source UI layer for managing multiple Claude Code agents without relying on terminal-heavy workflows; it requires installing Rust and has reportedly reached 8,000+ GitHub stars in three weeks. Another uses MCP (via /mcp in Claude Code) to connect external tools, specifically Vectal tasks, by generating an API key, installing an MCP server with the “npm install -g remote” command quoted in the transcript, and wiring the JSON configuration into an MCP client. The final workflow targets maintainability: refactoring large files with a “surgery” mindset. It is organized into three phases: build a safety net with extensive tests on a separate branch, do surgical planning by identifying low-risk extraction targets (blocks of roughly 50–150 lines), and execute incrementally, running tests after each extraction to avoid introducing regressions.
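For the MCP step, a minimal configuration sketch, assuming the common mcpServers JSON shape used by MCP clients; the server name, package in the args array, and API key variable are placeholders, since the transcript garbles the actual package name:

```json
{
  "mcpServers": {
    "vectal": {
      "command": "npx",
      "args": ["-y", "some-vectal-mcp-server"],
      "env": { "VECTAL_API_KEY": "<your API key>" }
    }
  }
}
```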
Across all five workflows, the throughline is control: shared documentation for multi-agent coordination, staged prompts for PR review, UI or MCP integrations for workflow comfort, and test-driven, incremental refactoring for long-term code health.
Cornell Notes
Claude Code workflows are presented as repeatable systems for beginners, with the strongest emphasis on coordination and safety. The multi-agent workflow uses two root Markdown files—one to define collaboration rules and one as an action log—so three Claude Code instances can build a project together without stepping on each other. Pull request review is handled in stages: first understand the PR via GitHub CLI, then verify behavior by running the app on the PR branch, and finally review each changed file using file-by-file instructions. For productivity, the guide also shows a UI-based approach (Claudia), MCP integration to connect task tools like Vectal, and a refactoring method that treats large-file changes like surgery: tests first, careful planning, then incremental extraction with checks after each step.
- How does the multi-agent Claude Code setup prevent agents from working at cross-purposes?
- Why is staged PR review safer and faster than a single “review everything” prompt?
- What concrete example is used to validate the PR review workflow?
- How does the Claudia workflow change the experience of running multiple agents?
- What does MCP add to Claude Code workflows, and how is Vectal integrated?
- What are the three phases of the large-file refactoring workflow, and why do they matter?
Review Questions
- In the multi-agent workflow, what specific information must each agent write before it continues to the next step?
- In staged PR review, which step is responsible for verifying behavior in the running application, and what commands or actions support that verification?
- Why does the refactoring workflow insist on incremental extraction with tests after each step instead of one large refactor pass?
Key Points
1. Use two root Markdown files, coms.md for collaboration rules and claude.md for the main system prompt, to coordinate multiple Claude Code agents through a shared action log.
2. Assign explicit agent roles (agent one, agent two, agent three) and launch each agent with a task-by-task prompt that forces it to re-read coms.md after each completed task.
3. Review pull requests in stages: understand the PR via the GitHub CLI, verify behavior by running the app on the PR branch, then review changed files individually by filename.
4. Speed up PR comprehension by combining LLM review with human logic checks; LLMs can find issues humans miss, while humans catch reasoning and design problems.
5. For comfort and scale, consider Claudia’s UI to manage parallel agents, but expect setup friction from installing Rust.
6. Integrate external tools with MCP by generating API keys (e.g., in Vectal) and wiring the MCP server command and JSON configuration into an MCP client.
7. Treat large-file refactors like surgery: build a strong test safety net, plan low-risk extraction targets first, and execute incrementally with tests after each extraction (see the sketch after this list).
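A minimal sketch of the surgery loop from key point 7, assuming a git repo whose test suite runs via an npm script; the branch names, test command, and file names are illustrative:

```sh
# Phase 1: safety net on a separate branch
git checkout -b refactor/big-file-safety-net
npm test                          # placeholder test command: baseline must be green

# Phase 2: planning; pick one low-risk block of roughly 50-150 lines to extract

# Phase 3: execute incrementally
git checkout -b refactor/extract-terrain-helpers
# ...move the chosen block into its own module and update imports...
npm test                          # re-run the suite after every extraction
git commit -am "Extract terrain helpers from main.js"
# repeat: one small extraction, one test run, one commit at a time
```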