INSANE Parallel Coding with Claude Code + Cursor MCP Server
Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Parallel coding between two AI coding clients becomes practical when a middle layer coordinates shared context. The core result here is a working setup where Claude Code and Cursor collaborate on the same local project at the same time, using an MCP server as a message bus so each side can request updates, exchange status, and keep the other aligned on what has been implemented.
The workflow starts with an MCP server running on localhost:8765. Both clients connect to the same MCP “sync bridge,” enabling bidirectional communication. A simple handshake confirms connectivity: Cursor signals that it’s ready to work, Claude Code checks for incoming messages, and the two begin exchanging prompts like “Hello client 2” and “Let’s build a simple app.” Early tests show the communication loop works (messages are received and responses trigger further work), though the creator notes that purely ad-hoc back-and-forth is not yet effective for structured project planning.
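The handshake above can be sketched as a minimal in-memory message bus. The function names (`sendMessage`, `checkMessages`) and client ids below are illustrative assumptions, not the actual MCP tool interface exposed by the sync bridge:

```javascript
// Minimal in-memory sketch of the message-bus semantics: each client
// has a queue of pending messages that the other side can poll.
// Function names and client ids are assumptions for illustration.
const queues = new Map(); // clientId -> array of pending messages

function sendMessage(to, from, text) {
  if (!queues.has(to)) queues.set(to, []);
  queues.get(to).push({ from, text });
}

function checkMessages(clientId) {
  const pending = queues.get(clientId) ?? [];
  queues.set(clientId, []); // drain the inbox on read
  return pending;
}

// Handshake: Cursor announces readiness, Claude Code polls and replies.
sendMessage("claude-code", "cursor", "Hello client 2");
const inbox = checkMessages("claude-code"); // → [{ from: "cursor", text: "Hello client 2" }]
sendMessage("cursor", "claude-code", "Let's build a simple app");
```

The real bridge runs this exchange over localhost:8765, but the queue-and-poll shape is the same: each side periodically checks for messages and reacts to what arrived.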
To make parallel work more coherent, a planning step is introduced using Gemini 2.5 Pro. A prompt asks Gemini to generate a step-by-step plan for two roles: one client handles the front end and the other handles the back end. The target project is a local photo upload app that stores images in a local directory named images and runs entirely on the developer machine. The plan is then distributed so each client can start implementing its assigned portion.
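The distributed plan can be pictured as a simple role-to-tasks mapping. The structure and task wording below are assumptions for illustration; the actual Gemini 2.5 Pro output is free-form text:

```javascript
// Hypothetical shape of the plan handed to each client. The real plan
// from Gemini 2.5 Pro is free-form text; this structure is illustrative.
const plan = {
  project: "local photo upload app",
  storage: "images/", // uploads land in this local directory
  roles: {
    frontend: [
      "build the upload form with image preview",
      "call the back end's upload endpoint",
    ],
    backend: [
      "serve the upload API locally",
      "save uploaded files into images/",
    ],
  },
};

// Each client receives only the tasks for its assigned role.
function tasksFor(role) {
  return plan.roles[role] ?? [];
}
```

Splitting the plan by role is what keeps the two clients from stepping on each other: each side implements its own list and uses the bridge only to agree on the seam between them.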
Once both clients run in full auto mode, the project scaffolding appears quickly: a back-end directory with server.js and a front-end directory with the UI. Communication continues through the MCP bridge, including checks for messages and updates about the API contract. The pace makes it hard for a human to follow in real time, but the system keeps iterating autonomously: front-end and back-end tasks progress while each side waits for the other’s confirmations.
The final test validates the collaboration end-to-end. The back end server runs on localhost:3000, the front end loads an index page with a photo upload interface, and selecting an image produces a preview and completes an upload successfully. The uploaded file appears in the expected local images directory, confirming that the back-end implementation and front-end integration landed correctly.
The remaining gap is coordination granularity. The next goal is to move from continuous autonomous updates to staged milestones: for example, have the back-end client produce three alternative implementations, then pause; have the front-end client produce three alternatives; then synchronize with the server and share progress so both sides know exactly where they are in the plan. The initial experiment is framed as an early, promising proof that parallel coding with two LLM-driven clients can work when a shared MCP communication layer and a structured plan are in place.
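The proposed milestone gate could be sketched as a barrier that releases only once every client has reported its batch complete. This is entirely hypothetical; the video describes milestone-based synchronization only as a future goal:

```javascript
// Hypothetical milestone gate: neither client proceeds past a milestone
// until every client has reported its batch of implementations complete.
function createMilestone(clientIds) {
  const reported = new Set();
  return {
    // Returns true once every client has checked in at this milestone.
    report(clientId) {
      reported.add(clientId);
      return clientIds.every((id) => reported.has(id));
    },
  };
}

const gate = createMilestone(["frontend", "backend"]);
gate.report("backend");  // false: front end still producing its alternatives
gate.report("frontend"); // true: both sides synced, share progress via the server
```

Compared with continuous autonomous updates, a gate like this gives the human a defined pause point at which both sides know exactly where they are in the plan.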
Cornell Notes
A localhost MCP server (port 8765) acts as a shared message bridge between Claude Code and Cursor, letting both clients coordinate while coding the same local project. After a basic handshake proves messages flow both ways, Gemini 2.5 Pro generates a step-by-step plan assigning front-end and back-end responsibilities for a photo upload app. Both clients then run in full auto mode, exchanging status updates through the MCP tool and aligning on the API contract. The collaboration succeeds end-to-end: the front end uploads an image, and the back end stores it in the local images directory. The next improvement is milestone-based synchronization so each side completes a set of implementations before pausing and sharing progress.
How does the MCP server enable two AI coding clients to work on the same codebase simultaneously?
Why does the workflow switch from manual message exchange to using Gemini 2.5 Pro planning?
What concrete signals show the front end and back end stayed aligned during autonomous execution?
How was the finished system validated?
What coordination problem remains, and what milestone-based approach is proposed?
Review Questions
- What role does the MCP sync bridge play in keeping Claude Code and Cursor coordinated, and what evidence suggests messages are successfully exchanged?
- How did Gemini 2.5 Pro’s plan structure reduce drift between front-end and back-end tasks compared with manual prompting?
- What specific end-to-end test confirmed that the photo upload app worked, and where did the uploaded file end up?
Key Points
1. A local MCP server on localhost:8765 can function as a shared message bus so Claude Code and Cursor coordinate bidirectionally.
2. A basic readiness/handshake exchange confirms the communication loop works before attempting full project collaboration.
3. Gemini 2.5 Pro planning assigns clear front-end vs back-end responsibilities and defines concrete project requirements (photo upload, local images directory).
4. Running both clients in full auto mode enables rapid scaffolding and iterative integration while they exchange status updates through the MCP bridge.
5. End-to-end validation succeeded: the front end uploaded an image, and the back end stored it in the images directory.
6. The next development target is milestone-based synchronization (e.g., three implementations per side, then pause and share progress) to improve control and reduce drift.