
INSANE Parallel Coding with Claude Code + Cursor MCP Server

All About AI · 5 min read

Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A local MCP server on localhost:8765 can function as a shared message bus so Claude Code and Cursor coordinate bidirectionally.

Briefing

Parallel coding between two AI coding clients becomes practical when a middle layer coordinates shared context. The core result here is a working setup where Claude Code and Cursor collaborate on the same local project at the same time, using an MCP server as a message bus so each side can request updates, exchange status, and keep the other aligned on what’s been implemented.

The workflow starts with an MCP server running on localhost:8765. Both clients connect to the same MCP “sync bridge,” enabling bidirectional communication. A simple handshake confirms connectivity: Cursor signals that it’s ready to work, Claude Code checks for incoming messages, and the two begin exchanging prompts like “Hello client 2” and “Let’s build a simple app.” Early tests show the communication loop functions—messages are received and responses trigger further work—though the creator notes that purely ad-hoc back-and-forth isn’t yet effective for structured project planning.
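The handshake loop described above can be approximated with an in-memory sketch: each client posts messages to a named channel and polls its own channel for replies. The names here (`SyncBridge`, `send`, `checkMessages`, the channel names) are illustrative only, not the actual MCP server’s API.

```javascript
// Minimal in-memory sketch of the "sync bridge" idea: a queue per channel,
// drained each time a client checks for messages.
class SyncBridge {
  constructor() {
    this.queues = new Map(); // channel name -> pending messages
  }
  send(channel, message) {
    if (!this.queues.has(channel)) this.queues.set(channel, []);
    this.queues.get(channel).push(message);
  }
  checkMessages(channel) {
    const pending = this.queues.get(channel) || [];
    this.queues.set(channel, []); // drain the queue
    return pending;
  }
}

// The handshake from the video, approximated:
const bridge = new SyncBridge();
bridge.send("claude-code", "Hello client 2");      // Cursor signals readiness
const inbox = bridge.checkMessages("claude-code"); // Claude Code polls
if (inbox.length > 0) {
  bridge.send("cursor", "Let's build a simple app"); // reply triggers work
}
```

The real bridge runs as a server on localhost:8765; this sketch only models the message-queue semantics that make the coordination loop work.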

To make parallel work more coherent, a planning step is introduced using Gemini 2.5 Pro. A prompt asks Gemini to generate a step-by-step plan for two roles: one client handles the front end and the other handles the back end. The target project is a local photo upload app that stores images in a local directory named images and runs entirely on the developer machine. The plan is then distributed so each client can start implementing its assigned portion.
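A role-split plan like the one described might take a shape along these lines; the step text below is paraphrased for illustration, not the actual plan Gemini 2.5 Pro generated.

```javascript
// Illustrative shape of a plan split by role, so each client can be handed
// only its own slice of the work.
const plan = {
  project: "local photo upload app",
  storageDir: "images",
  roles: {
    backend: [
      "Create server.js with an upload endpoint",
      "Save uploaded files into the local images directory",
      "Publish the API contract (route, method, field names) to the bridge",
    ],
    frontend: [
      "Build an index page with a file picker and preview",
      "Wire the upload form to the published API contract",
      "Confirm integration over the bridge once uploads succeed",
    ],
  },
};

// Each client receives only its assigned portion of the plan:
function tasksFor(role) {
  return plan.roles[role] || [];
}
```

Distributing disjoint task lists from one shared plan is what gives both clients a common target without requiring them to negotiate every step over the bridge.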

Once both clients run in full auto mode, the project scaffolding appears quickly: a back end directory with server.js and a front end directory with the UI. Communication continues through the MCP bridge, including checks for messages and updates about the API contract. The pace makes it hard for a human to follow in real time, but the system keeps iterating autonomously—front end and back end tasks progress while each side waits for the other’s confirmations.

The final test validates the collaboration end-to-end. The back end server runs on localhost:3000, the front end loads an index page with a photo upload interface, and selecting an image produces a preview and completes an upload successfully. The uploaded file appears in the expected local images directory, confirming that the back-end implementation and front-end integration landed correctly.

The remaining gap is coordination granularity. The next goal is to move from continuous autonomous updates to staged milestones: for example, have the back-end client produce three alternative implementations, then pause; have the front-end client produce three alternatives; then synchronize with the server and share progress so both sides know exactly where they are in the plan. The initial experiment is framed as an early, promising proof that parallel coding with two LLM-driven clients can work when a shared MCP communication layer and a structured plan are in place.
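The staged-milestone idea amounts to a barrier: each client reports a fixed number of alternative implementations, then waits until the other side has done the same before both continue. A minimal sketch of that protocol, with all names illustrative:

```javascript
// Barrier-style milestone: neither side proceeds until both have reported
// the agreed number of alternative implementations.
class Milestone {
  constructor(perSide = 3) {
    this.perSide = perSide;
    this.done = { frontend: [], backend: [] };
  }
  report(role, implementation) {
    this.done[role].push(implementation);
  }
  ready(role) {
    return this.done[role].length >= this.perSide;
  }
  bothReady() {
    // the synchronization point: share progress only when both sides arrive
    return this.ready("frontend") && this.ready("backend");
  }
}

const m = new Milestone(3);
["v1", "v2", "v3"].forEach((v) => m.report("backend", v));
console.log(m.bothReady()); // false: back end pauses, waiting on front end
["v1", "v2", "v3"].forEach((v) => m.report("frontend", v));
console.log(m.bothReady()); // true: both sides sync and continue the plan
```

Compared with continuous autonomous updates, this makes each client’s position in the plan explicit at every pause point.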

Cornell Notes

A localhost MCP server (port 8765) acts as a shared message bridge between Claude Code and Cursor, letting both clients coordinate while coding the same local project. After a basic handshake proves messages flow both ways, Gemini 2.5 Pro generates a step-by-step plan assigning front-end and back-end responsibilities for a photo upload app. Both clients then run in full auto mode, exchanging status updates through the MCP tool and aligning on the API contract. The collaboration succeeds end-to-end: the front end uploads an image, and the back end stores it in the local images directory. The next improvement is milestone-based synchronization so each side completes a set of implementations before pausing and sharing progress.

How does the MCP server enable two AI coding clients to work on the same codebase simultaneously?

An MCP server runs locally on localhost:8765 and both clients connect to the same MCP sync bridge. That shared bridge lets each client send and receive messages—one side can signal readiness, the other can check for incoming messages, and both can exchange updates like “Hello client 2” or requests such as “Let’s build a simple app.” This turns coordination into a continuous message loop rather than isolated, one-client coding.

Why does the workflow switch from manual message exchange to using Gemini 2.5 Pro planning?

Ad-hoc communication works for proving connectivity, but it’s hard to plan a full project through constant back-and-forth. Gemini 2.5 Pro is used to produce a step-by-step plan that assigns roles: one client builds the front end and the other builds the back end. The plan also defines concrete requirements (a local photo upload app that stores images in a local directory named images), giving both clients a shared target and sequence.

What concrete signals show the front end and back end stayed aligned during autonomous execution?

During full auto runs, the system repeatedly checks the MCP sync server for messages and responds with updates. A key alignment moment comes when the back end publishes an API contract and the front end confirms it by testing against that contract and updating its code accordingly. The ongoing message checks and contract confirmation are the practical indicators that the two sides weren’t drifting.

How was the finished system validated?

The back end server runs on localhost:3000, and the front end loads an index page with a photo upload UI. Uploading a file produces a preview and completes successfully. Finally, the uploaded image is found in the expected local images directory, confirming both the UI-to-API wiring and the server-side storage behavior.

What coordination problem remains, and what milestone-based approach is proposed?

Continuous autonomous coding makes it difficult to track progress and can blur where each client is in the plan. The proposed fix is staged synchronization: have the back-end client produce three implementations, then wait; have the front-end client produce three implementations, then wait; then share information so both clients know the current path before continuing. This aims to make parallel work more controllable and less chaotic.

Review Questions

  1. What role does the MCP sync bridge play in keeping Claude Code and Cursor coordinated, and what evidence suggests messages are successfully exchanged?
  2. How did Gemini 2.5 Pro’s plan structure reduce drift between front-end and back-end tasks compared with manual prompting?
  3. What specific end-to-end test confirmed that the photo upload app worked, and where did the uploaded file end up?

Key Points

  1. A local MCP server on localhost:8765 can function as a shared message bus so Claude Code and Cursor coordinate bidirectionally.

  2. A basic readiness/handshake exchange confirms the communication loop works before attempting full project collaboration.

  3. Gemini 2.5 Pro planning assigns clear front-end vs back-end responsibilities and defines concrete project requirements (photo upload, local images directory).

  4. Running both clients in full auto mode enables rapid scaffolding and iterative integration while they exchange status updates through the MCP bridge.

  5. End-to-end validation succeeded: the front end uploaded an image, and the back end stored it in the images directory.

  6. The next development target is milestone-based synchronization (e.g., three implementations per side, then pause and share progress) to improve control and reduce drift.

Highlights

Claude Code and Cursor can collaborate on the same local project when both connect to the same MCP sync bridge and exchange messages for readiness and progress.
Gemini 2.5 Pro’s step-by-step plan turns parallel coding from a chat-like experiment into a role-based build process with shared goals.
The photo upload app worked end-to-end: localhost:3000 served the back end, the UI previewed and uploaded an image, and the file appeared in the local images directory.

Topics

Mentioned

  • MCP