How to Build & Deploy Remote MCP Servers | MCP Trilogy | CampusX

CampusX · 5 min read

Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Remote MCP servers centralize tools on an internet-accessible machine, enabling multiple clients to share one server while accepting higher latency than local MCP.

Briefing

Remote MCP servers let teams run MCP tools from a different machine—often a more powerful server on the internet—so multiple clients can share the same capabilities. The trade-off is speed: local MCP stays fast because communication happens on one machine, while remote MCP typically runs over the network and can feel slower. Still, the practical upside is clear for real deployments: enterprise setups are expected to be remote, and remote servers make it possible to centralize compute and share one toolset across many users.

The walkthrough builds a remote MCP server end-to-end using the MCP library (from the earlier local setup) and then deploys it so others can use it. First comes a minimal “test remote MCP server” with basic tools: adding two numbers, generating a random number within a range, plus a simple resource that returns server information. The key technical change from local to remote is in the MCP run configuration: instead of the local-only stdio transport, the server uses Streamable HTTP and binds to 0.0.0.0 on a chosen port, making it reachable from outside the host machine. After starting the server, the setup is verified using the MCP Inspector by connecting over Streamable HTTP, listing resources, and running the random-number tool to confirm correct behavior.
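For concreteness, a minimal sketch of such a server, assuming the FastMCP library (the tool names, resource URI, and port below are illustrative, not necessarily the video’s exact choices):

```python
import random

from fastmcp import FastMCP

mcp = FastMCP("test-remote-server")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def random_number(low: int, high: int) -> int:
    """Return a random integer in [low, high]."""
    return random.randint(low, high)

@mcp.resource("resource://server-info")
def server_info() -> str:
    """A simple resource that returns server information."""
    return "Test remote MCP server, version 0.1"

if __name__ == "__main__":
    # The key change from the local setup: Streamable HTTP transport,
    # bound to 0.0.0.0 so machines other than this host can reach it.
    mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)
```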

Deployment happens through FastMCP Cloud, a free service at the time of the tutorial. The process is: create a GitHub repository, push the code, then use FastMCP Cloud’s “Deploy from your own code” flow to build and publish the server. Once deployed, the server’s URL can be copied and shared. On the client side, users open Claude Desktop and, depending on their plan, add a custom remote MCP connector under Connectors using the provided URL. The tutorial demonstrates that with the deployed server, a remote client can call the tools (e.g., generate a random number between two bounds) and receive results.

The main goal then shifts from a toy server to a remote expense tracker. Rather than rebuilding everything, the expense-tracker MCP code from the prior local version is inserted into the same project, along with the required categories.json file. The updated code is tested in the MCP Inspector, pushed to GitHub, and redeployed so the new tools appear in Claude Desktop. A deployment issue surfaces: the SQLite database ends up read-only on the server, preventing new expense inserts. The fix is to adjust the code to create a writable directory (using a suggested change) so the database can be updated after deployment.

Finally, two limitations are addressed. First, the expense tracker initially runs synchronously, which blocks concurrent users, so the code is updated to use async/await patterns with the aiosqlite driver in place of the synchronous sqlite3 module, enabling parallel handling of tool calls and database operations. Second, free-plan users can’t add custom connectors directly, so a workaround uses a local proxy MCP server that connects Claude Desktop to the remote server indirectly. The tutorial also flags a deeper logical flaw: without authentication, any user can see every other user’s expenses because the database lacks user scoping and there’s no reliable way to verify who is calling. The next steps are framed around adding authentication and building a custom MCP client rather than relying solely on Claude Desktop connectors.

Cornell Notes

Remote MCP servers run on a different machine (often internet-accessible), enabling multiple clients to share one centralized toolset at the cost of higher latency versus local MCP. The tutorial first creates a minimal remote MCP server (add two numbers, generate random numbers) by switching the transport to Streamable HTTP and binding to 0.0.0.0, then verifies it with MCP Inspector. It deploys the server via FastMCP Cloud by pushing code to GitHub and publishing from that repository, producing a shareable URL. The expense tracker is then converted into a remote MCP server by swapping in the expense-tracker code and categories.json, fixing a read-only SQLite deployment issue by writing to a writable directory. The final improvements include async support using aiosqlite to avoid blocking concurrent users and a proxy workaround for free-plan connector limitations, while authentication remains the major unresolved requirement for multi-user privacy.

What concrete change turns a local MCP server into a remote-accessible one in this setup?

The server’s run configuration changes the transport and network binding. Instead of the local-style stdio run, the remote version uses Streamable HTTP and sets the host to 0.0.0.0 (with a defined port). That combination makes the MCP endpoints reachable from other machines over the network, not just from the same host.

How is the remote server validated before deployment?

After starting the server, the setup uses MCP Inspector to connect using the Streamable HTTP transport. Once connected, it lists resources and tools, then runs the random-number tool with a min/max range to confirm the server responds correctly (e.g., returning a random value within the requested bounds).
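The video drives these checks through the Inspector UI; as a rough programmatic equivalent (an assumption, not the tutorial’s method), a FastMCP client can exercise the same endpoints over Streamable HTTP, assuming the server sketched earlier is running locally on port 8000:

```python
import asyncio

from fastmcp import Client

async def main() -> None:
    # /mcp is FastMCP's default Streamable HTTP endpoint path.
    async with Client("http://localhost:8000/mcp") as client:
        print(await client.list_tools())      # should include add, random_number
        print(await client.list_resources())  # should include server-info
        result = await client.call_tool("random_number", {"low": 1, "high": 100})
        print(result)  # a value within the requested bounds

asyncio.run(main())
```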

Why did adding expenses fail after deploying the expense tracker, and what was the fix?

Expense insertion failed because the SQLite database ended up in read-only mode on the deployed server. The workaround is to modify the code so it creates/uses a writable directory at runtime (the tutorial applies a suggested code change that sets up a new directory), allowing the deployed server to write updates to the database.
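A minimal sketch of that fix, assuming the standard sqlite3 module and an illustrative directory and file name (the tutorial’s suggested change may differ in detail):

```python
import os
import sqlite3

# The deployed checkout can be read-only, so keep the database in a
# directory the server process is allowed to create and write to.
DATA_DIR = os.path.join(os.path.expanduser("~"), ".expense-tracker")
os.makedirs(DATA_DIR, exist_ok=True)

DB_PATH = os.path.join(DATA_DIR, "expenses.db")
conn = sqlite3.connect(DB_PATH)  # now writable after deployment
```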

What performance problem appears with the initial remote MCP server, and how is it addressed?

The initial implementation is synchronous and blocking: while one user calls a tool (like Add Expense), other users must wait because tool execution and database operations block the server. The fix converts tool functions and database operations to async/await and swaps the synchronous sqlite3 driver for aiosqlite, enabling concurrent handling of multiple users’ requests.
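A sketch of one converted tool, assuming the aiosqlite driver and illustrative table and column names:

```python
import aiosqlite
from fastmcp import FastMCP

mcp = FastMCP("expense-tracker")
DB_PATH = "expenses.db"  # in practice, placed in the writable directory above

@mcp.tool()
async def add_expense(amount: float, category: str, note: str = "") -> str:
    """Insert one expense; awaiting the driver yields the event loop,
    so other users' tool calls can proceed concurrently."""
    async with aiosqlite.connect(DB_PATH) as db:
        await db.execute(
            "INSERT INTO expenses (amount, category, note) VALUES (?, ?, ?)",
            (amount, category, note),
        )
        await db.commit()
    return f"Recorded {amount} under {category}"
```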

How do free-plan users connect to the remote MCP server if custom connectors aren’t available?

A proxy workaround is used. A local MCP proxy server runs on the user’s machine and connects to the remote MCP server. Claude Desktop then connects to the local proxy (which is allowed on free plans), and the proxy forwards requests to the remote server, effectively bridging the gap without needing the custom connector UI.
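One way to build such a proxy, assuming FastMCP’s proxy helper (the tutorial’s exact proxy code may differ, and the URL is a placeholder):

```python
from fastmcp import FastMCP

# Wrap the deployed remote server behind a local MCP server.
proxy = FastMCP.as_proxy(
    "https://your-deployment.fastmcp.app/mcp",  # placeholder deployment URL
    name="expense-tracker-proxy",
)

if __name__ == "__main__":
    # Claude Desktop connects to this local server over stdio (the default
    # transport); the proxy forwards each request to the remote endpoint.
    proxy.run()
```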

What major security/privacy flaw remains, and why?

Without authentication, the system can’t reliably associate requests with a specific user. The database schema lacks a user identifier column, so when any user asks for “my” expenses, the server can only return the shared dataset—meaning every user may see everyone else’s expenses. The tutorial flags this as a faulty design for a remote multi-user service.
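To illustrate the missing scoping (a hypothetical schema, not a fix the video implements; authentication has to come first), a user-aware design would need something like a user_id column:

```python
import sqlite3

conn = sqlite3.connect("expenses.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS expenses (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id TEXT NOT NULL,  -- absent from the tutorial's shared schema
        amount REAL NOT NULL,
        category TEXT NOT NULL,
        note TEXT
    )
    """
)

# "My expenses" can only be scoped per caller once authentication can
# supply a trustworthy user_id for each request.
rows = conn.execute(
    "SELECT amount, category FROM expenses WHERE user_id = ?",
    ("alice",),  # placeholder identity
).fetchall()
```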

Review Questions

  1. What network and transport settings are required for the MCP server to accept remote requests, and why do they matter?
  2. How does switching from synchronous code to async/await plus AIOSQLite change the server’s ability to handle concurrent users?
  3. What two separate issues prevent a production-ready multi-user remote expense tracker in this tutorial, and how are they planned to be solved?

Key Points

  1. Remote MCP servers centralize tools on an internet-accessible machine, enabling multiple clients to share one server while accepting higher latency than local MCP.
  2. Switching to Streamable HTTP and binding to 0.0.0.0 is the core configuration step for making an MCP server reachable remotely.
  3. FastMCP Cloud deployment is driven by pushing code to GitHub and using “Deploy from your own code,” which produces a shareable server URL.
  4. A deployed SQLite database may become read-only; creating/using a writable directory in code is necessary for write operations like adding expenses.
  5. Synchronous MCP tool/database handling blocks concurrent users; converting to async/await and using aiosqlite improves concurrency.
  6. Free-plan connector limits can be bypassed with a local proxy MCP server that forwards requests to the remote server.
  7. Without authentication and user scoping in the database, any user can potentially view all expenses, making auth a required next step.

Highlights

Remote access hinges on Streamable HTTP plus binding the server to 0.0.0.0, not just changing the code logic.
FastMCP Cloud turns a GitHub-backed MCP project into a production URL that can be shared and connected to from Claude Desktop.
The expense tracker initially fails on deployment due to a read-only SQLite database, fixed by writing to a writable directory.
Async support (async/await + aiosqlite) is used to remove blocking behavior so multiple users can call tools concurrently.
A local proxy MCP server provides a workaround for free-plan users who can’t add custom remote connectors directly.

Topics

  • Remote MCP Servers
  • FastMCP Cloud Deployment
  • MCP Inspector
  • Expense Tracker MCP
  • Async aiosqlite
