
Claude's Model Context Protocol is here... Let's test it

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

MCP standardizes how AI clients connect to external systems by exposing two server primitives: resources (read-only context) and tools (state-changing actions).

Briefing

Model Context Protocol (MCP) is positioning itself as a plug-and-play standard for giving AI assistants reliable access to external data and actions, turning “chat” into something closer to an API-driven workflow. Built by Anthropic (the team behind Claude), MCP defines a common way for an AI client (like Claude Desktop) to discover what a server can provide (resources) and what it can do (tools). The practical payoff is straightforward: instead of custom integrations for every model and every app, developers can expose capabilities through a shared protocol so LLMs can fetch context and trigger server-side operations with fewer brittle glue layers.

The tutorial demonstrates that workflow end to end using a storage bucket, a PostgreSQL database, and an existing REST API hosted on Savola (sponsored). The setup mirrors a typical production stack: user-uploaded images live in object storage, profile and relationship data sit in PostgreSQL, and a TypeScript REST service already handles application logic. The MCP server then becomes the bridge that lets Claude pull the right data for prompts and call actions that mutate real application state, such as creating matches or scheduling dates, rather than merely generating text.

At the core of the implementation are two server-defined concepts. “Resources” are read-only data fetches (for example, a database query that returns candidate horses and their relationship status). “Tools” are actions with side effects (for example, writing to the database or invoking an endpoint that creates matches). The code uses Zod schema validation to constrain the shape of inputs and outputs so the model can’t “hallucinate” arbitrary arguments. That matters because MCP requires the model to decide what parameters to pass; typed schemas and descriptions make those decisions more deterministic and reduce malformed calls.
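The resource/tool split can be pictured in plain TypeScript. This is only an illustrative sketch with hypothetical names (`singleHorses`, `createMatch`) and an in-memory array standing in for the tutorial's PostgreSQL table; it is not the actual MCP SDK:

```typescript
// Illustrative sketch of the resource/tool split -- not the real MCP SDK.
// "singleHorses" and "createMatch" are hypothetical names for this example.

type Horse = { name: string; status: "single" | "matched" };

// In-memory stand-in for the PostgreSQL table from the tutorial.
const horses: Horse[] = [
  { name: "Roach", status: "single" },
  { name: "Epona", status: "single" },
];

// A resource: read-only, no side effects.
function singleHorses(): Horse[] {
  return horses.filter((h) => h.status === "single");
}

// A tool: state-changing side effect, so it should sit behind permissions.
function createMatch(a: string, b: string): string {
  const first = horses.find((h) => h.name === a);
  const second = horses.find((h) => h.name === b);
  if (!first || !second) throw new Error("unknown horse");
  first.status = "matched";
  second.status = "matched";
  return `${a} matched with ${b}`;
}
```

The point of the separation is visible even in this toy version: the resource can be called freely for context, while the tool mutates state and is the natural place to gate writes.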

Once the server is defined, the tutorial shows how to run it locally over standard I/O (stdio) for testing, or swap transport mechanisms for deployment (including server-sent events or HTTP). On the client side, Claude Desktop reads a configuration file listing one or more MCP servers and the command needed to start them. After attaching the server, Claude can fetch resources as context—such as querying which horses are single—and then use tools to perform actions like updating the database, with permissions gating write operations.

The broader claim behind the hype is that MCP makes LLM applications more reliable and interoperable, effectively creating “APIs for APIs” so different models and clients can plug into the same capability layer. The transcript also flags the stakes: Anthropic leadership predicts rapid AI-driven coding adoption, but the same automation could introduce serious failure modes if agents mishandle permissions or data. Still, the immediate takeaway is concrete: with MCP, developers can expose their existing data and services through a standardized interface that lets Claude act on real systems with structured, validated inputs.

Cornell Notes

Model Context Protocol (MCP) is a standard for connecting AI clients (like Claude Desktop) to external systems through a server that exposes two things: resources (read-only data fetches) and tools (actions that can change state). The tutorial builds an MCP server backed by a storage bucket for images, a PostgreSQL database for profile/relationship data, and an existing REST API for side-effect operations. Zod schema validation is used to enforce the expected input/output shapes so the model is less likely to send incorrect arguments. Once configured in the client, Claude can fetch database-backed context and—after permission—call tools to write updates, making LLM apps more interoperable and less custom-integration heavy.

What are “resources” and “tools” in MCP, and why does that distinction matter?

Resources are for fetching information with no side effects—such as querying PostgreSQL for horse profiles and relationship status. Tools represent actions that can cause changes—such as creating matches or scheduling dates by calling a REST endpoint or writing to the database. This separation helps keep read operations safe and makes it clearer when the model is allowed to perform state-changing operations.

How does schema validation (Zod) improve reliability in an MCP server?

MCP requires the model to choose arguments for server functions. Zod enforces a specific data shape (types and structure) for those arguments, so Claude is less likely to hallucinate random parameters. The server can reject or prevent malformed calls because the inputs must match the validated schema.
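The validation principle can be shown without the dependency. Zod itself is an npm package; this hand-rolled validator mimics its `safeParse` result shape (success/failure) for a hypothetical “create match” tool call:

```typescript
// Minimal stand-in for Zod-style schema validation. Zod is an npm package;
// this hand-rolled validator only illustrates the principle: arguments the
// model supplies must match a declared shape before any side effect runs.

type ParseResult<T> =
  | { success: true; data: T }
  | { success: false; error: string };

// Hypothetical schema for a "create match" tool call.
function parseMatchArgs(
  input: unknown
): ParseResult<{ horseA: string; horseB: string }> {
  if (typeof input !== "object" || input === null) {
    return { success: false, error: "expected an object" };
  }
  const { horseA, horseB } = input as Record<string, unknown>;
  if (typeof horseA !== "string" || typeof horseB !== "string") {
    return { success: false, error: "horseA and horseB must be strings" };
  }
  return { success: true, data: { horseA, horseB } };
}

// A well-formed model call passes; a hallucinated shape is rejected.
const good = parseMatchArgs({ horseA: "Roach", horseB: "Epona" });
const bad = parseMatchArgs({ horseA: 42 });
```

With real Zod the same idea is a one-line schema (e.g. an object of two strings), and the field descriptions double as hints that steer the model toward valid arguments.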

How does the tutorial connect existing app infrastructure to Claude using MCP?

It reuses a typical stack: a storage bucket holds user-uploaded images, PostgreSQL stores profile and relationship data, and a TypeScript REST API already implements side-effect logic. The MCP server wraps these capabilities: a resource fetches data from PostgreSQL, and a tool triggers the REST logic to mutate application state.

What does the client configuration step do in practice?

Claude Desktop reads a configuration file where developers list one or more MCP servers and the command to run them (e.g., the command that starts the server’s main entry point). After restarting Claude, the client can attach to the running MCP server so Claude can request resources as context and invoke tools when permitted.
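For reference, Claude Desktop’s configuration file (`claude_desktop_config.json`) uses an `mcpServers` map; the server name and file path below are placeholders for whatever the built server is called:

```json
{
  "mcpServers": {
    "horse-app": {
      "command": "node",
      "args": ["/absolute/path/to/build/index.js"]
    }
  }
}
```

Each entry names a server and gives the command (plus arguments) the client runs to start it; after a restart, the server appears as an attachable source of resources and tools.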

Why is transport choice mentioned (stdio vs HTTP/SSE), and what changes between local testing and deployment?

For local testing, the tutorial uses standard I/O (stdio) as the transport layer to connect the client and server. For cloud deployment, it notes that other transports like server-sent events or HTTP can be used, which changes how messages flow between the client and server but keeps the MCP resource/tool interface consistent.
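That separation can be pictured as an interface the server logic never looks past. The real MCP SDK ships its own stdio/SSE/HTTP transport classes; the minimal `Transport` interface and implementations below are a conceptual sketch only:

```typescript
// Conceptual sketch: server logic stays the same while the transport varies.
// The real MCP SDK ships stdio/SSE/HTTP transports; these are illustrative.

interface Transport {
  send(message: string): void;
}

// Local testing: write messages to standard output (stdio-style).
class StdioTransport implements Transport {
  send(message: string): void {
    console.log(message);
  }
}

// Stand-in for a remote transport: collect messages in memory
// (a real deployment would stream them over HTTP or SSE).
class InMemoryTransport implements Transport {
  public sent: string[] = [];
  send(message: string): void {
    this.sent.push(message);
  }
}

// The "server" only knows the Transport interface, so swapping
// stdio for HTTP/SSE leaves the resource/tool logic untouched.
function announceResource(transport: Transport, name: string): void {
  transport.send(JSON.stringify({ type: "resource", name }));
}
```

Because the resource/tool layer depends only on the interface, moving from local stdio testing to a deployed HTTP/SSE server is a transport swap, not a rewrite.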

What does it mean that MCP can act like an “API for APIs”?

Many MCP servers effectively wrap existing APIs behind a standardized protocol. That reduces one-off integrations because different LLM clients can connect to the same MCP server interface, and the model can discover available capabilities (resources/tools) in a consistent way.

Review Questions

  1. In an MCP server, what would you implement as a resource versus a tool, and what risk does each category help manage?
  2. How does Zod schema validation reduce failure modes when Claude decides what arguments to pass to server functions?
  3. Describe the steps needed to make an MCP server usable from Claude Desktop, from running the server to attaching it in the client configuration.

Key Points

  1. MCP standardizes how AI clients connect to external systems by exposing two server primitives: resources (read-only context) and tools (state-changing actions).
  2. Anthropic’s MCP is designed to reduce brittle, model-specific integrations by providing a shared protocol layer for LLM apps.
  3. Zod schema validation helps prevent incorrect or hallucinated function arguments by enforcing expected input/output shapes for MCP calls.
  4. An MCP server can wrap existing infrastructure (object storage, PostgreSQL, and a TypeScript REST API) so Claude can both fetch data and trigger updates.
  5. Client-side support (e.g., Claude Desktop) is required; configuration lists the MCP server(s) and the command to start them.
  6. Transport layers differ between local testing (stdio) and deployment (HTTP/SSE), while the resource/tool interface remains the same.
  7. Permission gating is essential when tools can mutate data, since LLM-driven automation can otherwise create serious data-loss or integrity risks.

Highlights

MCP turns “context” into a structured, server-backed capability: Claude can query real PostgreSQL-backed data as prompt context instead of relying on text-only guessing.
The tutorial’s reliability move is Zod validation—typed schemas constrain what the model is allowed to send to server functions.
The client workflow is practical: add an MCP server command in Claude Desktop config, restart, and then attach so Claude can fetch resources and call tools.
MCP can wrap existing services, effectively functioning as an “API for APIs” that improves interoperability across models and clients.
