Claude's Model Context Protocol is here... Let's test it
Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
MCP standardizes how AI clients connect to external systems by exposing two server primitives: resources (read-only context) and tools (state-changing actions).
Briefing
Model Context Protocol (MCP) is positioning itself as a plug-and-play standard for giving AI assistants reliable access to external data and actions, turning "chat" into something closer to an API-driven workflow. Built by Anthropic (the team behind Claude), MCP defines a common way for an AI client (like Claude Desktop) to discover what a server can provide (resources) and what it can do (tools). The practical payoff is straightforward: instead of custom integrations for every model and every app, developers can expose capabilities through a shared protocol so LLMs can fetch context and trigger server-side operations with fewer brittle glue layers.
The tutorial demonstrates that workflow end-to-end using a storage bucket, a PostgreSQL database, and an existing REST API hosted on Savola (sponsored). The setup mirrors a typical production stack: user-uploaded images live in object storage, profile and relationship data sit in PostgreSQL, and a TypeScript REST service already handles application logic. The MCP server then becomes the bridge that lets Claude pull the right data for prompts and call actions that mutate real application state (such as creating matches or scheduling dates) rather than merely generating text.
At the core of the implementation are two server-defined concepts. "Resources" are read-only data fetches (for example, a database query that returns candidate horses and their relationship status). "Tools" are actions with side effects (for example, writing to the database or invoking an endpoint that creates matches). The code uses Zod schema validation to constrain the shape of inputs and outputs so the model can't "hallucinate" arbitrary arguments. That matters because MCP requires the model to decide what parameters to pass; typed schemas and descriptions make those decisions more deterministic and reduce malformed calls.
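The resource/tool split can be sketched without the real SDK. The snippet below is a minimal, self-contained illustration: the `Resource`/`Tool` types, the in-memory horse data, and the hand-rolled validator are all invented for this sketch and stand in for what the tutorial does with Anthropic's MCP TypeScript SDK and Zod schemas.

```typescript
// Sketch only: these helper types are invented for illustration; the real
// MCP TypeScript SDK exposes a richer registration API.

// A resource is a read-only fetch: no side effects, just context for the model.
type Resource<T> = { name: string; read: () => Promise<T> };

// A tool is an action with side effects and a validated input shape.
type Tool<I, O> = {
  name: string;
  validate: (input: unknown) => I; // throws on malformed arguments
  run: (input: I) => Promise<O>;
};

// Hypothetical in-memory stand-in for the tutorial's PostgreSQL table.
const horses = [
  { id: 1, name: "Thunder", single: true },
  { id: 2, name: "Biscuit", single: true },
];

// Resource: which horses are single? (read-only query)
const singleHorses: Resource<string[]> = {
  name: "single-horses",
  read: async () => horses.filter((h) => h.single).map((h) => h.name),
};

// Tool: create a match (mutates state). The validator plays the role the
// Zod schema plays in the tutorial: reject bad arguments before any write.
const createMatch: Tool<{ a: number; b: number }, string> = {
  name: "create-match",
  validate: (input) => {
    const i = input as { a?: unknown; b?: unknown };
    if (typeof i?.a !== "number" || typeof i?.b !== "number") {
      throw new Error("create-match expects numeric horse ids");
    }
    return { a: i.a, b: i.b };
  },
  run: async ({ a, b }) => {
    horses.forEach((h) => {
      if (h.id === a || h.id === b) h.single = false;
    });
    return `matched ${a} and ${b}`;
  },
};
```

The key property this models: a malformed tool call fails validation before any state changes, which is exactly what the typed Zod schemas buy the real server.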
Once the server is defined, the tutorial shows how to run it locally via standard IO for testing, or swap transport mechanisms for deployment (including server-sent events or HTTP). On the client side, Claude Desktop reads a configuration file listing one or more MCP servers and the command needed to start them. After attaching the server, Claude can fetch resources as context (such as querying which horses are single) and then use tools to perform actions like updating the database, with permissions gating write operations.
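In practice the client configuration step amounts to a small JSON file. A sketch of a Claude Desktop `claude_desktop_config.json` entry, with the server name and file path invented for illustration:

```json
{
  "mcpServers": {
    "horse-app": {
      "command": "node",
      "args": ["build/server.js"]
    }
  }
}
```

Claude Desktop spawns each listed command and talks to it over standard IO; moving to an HTTP or SSE transport for deployment changes how messages travel, not how the resources and tools are defined.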
The broader claim behind the hype is that MCP makes LLM applications more reliable and interoperable, effectively creating "APIs for APIs" so different models and clients can plug into the same capability layer. The transcript also flags the stakes: Anthropic leadership predicts rapid AI-driven coding adoption, but the same automation could introduce serious failure modes if agents mishandle permissions or data. Still, the immediate takeaway is concrete: with MCP, developers can expose their existing data and services through a standardized interface that lets Claude act on real systems with structured, validated inputs.
Cornell Notes
Model Context Protocol (MCP) is a standard for connecting AI clients (like Claude Desktop) to external systems through a server that exposes two things: resources (read-only data fetches) and tools (actions that can change state). The tutorial builds an MCP server backed by a storage bucket for images, a PostgreSQL database for profile/relationship data, and an existing REST API for side-effect operations. Zod schema validation is used to enforce the expected input/output shapes so the model is less likely to send incorrect arguments. Once configured in the client, Claude can fetch database-backed context and, after permission, call tools to write updates, making LLM apps more interoperable and less custom-integration heavy.
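The interoperability claim rests on a uniform request shape: any client sends the same kind of structured call, and the server dispatches it to a resource or a tool. A toy dispatcher makes the idea concrete (the request format and handler names here are invented for illustration; the real protocol uses JSON-RPC messages over the chosen transport):

```typescript
// Toy dispatcher illustrating the uniform request shape MCP clients rely on.
// The `Request` format is invented; real MCP uses JSON-RPC 2.0 messages.
type Request =
  | { kind: "resource"; name: string }
  | { kind: "tool"; name: string; args: unknown };

// Read-only handlers (resources) return context for the model.
const resources: Record<string, () => string> = {
  "single-horses": () => JSON.stringify(["Thunder", "Biscuit"]),
};

// Side-effecting handlers (tools) validate their arguments before acting.
const tools: Record<string, (args: unknown) => string> = {
  "create-match": (args) => {
    const a = args as { a?: unknown; b?: unknown };
    if (typeof a?.a !== "number" || typeof a?.b !== "number") {
      throw new Error("malformed arguments");
    }
    return `matched ${a.a} and ${a.b}`;
  },
};

// One entry point regardless of transport: stdio locally, HTTP/SSE in deployment.
function handle(req: Request): string {
  if (req.kind === "resource") return resources[req.name]();
  return tools[req.name](req.args);
}
```

Because every client speaks this one shape, swapping the transport (or the client) leaves the resource and tool handlers untouched.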
What are "resources" and "tools" in MCP, and why does that distinction matter?
How does schema validation (Zod) improve reliability in an MCP server?
How does the tutorial connect existing app infrastructure to Claude using MCP?
What does the client configuration step do in practice?
Why is transport choice mentioned (standard IO vs HTTP/SSE) and what changes between local testing and deployment?
What does it mean that MCP can act like an "API for APIs"?
Review Questions
- In an MCP server, what would you implement as a resource versus a tool, and what risk does each category help manage?
- How does Zod schema validation reduce failure modes when Claude decides what arguments to pass to server functions?
- Describe the steps needed to make an MCP server usable from Claude Desktop, from running the server to attaching it in the client configuration.
Key Points
1. MCP standardizes how AI clients connect to external systems by exposing two server primitives: resources (read-only context) and tools (state-changing actions).
2. Anthropic's MCP is designed to reduce brittle, model-specific integrations by providing a shared protocol layer for LLM apps.
3. Zod schema validation helps prevent incorrect or hallucinated function arguments by enforcing expected input/output shapes for MCP calls.
4. An MCP server can wrap existing infrastructure (object storage, PostgreSQL, and a TypeScript REST API) so Claude can both fetch data and trigger updates.
5. Client-side support (e.g., Claude Desktop) is required; the configuration lists the MCP server(s) and the command to start each one.
6. Transport layers differ between local testing (standard IO) and deployment (HTTP/SSE), while the resource/tool interface remains the same.
7. Permission gating is essential when tools can mutate data, since LLM-driven automation can otherwise create serious data-loss or integrity risks.