How to Build Local MCP Servers | MCP Trilogy | CampusX
Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Build a local MCP server incrementally: validate the MCP tool/run/integrate loop with a simple demo server before swapping in business logic.
Briefing
Local MCP servers are the practical on-ramp to building a useful “chat-to-database” workflow: type an expense in natural language from Claude Desktop, have an MCP server parse it, store it in a local SQLite database, and answer follow-up questions like totals by date range or category. The walkthrough turns that idea into a step-by-step build, starting with a deliberately simple demo server and then evolving it into an expense tracker with three core tools: add expense, list expenses, and summarize spending.
The session begins by placing the project inside a larger MCP trilogy: earlier parts covered why MCP matters and how MCP architecture and lifecycle work, while this installment focuses on local servers only. The next installment will move the same server to a remote host, and a later part will cover building MCP clients. To keep the learning curve manageable, the build targets an “intermediate” server that’s still useful—an expense tracker that can be managed through chat.
A live demo shows the end-state behavior. From Claude Desktop, the user can type commands like “Add 500 travel expense for cab ride yesterday.” The server interprets amount, category, and date, inserts a transaction into the database, and then supports natural-language retrieval such as “Show all expenses from September,” automatically deriving a start and end date and returning a table with totals. More complex questions work too: “Summarize total spend on education in the last 10 days,” or “What was my total expense on education last week,” with category filtering and computed totals.
After the demo, the build plan is laid out as incremental iterations. First comes a basic MCP server (a calculator-style example with “roll dice” and “add numbers”) to learn installation, running, and integration. The tutorial then introduces the practical tooling choices: MCP is a protocol, but writing everything from scratch is complex and redundant, so developers rely on libraries. Confusion around “MCP SDK” versus “FastMCP” is addressed through a timeline: Anthropic’s MCP SDK arrived first (with server, client, and CLI components), FastMCP abstracted away the boilerplate to make server creation beginner-friendly, and FastMCP later evolved into FastMCP 2.0 as an independent library. The takeaway is operational: either install the MCP CLI (which pulls in the MCP SDK) or install FastMCP (to use FastMCP 2.0), and the code patterns remain largely aligned.
The local build uses uv for faster Python package management, initializes a project folder, installs FastMCP, and writes the server code in main.py. Tools are created by defining Python functions and decorating them as MCP tools. The server is tested with MCP Inspector, which verifies the transport type (stdio for local setups) and shows the JSON-RPC messages for tool calls.
Integration with Claude Desktop is handled by installing the server via a uv run command. A common failure mode, Claude Desktop failing to connect because the uv path is not absolute, is resolved by replacing “uv” with its full absolute path in the install command. Once connected, the demo server is replaced with the expense tracker.
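The install step ultimately produces an entry in Claude Desktop’s config file (claude_desktop_config.json); a sketch of what the entry looks like with the absolute-path fix applied (all paths below are placeholders for your machine):

```json
{
  "mcpServers": {
    "expense-tracker": {
      "command": "/Users/you/.local/bin/uv",
      "args": ["run", "--with", "fastmcp", "fastmcp", "run", "/path/to/project/main.py"]
    }
  }
}
```

If `"command"` is just `"uv"`, Claude Desktop may not find the binary because it does not inherit your shell’s PATH; the absolute path avoids that.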
The expense tracker stores transactions in a local SQLite database (expenses.db). It creates an expenses table with fields for id, date, amount, category, subcategory, and note. The first version implements two tools: add expense and list all expenses. Then list is upgraded to accept start_date and end_date parameters, enabling “last week” and “last month” style queries.
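The tool bodies can be sketched with Python’s built-in sqlite3 module; function and column names below follow the schema described above, but the exact code is an assumption:

```python
import sqlite3

DB_PATH = "expenses.db"  # created in the project folder on first use


def init_db() -> None:
    """Create the expenses table if it does not exist yet."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS expenses (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   date TEXT NOT NULL,
                   amount REAL NOT NULL,
                   category TEXT NOT NULL,
                   subcategory TEXT DEFAULT '',
                   note TEXT DEFAULT ''
               )"""
        )


def add_expense(date: str, amount: float, category: str,
                subcategory: str = "", note: str = "") -> int:
    """Insert one expense row and return its id."""
    with sqlite3.connect(DB_PATH) as conn:
        cur = conn.execute(
            "INSERT INTO expenses (date, amount, category, subcategory, note) "
            "VALUES (?, ?, ?, ?, ?)",
            (date, amount, category, subcategory, note),
        )
        return cur.lastrowid


def list_expenses(start_date: str, end_date: str) -> list[tuple]:
    """Return rows whose date falls in [start_date, end_date]."""
    with sqlite3.connect(DB_PATH) as conn:
        cur = conn.execute(
            "SELECT id, date, amount, category, subcategory, note "
            "FROM expenses WHERE date BETWEEN ? AND ? ORDER BY date",
            (start_date, end_date),
        )
        return cur.fetchall()
```

Storing dates as ISO-8601 strings (YYYY-MM-DD) makes `BETWEEN` comparisons behave correctly, which is what lets the model translate “last week” into a start/end pair.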
A third tool, summarize, computes totals by category within a date range, using SQL aggregation (SUM) and optional category filtering. Finally, data consistency is improved by adding an MCP resource backed by a JSON file of allowed categories and subcategories, so the model selects from a controlled vocabulary rather than inventing variations like “education” vs “upskilling.” The result is a chat-driven expense system that feels far less tedious than manual forms.
The session closes by connecting MCP to FastAPI: FastMCP is designed to be compatible with FastAPI, enabling a company to convert an existing FastAPI backend into an MCP server with minimal extra code. A demo shows wrapping an existing API app into an MCP server so the same endpoints become chat tools, reducing development time and enabling multi-platform access (web, mobile, and chat clients).
Cornell Notes
The walkthrough builds a local MCP server that turns natural-language expense entries into structured database records and then answers spending questions. It starts with a tiny “demo server” (dice roll and add numbers) to learn installation, running, MCP Inspector debugging, and Claude Desktop integration over stdio. Then it evolves the server into an expense tracker using FastMCP and a local SQLite database, implementing tools to add expenses, list expenses within a date range, and summarize totals by category. To keep analytics reliable, it adds a categories/subcategories JSON resource so entries use a consistent schema rather than free-form labels. The final section shows how FastMCP can wrap an existing FastAPI app into an MCP server, cutting development effort for multi-platform products.
Why does the tutorial start with a calculator-style MCP server before building the expense tracker?
What are the expense tracker’s core tools, and what does each one do?
How does the server store data locally, and what schema does it create?
How does the tutorial improve reliability of categories and subcategories over time?
What’s the purpose of MCP Inspector in this workflow?
How does FastMCP relate to FastAPI in the tutorial’s business-oriented example?
Review Questions
- What steps in the tutorial are necessary to confirm an MCP server is working before integrating it with Claude Desktop?
- How does the summarize tool change its SQL behavior when a category is provided versus when it’s omitted?
- Why does constraining categories via a JSON resource matter for later expense analytics?
Key Points
1. Build a local MCP server incrementally: validate the MCP tool/run/integrate loop with a simple demo server before swapping in business logic.
2. Use FastMCP with uv to reduce boilerplate when creating MCP servers and tools.
3. Test MCP servers with MCP Inspector to verify stdio transport and inspect JSON-RPC tool calls and results.
4. Store transactions in a local SQLite database and initialize tables programmatically to keep the server self-contained.
5. Upgrade listing from “all rows” to date-range queries by adding start_date and end_date parameters and applying SQL WHERE filters.
6. Prevent category drift by constraining category/subcategory inputs with an MCP resource backed by a JSON vocabulary file.
7. Leverage FastAPI compatibility: wrap an existing FastAPI app into an MCP server to expose the same backend functionality to chat clients with less development effort.