How to build MCP Client using LangGraph | Agentic AI using LangGraph | CampusX
Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Agentic AI tool integrations get brittle fast when every chatbot hard-codes custom “tool” wrappers for each external service. MCP (Model Context Protocol) is presented as a cleaner, more maintainable way to connect LLM apps to tools by separating the heavy, service-specific logic on a server from the lightweight client configuration inside the LangGraph app—so API changes on the tool side don’t force repeated client rewrites.
The walkthrough starts with a practical pain point. A LangGraph chatbot already uses three tools: an internet search tool, a calculator tool, and a “get stock price” tool. When a manager asks for GitHub-backed question answering—like listing pull requests from GitHub repositories—the usual approach is to create a custom user-defined tool. That tool needs inputs such as repository owner, repository URL, pull request state (open/closed), and how many pull requests to return. It also requires GitHub authentication via a token and additional headers, then calls GitHub’s REST API, parses JSON, and formats results.
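A minimal sketch of such a hard-coded wrapper (function and parameter names are illustrative, not from the original chatbot; the endpoint details follow GitHub's current REST conventions):

```python
# Sketch of the brittle "tool code inside the chatbot" approach.
# Every GitHub-specific detail (base URL, path, header names, response
# field names) is hard-coded, so any upstream API change forces an edit.

def build_pr_request(owner: str, repo: str, state: str = "open",
                     per_page: int = 10, token: str = "") -> tuple:
    """Assemble URL, headers, and query params for listing pull requests."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls"  # breaks if the path changes
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",  # version-pinned media type
    }
    params = {"state": state, "per_page": per_page}
    return url, headers, params

def format_prs(payload: list) -> list:
    """Extract the fields the chatbot reports; breaks if GitHub renames them."""
    return [{"number": pr["number"], "title": pr["title"]} for pr in payload]
```

The actual HTTP call and JSON parsing would sit on top of these helpers, hard-coding yet more API details into the chatbot.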
The fragility shows up when GitHub changes its API. A major version shift (from API 1.0 to API 2.0) can alter URL paths and response fields (for example, renaming attributes). With the “tool code inside the chatbot” approach, the chatbot breaks immediately and developers must update the tool wrapper. The maintenance burden multiplies: one API change can require edits across many tool implementations, and even more so across multiple chatbots and additional integrations like Gmail, Slack, or other services.
MCP is introduced as the fix for that maintenance problem. Instead of embedding service-specific code directly in each chatbot, MCP runs tool logic on an MCP server and exposes standardized tool definitions to clients. The LangGraph side holds only configuration needed to connect to the MCP server. When the server updates for upstream API changes, the client configuration remains stable, because the client doesn’t depend on GitHub’s evolving API details.
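In code, that separation of concerns reduces the chatbot side to a connection dictionary. A sketch in the shape used by langchain-mcp-adapters' MultiServerMCPClient (the server name and script path are illustrative):

```python
# The chatbot holds only connection settings; all GitHub-specific logic
# (auth, endpoints, response parsing) lives in the MCP server process.
server_config = {
    "github": {
        "command": "python",              # how to launch the local server
        "args": ["github_mcp_server.py"],
        "transport": "stdio",             # local subprocess transport
    },
}
# With langchain-mcp-adapters, this dict is passed to
# MultiServerMCPClient(server_config), and the client fetches the
# standardized tool definitions via `await client.get_tools()`.
```

When GitHub ships a breaking change, only `github_mcp_server.py` is updated; this configuration, and every chatbot that uses it, stays the same.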
After the conceptual case, the coding section demonstrates building an MCP client inside LangGraph. The existing LangGraph chatbot is first converted from synchronous to asynchronous execution because the MCP client library used for the integration (langchain-mcp-adapters) exposes an async API. The original calculator tool node is then replaced with an MCP client that connects to a locally running MCP math server (built with a server library such as FastMCP from the official MCP Python SDK). The client fetches the available tools from the server—addition, subtraction, multiplication, division, power, and modulus—and binds them to the LangGraph LLM.
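A sketch of both sides, assuming FastMCP (from the official MCP Python SDK) on the server and langchain-mcp-adapters on the client; the library wiring is left in comments so only the tool bodies execute here:

```python
# MCP math server sketch. In the real server each function is registered
# with @mcp.tool() so clients can discover and call it by name.
# from mcp.server.fastmcp import FastMCP
# mcp = FastMCP("Math")

# @mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

# @mcp.tool()
def divide(a: float, b: float) -> float:
    """Divide a by b."""
    return a / b

# @mcp.tool()
def power(a: float, b: float) -> float:
    """Raise a to the power b."""
    return a ** b

# if __name__ == "__main__":
#     mcp.run(transport="stdio")   # serve the tools to local clients

# Client side (async, which is why the LangGraph app is converted):
# tools = await client.get_tools()        # add, divide, power, ...
# llm_with_tools = llm.bind_tools(tools)  # LLM can now call MCP tools
```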
A second demo adds a remote MCP server for expense tracking (deployed over HTTP with a “streamable HTTP” transport). The same LangGraph chatbot can now list, add, and summarize expenses by simply extending the MCP client configuration with the remote server URL and transport settings—no new tool code inside the chatbot.
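Extending the client to the remote expense server is again configuration only. A sketch with an illustrative URL, using the "streamable_http" transport name from langchain-mcp-adapters connection configs:

```python
# Adding a remote MCP server changes configuration, not chatbot code.
server_config = {
    "math": {
        "command": "python",
        "args": ["math_server.py"],
        "transport": "stdio",              # local subprocess
    },
    "expenses": {
        "url": "https://example.com/mcp",  # deployed server endpoint (illustrative)
        "transport": "streamable_http",    # remote HTTP transport
    },
}
# Re-creating MultiServerMCPClient(server_config) now surfaces the
# expense tools (list, add, summarize) alongside the math tools.
```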
Finally, the lesson expands into a mixed architecture: the chatbot can use both traditional LangGraph tools and MCP-based tools together. A larger Streamlit + LangGraph + SQLite setup is adapted to support MCP clients, with additional async-compatible components (like an async SQLite layer) and an async streaming function. The result is a chatbot that can answer questions using both web/search tools and MCP-served capabilities, with a strong emphasis on future-proofing and reducing integration churn.
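Mixing the two kinds of tools comes down to concatenating the tool lists before binding them to the LLM. A sketch with placeholder names (`client` stands for the MCP client, `local_tools` for the traditional LangGraph tools):

```python
# Sketch: combine classic in-process LangGraph tools with MCP-served
# tools. The MCP tools are fetched at startup, not hard-coded, so the
# list grows automatically when servers add capabilities.

async def build_tool_list(client, local_tools):
    """Merge locally defined tools with tools advertised by MCP servers."""
    mcp_tools = await client.get_tools()   # discovered over MCP
    return list(local_tools) + list(mcp_tools)

# In the chatbot:
# all_tools = await build_tool_list(client, [search_tool])
# llm_with_tools = llm.bind_tools(all_tools)
```

Because `get_tools` is a coroutine, every piece of the pipeline that touches it—graph nodes, the SQLite layer, the streaming function—has to be async-compatible, which is why the larger Streamlit setup is reworked.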
Cornell Notes
MCP (Model Context Protocol) is positioned as a standardized way to connect LLM apps to external tools without hard-coding brittle service-specific wrappers inside each chatbot. The transcript contrasts a custom “tool per integration” approach—where GitHub API changes can break the chatbot—with MCP’s separation of concerns: tool logic runs on an MCP server, while the LangGraph app keeps only lightweight client configuration. In the implementation, the LangGraph chatbot is converted to async because the MCP client library requires async execution. The calculator tool is replaced by an MCP client that fetches tool definitions from a local MCP math server, then binds those tools to the LLM. The same pattern extends to remote MCP servers (e.g., an expense tracker over HTTP), enabling new capabilities by configuration rather than rewriting chatbot tool code.
- Why do custom “tool wrappers” become a maintenance problem as integrations grow?
- What is the core MCP idea that prevents client-side breakage?
- Why does the LangGraph code need to be converted to async before adding an MCP client?
- How does the MCP client in LangGraph get the tools it can call?
- What changes when adding a second MCP server (local vs remote)?
- Can a chatbot mix traditional LangGraph tools with MCP tools?
Review Questions
- What specific failure mode occurs when an external service’s API changes under the “custom tool wrapper inside the chatbot” approach?
- How does MCP’s separation of concerns reduce the number of code locations that must be updated after upstream changes?
- What async requirement affects LangGraph when integrating an MCP client, and where does that requirement show up in the code structure?
Key Points
1. Custom tool wrappers tied to external APIs break when upstream services change endpoints or response fields, creating repeated maintenance work across tools and chatbots.
2. MCP reduces brittleness by moving service-specific tool logic to an MCP server and keeping the LangGraph app as a lightweight MCP client with stable configuration.
3. The LangGraph app must be async-compatible because the MCP client library used for integration operates only in async mode.
4. In LangGraph, the MCP client can fetch tool names and tool definitions from an MCP server, then bind those tools to the LLM for tool calling.
5. Adding additional MCP servers (including remote ones) typically requires only updating the MCP client configuration (URL and transport), not rewriting chatbot tool code.
6. A single chatbot can combine traditional LangGraph tools with MCP-provided tools, enabling incremental adoption of MCP for more future-proof integrations.