Build AI Assistant With MCP Servers And Tools Using LangChain And Groq
Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Briefing
Model Context Protocol (MCP) is positioned as a way to connect large language models to third-party capabilities—like browser automation and hotel search—through a standard interface, so developers don’t have to write custom integration code for every service. The practical payoff: once an MCP host (such as Cursor) is configured with the right server endpoints, the host can route requests to multiple MCP servers managed by service providers, and those providers handle updates on their side.
The tutorial starts by reframing MCP’s core architecture. An MCP host (the AI-enabled IDE) creates an MCP client internally. That client reads configuration describing which MCP servers it can talk to, then uses the MCP protocol to exchange context and tool calls. The LLM remains the reasoning layer: without an LLM consuming the retrieved context, outputs won’t be useful. The key operational idea is that the “services” (the MCP servers) run externally and are maintained by the providers, while the developer mainly manages configuration and the application logic that orchestrates an agent.
After setting the stage, the walkthrough shifts into implementation from scratch using Python tooling and LangChain. The environment is created with uv (a fast Python package manager), and the project installs LangChain with Groq support (and optionally OpenAI). A dedicated MCP client library is introduced: mcp-use, an open-source unified MCP client library from Pietro Zullo that can connect any LLM to MCP servers and build custom agents with tool access, without relying on closed-source clients.
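A minimal setup sketch with uv, assuming the dependency names implied by the tutorial (langchain-groq for the Groq-backed LLM, mcp-use for the MCP client, python-dotenv for loading the API key); exact names and versions should be checked against each project's docs:

```bash
# create a new project and virtual environment with uv
uv init mcp-demo
cd mcp-demo
uv venv
source .venv/bin/activate

# add the LLM and MCP client dependencies
uv add langchain-groq mcp-use python-dotenv
```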
The build then becomes concrete: a Cursor-based MCP host is installed and used as the development environment. A project is initialized (uv init), a virtual environment is created (uv venv), and dependencies are added (langchain-groq and the MCP client library). An app.py is created to run a chatbot agent that loads an MCP server configuration JSON file (for example, browser_mcp.json). That JSON file specifies how to launch MCP servers, often via npx commands, while credentials such as the Groq API key are supplied separately (for example, through a .env file that the application loads).
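As a sketch, a single-server browser_mcp.json might look like the following; @playwright/mcp is Microsoft's published Playwright MCP package, but the exact launch arguments here are an assumption to verify against its README:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```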
The tutorial demonstrates multi-server integration by stacking configurations for different providers. A Playwright MCP server (from Microsoft) provides browser automation so the agent can open sites and navigate pages using structured accessibility snapshots. Additional MCP servers are added for services like Airbnb hotel search and DuckDuckGo web search. With these configured, the agent can route tasks to the appropriate tool: Playwright handles “open google.com then navigate,” while Airbnb handles “find hotels in New York for specific dates,” and DuckDuckGo handles “top AI news.”
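Stacking providers is then just more entries in the same mcpServers map. The Airbnb and DuckDuckGo package names below (@openbnb/mcp-server-airbnb, duckduckgo-mcp-server) are representative community-published servers, not confirmed from the video, so treat them as placeholders to check against each provider's documentation:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb"]
    },
    "duckduckgo-search": {
      "command": "npx",
      "args": ["-y", "duckduckgo-mcp-server"]
    }
  }
}
```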
Finally, the host-side experience is shown in Cursor: after updating Cursor’s MCP settings with the server list, the IDE can execute tool calls directly from chat. The same configuration is then used in app.py via an MCP client that reads the JSON, initializes the LLM (via Groq), enables conversation memory, and runs an interactive loop. The result is a working chatbot that can combine web navigation, web search, and travel listings through MCP—without building each integration from scratch.
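A minimal app.py sketch along these lines, using mcp-use's MCPClient and MCPAgent together with langchain-groq; the model name and agent parameters are assumptions for illustration, not the video's exact code:

```python
import asyncio

from dotenv import load_dotenv
from langchain_groq import ChatGroq
from mcp_use import MCPAgent, MCPClient


async def main():
    load_dotenv()  # expects GROQ_API_KEY in a local .env file

    # The MCP client reads the same JSON used for Cursor's MCP settings
    client = MCPClient.from_config_file("browser_mcp.json")

    # Groq-hosted LLM; "qwen-qwq-32b" is an example model name
    llm = ChatGroq(model="qwen-qwq-32b")

    # Agent with conversation memory enabled across turns
    agent = MCPAgent(llm=llm, client=client, max_steps=15, memory_enabled=True)

    try:
        while True:
            user_input = input("\nYou: ").strip()
            if user_input.lower() in {"exit", "quit"}:
                break
            if user_input.lower() == "clear":
                agent.clear_conversation_history()
                continue
            response = await agent.run(user_input)
            print(f"Assistant: {response}")
    finally:
        # shut down all spawned MCP server sessions on exit
        if client.sessions:
            await client.close_all_sessions()


if __name__ == "__main__":
    asyncio.run(main())
```

Because the same browser_mcp.json drives both Cursor and this script, adding a server in one place makes it available in both contexts.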
Cornell Notes
MCP standardizes how an AI host (like Cursor) connects an LLM to external tool services (MCP servers) such as browser automation and web search. The host runs an MCP client that reads a JSON configuration listing which servers to connect to; those servers are maintained by their providers. Using LangChain with Groq, the tutorial builds an agent that loads the MCP configuration, initializes an LLM, and routes user requests to the right tools. The same MCP setup works both inside the IDE (Cursor chat tool execution) and in a standalone Python app (app.py). This matters because it reduces custom integration work: adding or updating capabilities is mostly configuration-driven rather than code-heavy.
What problem is MCP meant to solve in an LLM application, and what stays developer-managed versus provider-managed?
How does the MCP host decide which tools (servers) it can use?
Why is Playwright used in this setup, and what does it enable the agent to do?
How does the agent handle different tasks using different MCP servers?
What role do LangChain and Groq play in the chatbot implementation?
How is the same MCP configuration used both in Cursor and in a standalone Python app?
Review Questions
- What components make up the MCP connection chain from an IDE to external tools, and where does configuration fit in?
- Describe how Playwright MCP differs from Airbnb MCP and DuckDuckGo MCP in terms of the tasks each tool supports.
- In the LangChain + Groq agent setup, what is the purpose of the MCP client and how does it interact with the LLM during a conversation?
Key Points
1. MCP standardizes tool access for LLMs by connecting an MCP host to provider-managed MCP servers through an MCP client and configuration.
2. An MCP host (e.g., Cursor) creates an MCP client internally; the client reads a JSON configuration to learn which servers to connect to.
3. Provider-managed MCP servers handle service-specific capabilities and updates, reducing the need for developers to rewrite integrations.
4. Using LangChain with Groq, an MCP-enabled agent can route user requests to the correct tool: browser automation via Playwright, travel search via Airbnb, and web search via DuckDuckGo.
5. Playwright MCP enables browser navigation by interacting with web pages and returning structured accessibility snapshots to the LLM.
6. Adding new capabilities is largely a matter of updating the MCP server configuration and ensuring the host can launch the specified servers (often via npx).