Tutorial 3- Live Build Agentic AI Chatbot Using LangGraph
Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A LangGraph “react” workflow can turn a chat model into a tool-using agent that decides—turn by turn—whether to answer directly or call external systems like arXiv search, Wikipedia, and web search. The practical takeaway is that the model’s output can include structured “tool calls,” and LangGraph routes execution accordingly, letting multi-step questions trigger multiple tool invocations before producing a final response.
The session starts by laying out the chatbot’s architecture: an AI assistant node (the “brain”) connected to a tool node that bundles multiple tools. The reasoning/acting pattern—often called “ReAct” (reasoning + acting)—is used to justify the control flow. When a user asks something the model can’t reliably answer from its own context (e.g., “recent AI news” or “latest research paper on quantum computing”), the assistant issues a tool call. After the tool returns results, the assistant incorporates those outputs and produces the final message.
To make this concrete, three tools are assembled using LangChain tool wrappers: 1) an arXiv research-paper search tool (via an API wrapper and a query runner), 2) a Wikipedia search tool (configured with top-k results and a maximum character limit), and 3) a Tavily-based web search tool (configured through a Tavily API key).
The model is then initialized with Groq via LangChain’s ChatGroq integration. Tools are attached to the model using a binding step (the model is “bound” to the tool list), so the model can emit tool-call instructions instead of plain text. Early tests show the difference clearly: for a query requiring current information, the model returns a tool call (including the tool name and JSON arguments) rather than a direct answer. For simpler questions (like “What is AI?”), the model may answer without tool usage.
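The behavioral difference can be illustrated without any provider: a tool-bound model’s reply either carries plain text or a structured tool-call payload. A framework-free sketch (the `fake_bound_model` function and its keyword-based routing rule are illustrative stand-ins, not LangChain APIs):

```python
# Framework-free sketch of what "binding tools" buys you: the model's
# reply can carry structured tool calls instead of plain text.
# fake_bound_model and its keyword heuristic are illustrative stand-ins.

def fake_bound_model(user_message: str) -> dict:
    """Return an AI message dict: either a direct answer or a tool call."""
    needs_fresh_data = any(
        k in user_message.lower() for k in ("recent", "latest", "news")
    )
    if needs_fresh_data:
        return {
            "content": "",
            "tool_calls": [
                {"name": "tavily_search", "args": {"query": user_message}}
            ],
        }
    return {
        "content": "AI is the field of building systems that act intelligently.",
        "tool_calls": [],
    }

direct = fake_bound_model("What is AI?")
routed = fake_bound_model("recent AI news")
print(direct["tool_calls"])              # [] -> model answered directly
print(routed["tool_calls"][0]["name"])   # tavily_search -> route to tools
```

A real bound model makes this decision from learned behavior rather than keywords, but the output shape (empty vs. populated `tool_calls`) is the same signal the graph routes on.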
Next comes the LangGraph implementation. A state schema is defined to carry conversation messages across nodes. The key mechanism is an “add messages” reducer, which appends new human/AI/tool messages rather than overwriting prior context. The graph itself has two main nodes: an LLM tool-calling node and a prebuilt tool node. Edges connect the start state to the LLM node, then use a conditional “tool condition” to decide whether to route to the tool node or end the workflow. After tool execution, the graph returns to the assistant logic as needed, enabling multi-tool sequences.
The session demonstrates the control flow with example prompts: a single-sentence request triggers one tool call (e.g., arXiv for a research paper lookup), while a compound prompt like “recent AI news” followed by “recent research paper on quantum computing” can cause the agent to call different tools in sequence—first Tavily for news, then arXiv for research papers—before responding. The session closes by positioning MCP (Model Context Protocol) as the next step to replace or extend tool integration patterns, while emphasizing that Python knowledge and framework familiarity are key prerequisites.
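The compound-prompt behavior reduces to a simple loop: run the assistant, execute whichever tool it requests, append the tool result to the messages, and repeat until the assistant answers without a tool call. A framework-free sketch (the scripted assistant and canned tool results are illustrative stand-ins for real model output and real search APIs):

```python
# Framework-free sketch of the assistant <-> tools loop on a compound
# prompt: Tavily is called first (news), then arXiv (papers), then the
# assistant produces the final answer. All behavior here is scripted.

def scripted_assistant(messages):
    """Emit tool calls until both lookups are done, then answer."""
    seen = {m["name"] for m in messages if m.get("role") == "tool"}
    if "tavily_search" not in seen:
        return {"role": "ai",
                "tool_call": {"name": "tavily_search",
                              "args": {"query": "recent AI news"}}}
    if "arxiv" not in seen:
        return {"role": "ai",
                "tool_call": {"name": "arxiv",
                              "args": {"query": "quantum computing"}}}
    return {"role": "ai", "content": "Summary combining news and papers."}

CANNED_RESULTS = {"tavily_search": "3 news items", "arxiv": "1 recent paper"}

def run(prompt):
    messages = [{"role": "user", "content": prompt}]
    call_order = []
    while True:
        ai = scripted_assistant(messages)
        messages.append(ai)                 # state appends, never overwrites
        call = ai.get("tool_call")
        if call is None:                    # no tool call -> END
            return call_order, ai["content"]
        call_order.append(call["name"])     # route to the tool node
        messages.append({"role": "tool", "name": call["name"],
                         "content": CANNED_RESULTS[call["name"]]})

order, answer = run("recent AI news and the latest quantum computing paper")
print(order)   # ['tavily_search', 'arxiv']
```

This is the same control flow LangGraph realizes with the tool-node-to-assistant edge: each pass through the loop is one reasoning/acting step.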
Cornell Notes
The core idea is to build a LangGraph “ReAct” chatbot that can either answer directly or call external tools based on the user’s input. Tools (arXiv research-paper search, Wikipedia search, and Tavily web search) are wrapped and then bound to a Groq chat model so the model can emit structured tool-call requests. LangGraph manages execution with a state that appends messages via an “add messages” reducer, and routing logic that checks whether the latest assistant output is a tool call. Conditional edges send the workflow to a tool node when needed, enabling multi-step questions to trigger multiple tool calls before the final response.
How does the agent decide when to call a tool instead of answering directly?
What role does the state schema and “add messages” reducer play in LangGraph?
Why bind tools to the LLM, and what does the model output look like when tools are required?
How are multiple tools combined into a single workflow?
How does the workflow support multi-step questions that require different tools?
Review Questions
- When the assistant output contains a tool call, what mechanism in LangGraph routes execution to the tool node?
- What problem does the “add messages” reducer solve compared with overwriting the messages state?
- If a user asks a question that doesn’t require external data, what path does the graph take (tool node vs end)?
Key Points
1. Attach multiple external tools to a chat model by binding a tools list, so the model can emit structured tool-call requests.
2. Use LangGraph conditional routing (“tool condition”) to decide whether to execute tools or end the workflow based on the latest assistant output.
3. Represent conversation and tool results in a shared state using a messages list that appends via an “add messages” reducer.
4. Bundle all tool providers into a single LangGraph tool node so the workflow can execute whichever tool the model requests.
5. Configure each tool with practical limits (e.g., top-k results and maximum returned characters) to keep outputs usable for the model.
6. For compound prompts, design the graph so it can loop through reasoning/acting steps, enabling sequential tool calls (e.g., Tavily then arXiv).