
LangGraph Crash Course with code examples

Sam Witteveen · 6 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

LangGraph orchestrates LLM agents as a graph-based state machine with dynamic routing rather than a fixed linear chain.

Briefing

LangGraph is positioned as a new way to run LangChain-based LLM agents by modeling agent behavior as a graph-driven state machine rather than a fixed chain. Unlike a directed acyclic graph, the structure allows nodes to decide which node to visit next, enabling loops, tool use, and multi-step workflows that stop only when the agent reaches an explicit “finish” condition. That stateful routing matters because real agents rarely follow a straight line: they need to decide when to call tools, when to take another model turn, and when they are done.

At the core are three building blocks: a persisted state, nodes, and edges. The state persists information across the agent’s lifecycle—such as the user input, chat history, and “intermediate steps” (like prior tool actions and results). State updates can overwrite values or accumulate new items (for example, appending to a list of intermediate actions as the agent iterates). Nodes represent the functional components wired into the agent: they can be chains, runnable components, or tools. Edges connect nodes, either as hardwired transitions (always go from one node to another) or as conditional edges where an LLM-backed function decides the next hop. Conditional routing is what lets the agent choose between calling a tool, switching persona, delegating to a supervisor, or ending the run.
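The overwrite-versus-accumulate distinction can be sketched in plain Python. This is an illustrative stand-in, not LangGraph’s actual API: keys registered with a reducer (here only `intermediate_steps`, using `operator.add` as in the transcript) accumulate, while every other key is simply overwritten.

```python
import operator

# Keys with a reducer accumulate; all other keys are overwritten on update.
REDUCERS = {"intermediate_steps": operator.add}

def apply_update(state, update):
    """Merge a partial update into the state, honoring per-key reducers."""
    new_state = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key)
        if reducer:
            new_state[key] = reducer(new_state.get(key, []), value)
        else:
            new_state[key] = value
    return new_state

state = {"input": "give me a number", "chat_history": [], "intermediate_steps": []}
state = apply_update(state, {"intermediate_steps": [("random_number", 42)]})
state = apply_update(state, {"intermediate_steps": [("lower_case", "forty-two")]})
state = apply_update(state, {"agent_outcome": "finish"})   # plain overwrite
print(state["intermediate_steps"])  # [('random_number', 42), ('lower_case', 'forty-two')]
```

The same two behaviors—append for action history, overwrite for outcome signals—are what the transcript’s first example relies on.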

Once nodes and edges are defined, the graph is compiled into something that behaves like a standard LangChain runnable—supporting calls like invoke and streaming execution. An entry node defines where the run begins, and the graph’s logic determines how it progresses through tool calls and model turns until an end condition is reached.
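The compiled-graph behavior can be mimicked with a toy class—again an illustrative sketch, not LangGraph’s API. Nodes are functions returning partial state updates, and a single router function plays the role of the edges, naming the next node or `None` to stop; `stream` yields each step while `invoke` runs to completion.

```python
class TinyGraph:
    """Toy stand-in for a compiled graph with invoke and streaming."""
    def __init__(self, nodes, router, entry):
        self.nodes, self.router, self.entry = nodes, router, entry

    def stream(self, state):
        node = self.entry
        while node is not None:
            state = {**state, **self.nodes[node](state)}
            yield node, state                 # emit every step, like streaming
            node = self.router(node, state)   # edges: pick next node or None

    def invoke(self, state):                  # run to completion, return final state
        for _, state in self.stream(state):
            pass
        return state

# A one-node graph that loops on itself until the routing logic ends the run.
graph = TinyGraph(
    nodes={"agent": lambda s: {"turns": s["turns"] + 1}},
    router=lambda node, s: None if s["turns"] >= 2 else "agent",
    entry="agent",
)
print(graph.invoke({"turns": 0}))  # {'turns': 2}
```

The entry node, the per-step updates, and the router-controlled end condition map directly onto the description above.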

The transcript then walks through three progressively richer examples. The first builds a classic “agent executor” style loop using OpenAI function calling. Two simple custom tools are created: one returns a random number between 0 and 100, and another converts text to lowercase. A graph is assembled with an “agent” node that produces an action (which tool to call) and an “action/tools” node that executes the chosen tool and records results in intermediate steps. A conditional function checks whether the agent outcome is an “agent finish” (stop) or a tool action (continue). Streaming shows the alternating pattern: agent decides → tool runs → agent resumes until the final answer is produced. A second variant uses chat messages as the state instead of a separate intermediate-steps structure, binding tool functions directly to the model and using the presence of a function call in the last message to decide whether to continue.
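The first example’s loop can be re-created in plain Python. The two tools mirror the video (a random number between 0 and 100, and lowercasing), but the “agent” here is a scripted stand-in for the OpenAI function-calling model, so the routing logic is visible without any API calls.

```python
import random

def random_number(_):            # returns a random number between 0 and 100
    return random.randint(0, 100)

def lower_case(text):            # converts text to lowercase
    return text.lower()

TOOLS = {"random_number": random_number, "lower_case": lower_case}

def run_agent(state):
    """Scripted agent: call random_number, then lower_case, then finish."""
    steps = state["intermediate_steps"]
    if len(steps) == 0:
        return {"agent_outcome": ("random_number", None)}
    if len(steps) == 1:
        return {"agent_outcome": ("lower_case", f"The Number Is {steps[0][1]}")}
    return {"agent_outcome": ("finish", steps[-1][1])}

def execute_tools(state):
    tool, arg = state["agent_outcome"]
    result = TOOLS[tool](arg)    # run the chosen tool, record it in the steps
    return {"intermediate_steps": state["intermediate_steps"] + [(tool, result)]}

def should_continue(state):
    return "end" if state["agent_outcome"][0] == "finish" else "continue"

state = {"input": "random number, spelled out in lowercase", "intermediate_steps": []}
while True:                      # agent decides -> tool runs -> agent resumes
    state.update(run_agent(state))
    if should_continue(state) == "end":
        break
    state.update(execute_tools(state))
print(state["agent_outcome"][1])
```

The alternating pattern in the output trace—agent, tool, agent, tool, agent—matches the streaming behavior described above.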

The third example introduces supervision: a router-like supervisor model delegates work to multiple worker agents with different roles. One worker (“lotto manager”) uses tools to generate 10 random lottery numbers; another (“coder”) uses the PythonREPL tool to generate a histogram plot with matplotlib. The supervisor uses function calling to choose which worker acts next and when to finish, creating a multi-agent workflow that loops through workers and returns a final human-readable summary. LangSmith traces are used to inspect routing decisions, tool calls, and message sequences.

Overall, LangGraph is framed as a practical state-machine approach to agent orchestration: once the node roles, conditional edges, and hardwired return paths are clear, complex agent behaviors—tool loops, message management, and multi-persona supervision—become easier to build and debug.

Cornell Notes

LangGraph turns LLM agent logic into a graph-based state machine. A persisted state carries inputs, chat history, and (optionally) intermediate tool/action results across multiple model and tool steps. Nodes act as the agent’s components (tools, chains, or runnables), while edges wire them together—hardwired edges enforce fixed transitions and conditional edges let an LLM decide the next node or when to stop. Compiling the graph produces a runnable that supports invoke and streaming. Examples show a tool-calling loop (agent → tool → agent until “agent finish”), a message-list variant that decides continuation based on function calls in the last message, and a supervisor setup that routes between multiple worker agents (lotto manager and coder) until the supervisor returns “finished.”

How does LangGraph’s “graph” differ from a simple chain or a fixed directed workflow?

LangGraph uses nodes connected by edges where transitions can be conditional. Nodes can decide which node to visit next, enabling loops and dynamic routing rather than a one-pass sequence. The transcript describes this as “not always directed” and “not a DAG,” because the next step depends on decisions made during execution (often via an LLM-backed function). That’s why it behaves like a state machine: the graph tracks what state the agent is in, runs a chain/tool for that state, then routes back or onward until an explicit completion condition is met.
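The cycle that a one-pass DAG cannot express is easy to see in miniature. In this sketch (plain Python, not the library API) a routing function sends control back to the same node until the state satisfies a finish condition:

```python
def work(state):
    """A node: do one unit of work and update the state."""
    return {**state, "count": state["count"] + 1}

def route(state):
    """A conditional edge: loop back to 'work' until the finish condition holds."""
    return "end" if state["count"] >= 3 else "work"

state = {"count": 0}
node = "work"
while node != "end":             # the loop a fixed chain cannot represent
    state = work(state)
    node = route(state)
print(state["count"])  # 3
```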

What does “state” mean in LangGraph, and what kinds of updates happen to it?

State is persisted across the agent’s lifecycle and passed between nodes. In the first notebook-style example, the state includes input, chat history, and intermediate steps. Intermediate steps are accumulated by appending agent actions and tool results (using an operator.add pattern). State updates can overwrite values (like agent outcome/finish signals) or add to collections (like growing a list of intermediate actions). In the second example, state is simplified to a list of messages, and intermediate steps are no longer stored separately.

How do conditional edges decide whether the agent should call a tool again or stop?

Conditional edges rely on a function that inspects the most recent agent outcome (or last message). In the first example, a should_continue function checks whether the agent outcome is an “agent finish.” If it’s a finish, the graph ends; otherwise it routes to the action/tools node to execute the next tool and then returns to the agent node. In the message-list variant, the should_continue logic checks whether the last message contains a function call; if not, it ends immediately.

What’s the practical difference between the first tool-loop example and the second message-based example?

Both implement the same core loop: agent decides a tool call → tool executes → agent continues until finish. The difference is how state is represented and how continuation is detected. The first example persists intermediate steps and uses an agent outcome object to decide finish vs continue. The second example uses a chat model with a messages list as state, binds tool functions to the model, and decides continuation by detecting function calls in the last message. It also highlights message management options like limiting context length by keeping only the last N messages.
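The message-based continuation check is compact enough to sketch directly. The dicts below are simplified stand-ins for LangChain message objects; the logic mirrors the second example’s `should_continue`, which keys off a function call in the last message.

```python
def should_continue(messages):
    """Continue while the last message carries a function call; otherwise end."""
    return "continue" if messages[-1].get("function_call") else "end"

messages = [{"role": "user", "content": "random number, please"}]
messages.append({"role": "assistant", "function_call": {"name": "random_number"}})
assert should_continue(messages) == "continue"   # tool call pending -> keep going

messages.append({"role": "function", "name": "random_number", "content": "42"})
messages.append({"role": "assistant", "content": "Your number is 42."})
assert should_continue(messages) == "end"        # plain answer -> stop

# Context management mentioned in the example: keep only the last N messages.
recent = messages[-3:]
print(len(recent))  # 3
```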

How does the supervisor pattern work in the multi-agent example?

A supervisor agent acts as a router that decides which worker agent should run next and when to finish. The supervisor uses function calling with a constrained set of options (e.g., lotto manager, coder, finish). After each worker responds, edges route back to the supervisor, and the supervisor’s conditional routing selects the next worker or ends the workflow. In the example, the lotto manager generates 10 random numbers using the random number tool, while the coder uses PythonREPL to generate a matplotlib histogram, and the supervisor returns a final summary.
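The supervisor loop can also be sketched with scripted stand-ins. Here a plain function replaces the function-calling supervisor model, and the worker names mirror the example; the key shape to notice is that every worker’s output routes back to the supervisor, which picks the next worker or returns FINISH.

```python
import random

WORKERS = ["lotto_manager", "coder"]

def supervisor(state):
    """Delegate to the first worker not yet heard from, else finish."""
    done = {m["name"] for m in state["messages"] if "name" in m}
    for worker in WORKERS:
        if worker not in done:
            return worker
    return "FINISH"

def lotto_manager(state):        # generates 10 random lottery numbers
    return {"name": "lotto_manager",
            "content": str([random.randint(1, 99) for _ in range(10)])}

def coder(state):                # stands in for the PythonREPL/matplotlib worker
    return {"name": "coder", "content": "rendered a histogram of the numbers"}

NODES = {"lotto_manager": lotto_manager, "coder": coder}
state = {"messages": [{"role": "user", "content": "lotto numbers, then a histogram"}]}
while (choice := supervisor(state)) != "FINISH":   # worker -> supervisor loop
    state["messages"].append(NODES[choice](state))
print([m["name"] for m in state["messages"][1:]])  # ['lotto_manager', 'coder']
```

In the real example the supervisor’s choice comes from constrained function calling rather than a hand-written rule, but the routing topology is the same.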

Review Questions

  1. In LangGraph, what information should be stored in state to support multi-step tool use, and how does that differ between the intermediate-steps approach and the messages-list approach?
  2. Describe the role of conditional edges in preventing infinite loops. What signals are checked to decide between continuing tool execution and ending with an agent finish?
  3. In the supervisor setup, how does the router determine whether to delegate to lotto manager, coder, or finish, and how does the graph ensure control returns to the supervisor after each worker run?

Key Points

  1. LangGraph orchestrates LLM agents as a graph-based state machine with dynamic routing rather than a fixed linear chain.

  2. Persisted state can include chat history and intermediate tool/action results, and it can be overwritten or appended to as the agent runs.

  3. Nodes are the agent’s components (tools, chains, or runnables), while edges define transitions—hardwired for fixed returns and conditional for LLM-driven decisions.

  4. Compiling a graph produces a runnable that supports invoke and streaming, with an entry node and an end condition controlled by the routing logic.

  5. A tool-calling loop can be implemented as agent → tool/action → agent, with conditional edges stopping when an “agent finish” outcome is reached.

  6. A message-list variant simplifies state by storing messages and deciding continuation based on whether the last message contains a function call.

  7. Multi-agent supervision uses a router/supervisor that selects among worker agents (and finish) via function calling, looping through workers until completion.

Highlights

  • LangGraph treats agent behavior like a state machine: nodes run, edges route, and the graph ends only when the agent reaches a finish condition.
  • Conditional edges are the mechanism for tool selection and stopping—often driven by inspecting the agent outcome or the last message’s function call.
  • The supervisor example demonstrates multi-agent orchestration: a router delegates to a lotto manager and a coder, then returns a final human-readable result after “finished.”
  • Switching from intermediate-steps state to a messages list changes how continuation is detected, but preserves the same agent/tool/agent loop structure.

Topics

  • LangGraph Crash Course
  • Agent State Machine
  • Tool Calling Loops
  • Conditional Edges
  • Multi-Agent Supervision
