Getting Started with LangGraph | Build Local Agentic Workflows and AI Agents with Ollama

Venelin Valkov · 5 min read

Based on Venelin Valkov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

LangGraph replaces brittle nested branching with explicit graph structure: nodes do single tasks and edges define execution paths.

Briefing

LangGraph is presented as a practical way to turn brittle, demo-only AI prototypes into maintainable agentic systems by replacing nested if/else logic with explicit state and a graph of small, testable steps. The core idea is to model an AI support process as a workflow graph: each node performs one job (classify, retrieve, draft, evaluate, revise), while edges define the execution path. That structure makes behavior predictable, easier to debug, and simpler to iterate—especially once the number of steps grows beyond a handful.
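To make the structure concrete, here is a minimal sketch of that graph shape in LangGraph's Python API. The node bodies are empty stubs and the names are illustrative, not the video's exact code; the state, retrieval, and routing details are filled in further below.

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict, total=False):
    ticket: str
    passed: bool  # set by the evaluate node

# Stub nodes: each performs a single job and returns a partial state update.
def classify(state: State) -> dict: return {}
def retrieve(state: State) -> dict: return {}
def draft(state: State) -> dict: return {}
def evaluate(state: State) -> dict: return {"passed": True}
def revise(state: State) -> dict: return {}

builder = StateGraph(State)
builder.add_node("classify", classify)
builder.add_node("retrieve", retrieve)
builder.add_node("draft", draft)
builder.add_node("evaluate", evaluate)
builder.add_node("revise", revise)

# Edges define the execution path: a fixed sequence plus one revision loop.
builder.add_edge(START, "classify")
builder.add_edge("classify", "retrieve")
builder.add_edge("retrieve", "draft")
builder.add_edge("draft", "evaluate")
builder.add_conditional_edges(
    "evaluate",
    lambda s: END if s.get("passed") else "revise",
    {"revise": "revise", END: END},
)
builder.add_edge("revise", "evaluate")  # revised drafts are re-evaluated

graph = builder.compile()
```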

The tutorial uses an intelligent support ticket scenario to show why workflows often beat “free-form” agent behavior at first. The workflow starts by defining a state object (a data class) holding fields like ticket classification, retrieved documents, the drafted response, evaluation feedback, and a revision counter. It then runs a sequence of language-model calls: classify the ticket text, retrieve relevant knowledge base snippets using an in-memory vector store, draft a response grounded in the retrieved context, and evaluate whether the draft fully addresses the ticket. If evaluation returns “fail,” a conditional edge routes execution back into a revision step; if it returns “pass,” the graph ends. LangGraph’s state graph also supports visual debugging via Mermaid diagrams exported as PNG, and the compiled graph can be invoked with an initial state containing the ticket text (example: “my login is broken please help”).
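A sketch of that state-and-invoke pattern, assuming a dataclass state with the fields described (LangGraph also accepts TypedDict or Pydantic schemas; the field names are guesses at the video's schema, not verbatim):

```python
from dataclasses import dataclass, field
from langgraph.graph import StateGraph, START, END

@dataclass
class TicketState:
    ticket: str = ""
    classification: str = ""
    documents: list[str] = field(default_factory=list)
    draft: str = ""
    evaluation: str = ""   # "pass" or "fail" from the evaluate node
    feedback: str = ""     # evaluator feedback consumed by the revise step
    revision_count: int = 0

def classify(state: TicketState) -> dict:
    # An LLM call in the video; hard-coded here to keep the sketch runnable.
    return {"classification": "login"}

builder = StateGraph(TicketState)
builder.add_node("classify", classify)
builder.add_edge(START, "classify")
builder.add_edge("classify", END)
graph = builder.compile()

# Visual debugging: export a Mermaid diagram rendered to PNG.
# (draw_mermaid_png may need extra dependencies or call an external renderer.)
with open("graph.png", "wb") as f:
    f.write(graph.get_graph().draw_mermaid_png())

# Invoke the compiled graph with an initial state holding just the ticket text.
result = graph.invoke({"ticket": "my login is broken please help"})
print(result)
```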

A key takeaway is the decision framework for when to use a workflow versus an AI agent. Workflows keep the developer in control: every step and branch is explicitly defined, which improves reliability and troubleshooting. Agents, by contrast, delegate tool selection and step ordering to the model, which can be harder to constrain when a business process requires a specific sequence. To demonstrate an agent conversion, the tutorial applies the Reason-Act-Observe (ReAct) pattern (referenced as proposed in a Google paper) and builds custom tools that correspond to the workflow’s nodes—classification, retrieval, and drafting. The agent runs a loop where it reasons about the ticket, calls tools in sequence, and produces a final draft response. In this agent version, state is simplified to a running list of messages, and a conditional edge determines whether the agent should continue calling tools or stop.
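A hedged sketch of that conversion using LangGraph's prebuilt ReAct agent and a local Ollama model. The tool bodies, model name, and prompt wording are stand-ins (the video builds its own tools), and the prompt keyword has changed across langgraph versions:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

@tool
def classify_ticket(text: str) -> str:
    """Classify a support ticket into a category such as 'login' or 'billing'."""
    return "login"  # placeholder; backed by an LLM call in the video

@tool
def retrieve_knowledge(query: str) -> str:
    """Return knowledge-base snippets relevant to the query."""
    return "Password resets use the 'Forgot password' link."  # placeholder

@tool
def draft_response(ticket: str, context: str) -> str:
    """Draft a support reply grounded in the retrieved context."""
    return f"Hi! Based on our docs: {context}"  # placeholder

llm = ChatOllama(model="llama3.1")  # any tool-calling model served by a local Ollama
agent = create_react_agent(
    llm,
    tools=[classify_ticket, retrieve_knowledge, draft_response],
    prompt="Classify the ticket, retrieve knowledge, then draft a reply, in that order.",
)

# Agent state is simply a running list of messages.
result = agent.invoke({"messages": [("user", "my login is broken please help")]})
print(result["messages"][-1].content)
```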

The result is two implementations of the same support system: a controlled workflow with revision cycles and evaluation gates, and a ReAct-style agent that performs tool-driven steps with less direct flow control. The closing guidance is pragmatic: start with a workflow for most real-world processes, add sub-agents only when needed, and switch to agentic autonomy only after extensive testing, debugging, and prompt/tool tuning—often with human oversight still in the loop for production-grade reliability.

Cornell Notes

LangGraph helps build maintainable AI systems by modeling them as graphs with explicit state and step-by-step execution. The tutorial demonstrates an intelligent support ticket workflow: classify the ticket, retrieve grounded knowledge, draft a response, evaluate it, and conditionally revise until the draft passes. This workflow approach keeps developers in control, making debugging and iteration easier than sprawling if/else logic. The same logic is then converted into a ReAct-style AI agent by turning workflow nodes into tools and letting the model choose tool calls in sequence. The tradeoff is clear: workflows are predictable and easier to validate, while agents are more autonomous but harder to constrain and productionize.

Why does the tutorial recommend starting with a workflow instead of an agent for support-ticket automation?

Workflows keep the developer in control of each step and branch, producing a fixed and predictable execution path. That predictability makes debugging and improvement more straightforward—critical when a business process must follow a specific order (classify → retrieve → draft → evaluate → revise). Agents delegate tool selection and ordering to the model, which can break down when strict sequencing is required, even with strong models.

What does “state” mean in LangGraph, and how is it used in the support-ticket example?

State is the system’s memory for the graph run. In the example, it’s implemented as a data class holding fields such as ticket classification, retrieved documents, drafted response, evaluation feedback, and a revision counter. Each node returns updates (often as dicts), and LangGraph maps those updates into the defined state structure automatically.
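For instance, a classification node might read the current state and return only the field it changes; a minimal sketch, assuming an Ollama chat model and a trimmed version of the state (prompt wording and field names are illustrative):

```python
from dataclasses import dataclass
from langchain_ollama import ChatOllama

@dataclass
class TicketState:  # trimmed to the fields this node touches
    ticket: str = ""
    classification: str = ""

llm = ChatOllama(model="llama3.1")  # assumed local model

def classify(state: TicketState) -> dict:
    """Node: classify the ticket, returning only the fields this step updates."""
    reply = llm.invoke(
        "Classify this support ticket in one word "
        f"(login, billing, bug, or other): {state.ticket}"
    )
    # LangGraph merges this partial update into the state for downstream nodes.
    return {"classification": reply.content.strip()}
```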

How does the workflow decide whether to revise the drafted response or end execution?

After drafting, an evaluation node runs an evaluation prompt that checks whether the draft fully addresses the ticket. The evaluation returns a pass/fail decision plus feedback. A conditional edge uses a should-revise function: if the result is fail, execution loops back to the revise node; if pass, the graph transitions to the end state.
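A sketch of that routing function and how it attaches to the graph; the "fail" string convention and the revision cap are assumptions based on the description, and builder is the StateGraph over TicketState from the workflow sketch above:

```python
from langgraph.graph import END

MAX_REVISIONS = 3  # assumed safety cap driven by the revision counter

def should_revise(state: TicketState) -> str:
    """Conditional edge: loop back to revision on 'fail', otherwise finish."""
    if state.evaluation == "fail" and state.revision_count < MAX_REVISIONS:
        return "revise"
    return "end"

builder.add_conditional_edges(
    "evaluate",                         # source node
    should_revise,                      # routing function over the current state
    {"revise": "revise", "end": END},   # map return values to target nodes
)
```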

How is the knowledge retrieval implemented for the workflow demo?

The tutorial uses an in-memory vector store as a simplified retrieval setup. It seeds an internal knowledge base with a few example sentences, then retrieves relevant snippets based on the ticket text. Those retrieved documents are passed into the drafting step so the response is grounded in the knowledge base.
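A sketch of that simplified retrieval setup, assuming LangChain's InMemoryVectorStore with Ollama embeddings; the seed sentences and embedding model are invented for illustration:

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ollama import OllamaEmbeddings

# Seed a tiny in-memory knowledge base (example sentences, not the video's data).
store = InMemoryVectorStore(OllamaEmbeddings(model="nomic-embed-text"))
store.add_texts([
    "Password resets are done via the 'Forgot password' link on the login page.",
    "Accounts lock after five failed login attempts and unlock after 15 minutes.",
    "Billing questions should be escalated to the billing team.",
])

# Retrieve snippets relevant to the ticket text to ground the drafting step.
docs = store.similarity_search("my login is broken please help", k=2)
context = "\n".join(doc.page_content for doc in docs)
```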

What changes when converting the workflow into an AI agent using ReAct?

Workflow nodes become agent tools. Instead of passing a full structured state through multiple nodes, the agent maintains a simpler state: a list of messages. A ReAct-style system prompt instructs the agent to classify, retrieve, and draft using the provided tools in sequence. A conditional edge controls whether the agent continues tool calls or stops and returns the final draft response.
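For a hand-rolled version of that loop, LangGraph's prebuilt ToolNode and tools_condition express "keep calling tools or stop"; a sketch reusing the hypothetical tools and model from the agent example above (the wiring is an assumption, not the video's exact code):

```python
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.prebuilt import ToolNode, tools_condition

tools = [classify_ticket, retrieve_knowledge, draft_response]  # from the sketch above
llm_with_tools = llm.bind_tools(tools)

def agent(state: MessagesState) -> dict:
    """Reason over the message history and emit either tool calls or a final answer."""
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

builder = StateGraph(MessagesState)  # state is just the running message list
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "agent")
# Conditional edge: go to "tools" if the last message requests a tool call,
# otherwise stop and return the final draft response.
builder.add_conditional_edges("agent", tools_condition)
builder.add_edge("tools", "agent")  # feed tool results back for another step
agent_graph = builder.compile()
```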

What role does human oversight play in the tutorial’s production guidance?

Even as models improve, the tutorial emphasizes human-in-the-loop oversight for real-world deployments. It helps double-check or guide outputs from agentic workflows and agents, improving reliability and results when accuracy and compliance matter.

Review Questions

  1. In the workflow version, which specific state fields are updated across classification, retrieval, drafting, evaluation, and revision—and how do those updates enable conditional looping?
  2. What practical limitations of agent autonomy motivate the tutorial’s workflow-first recommendation, and how does the conditional edge mechanism address them?
  3. When converting to an agent, how do tool definitions and the ReAct system prompt replace explicit developer-controlled edges in the workflow graph?

Key Points

  1. LangGraph replaces brittle nested branching with explicit graph structure: nodes do single tasks and edges define execution paths.

  2. Defining a structured state (e.g., ticket classification, retrieved docs, draft, evaluation feedback, revision count) makes AI behavior easier to inspect and debug.

  3. A workflow can include evaluation gates and conditional edges to loop on revisions until a pass condition is met.

  4. Workflows are typically more reliable for business processes that require strict step ordering and predictable branching.

  5. Agents trade control for autonomy by letting the model decide tool calls; that can be harder to constrain in production.

  6. Converting a workflow to an agent involves turning workflow nodes into tools and using a ReAct-style prompt to guide tool usage order.

  7. Production readiness still benefits from human oversight and thorough testing/prompt tuning, even when models are strong.

Highlights

LangGraph’s state + node/edge design turns a multi-step support-ticket pipeline into a debuggable, maintainable system instead of a tangle of if/else logic.
The revision loop is driven by an evaluation node that returns pass/fail plus feedback, with a conditional edge routing execution back to revision when needed.
The agent version simplifies state to message history and uses ReAct-style tool calling, guided by a system prompt that enforces classify → retrieve → draft behavior.
Mermaid diagrams and compiled state graphs provide a visual and operational view of how execution flows under the hood.
