Creating an AI Agent with LangGraph, Llama 3 & Groq
Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
LangGraph is positioned as the “middle layer” for building AI agents that need structure, state, and controllable decision points—without handing over everything to fully autonomous systems. The core idea is to model an agent as a graph of nodes connected by edges, where a shared state object carries variables (like an email draft, category, and research results) through each step. That structure matters because it enables monitoring, logging, and tight flow control: the system can only move along paths the developer explicitly allows, reducing runaway loops and making debugging more about changing the graph than rewriting prompts.
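The node-and-state idea can be sketched without any framework: a shared state dict flows through node functions, each of which updates the fields it owns. A minimal stdlib-only illustration; the field names (`initial_email`, `category`, `num_steps`) and hard-coded results are assumptions standing in for LangGraph's real API and the LLM calls.

```python
from typing import TypedDict

# Illustrative state schema; field names are assumptions, not LangGraph's API.
class EmailState(TypedDict, total=False):
    initial_email: str
    category: str
    draft_email: str
    num_steps: int

def categorize_email(state: EmailState) -> EmailState:
    # A real node would run an LLM chain; here we fake its result.
    state["category"] = "customer_complaint"
    state["num_steps"] = state.get("num_steps", 0) + 1
    return state

def draft_email_writer(state: EmailState) -> EmailState:
    # Later nodes read what earlier nodes wrote into the shared state.
    state["draft_email"] = f"Re: your {state['category']} ..."
    state["num_steps"] = state.get("num_steps", 0) + 1
    return state

state: EmailState = {"initial_email": "My order arrived broken.", "num_steps": 0}
for node in (categorize_email, draft_email_writer):
    state = node(state)
print(state["category"], state["num_steps"])
```

The step counter makes monitoring trivial: every node increments it, so the final state records exactly how many steps actually ran.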
The walkthrough builds an email-reply agent that mirrors a simpler CrewAI version but adds more production-style control. The workflow starts with an initial customer email, then categorizes it (e.g., customer complaint). A conditional “research router” decides whether to perform web research (via Tavily search) or skip straight to drafting. If research is needed, the agent generates search keywords from the email and category, runs the search, and stores the retrieved information back into the shared state. Next comes a draft email writer that uses the category plus any research info to produce a reply in a structured JSON format.
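A conditional router is just a function from state to the name of the next node. A hedged sketch of the research decision, with made-up category names and a keyword step standing in for the LLM keyword chain and Tavily call:

```python
# Hypothetical rule: only these categories warrant web research.
RESEARCH_CATEGORIES = {"customer_complaint", "product_enquiry"}

def route_to_research(state: dict) -> str:
    # Returns the NAME of the next node; the graph maps names to functions.
    if state.get("category") in RESEARCH_CATEGORIES:
        return "research_info_search"
    return "draft_email_writer"

def generate_search_keywords(state: dict) -> dict:
    # Stand-in for the LLM keyword chain: derive terms from the email text.
    words = [w.strip(".,!?").lower() for w in state["initial_email"].split()]
    state["search_keywords"] = [w for w in words if len(w) > 4][:3]
    return state

state = {"initial_email": "My premium subscription stopped working",
         "category": "customer_complaint"}
print(route_to_research(state))
print(generate_search_keywords(state)["search_keywords"])
```

Because the router only returns node names from a fixed set, the graph can never wander onto a path the developer did not declare.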
A second conditional decision introduces a reflection step: a “rewrite checker” returns a binary choice—rewrite or no rewrite. When the draft is deemed poor (the demo forces this by overwriting the draft with a deliberately bad version), the graph routes to a draft analysis node that identifies issues and then to a rewrite node that produces an improved email. If no rewrite is needed, the graph bypasses the analysis and rewrite nodes and simply converts the draft into the final email. In both cases, the flow converges on a state printer node that outputs the accumulated state variables (initial email, category, draft, final email, research info, and a step counter), then ends.
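The reflection branch follows the same shape: a binary router, then an analysis node whose output feeds the rewrite node. A library-free sketch in which the "checker" is a crude heuristic rather than the LLM grader used in the walkthrough, and all strings are invented:

```python
def rewrite_router(state: dict) -> str:
    # Binary decision; a real checker would ask the LLM to grade the draft.
    bad_signs = ("sorry not sorry", "whatever", "!!")
    draft = state.get("draft_email", "").lower()
    if any(s in draft for s in bad_signs):
        return "analyze_draft_email"   # reflection path: analysis, then rewrite
    return "no_rewrite"                # draft becomes the final email directly

def analyze_draft_email(state: dict) -> dict:
    state["draft_analysis"] = "Tone is dismissive; apologize and offer a fix."
    return state

def rewrite_email(state: dict) -> dict:
    # The rewrite uses both the draft and the analysis from shared state.
    state["final_email"] = "We're sorry about the issue and will make it right."
    return state

state = {"draft_email": "Whatever, deal with it!!"}
if rewrite_router(state) == "analyze_draft_email":
    state = rewrite_email(analyze_draft_email(state))
else:
    state["final_email"] = state["draft_email"]
print(state["final_email"])
```

Both branches end by populating `final_email`, mirroring how the graph converges on the state-printer node regardless of which path ran.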
A major emphasis falls on LangGraph’s engineering advantages over more autonomous agent patterns. By hardwiring edges and using conditional edges for specific branching decisions, the system can’t “choose” to jump back into earlier steps unless the graph explicitly permits it. That constraint prevents the kind of uncontrolled looping that often appears in fully autonomous setups, where debugging can devolve into prompt tinkering. The tradeoff is more code and more upfront design: the graph should be diagrammed first, then implemented by defining chains (prompt+model+parsers), nodes (functions that run chains/tools), conditional edges (routing logic), and state (the variables passed between steps).
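The "diagram first, then implement" discipline can be made concrete with a tiny framework-free graph runner: nodes, hardwired edges, and one conditional edge, executed in a loop until an end marker. Node names, edge tables, and the `run` helper below are all inventions for the sketch, not LangGraph's API.

```python
END = "END"

# Nodes: functions that read and update the shared state (LLM calls stubbed).
def categorize(state): state["category"] = "complaint"; return state
def research(state):   state["research_info"] = "warranty policy ..."; return state
def draft(state):      state["draft_email"] = "Sorry to hear that ..."; return state
def finalize(state):   state["final_email"] = state["draft_email"]; return state

NODES = {"categorize": categorize, "research": research,
         "draft": draft, "finalize": finalize}

# Hardwired edges, plus one conditional edge expressed as a router function.
EDGES = {"research": "draft", "draft": "finalize", "finalize": END}
ROUTERS = {"categorize": lambda s: "research" if s["category"] == "complaint" else "draft"}

def run(entry: str, state: dict) -> dict:
    node = entry
    while node != END:
        state = NODES[node](state)
        # Only transitions declared in EDGES/ROUTERS can fire: no unplanned loops.
        node = ROUTERS[node](state) if node in ROUTERS else EDGES[node]
    return state

final = run("categorize", {"initial_email": "Broken on arrival"})
```

The run loop makes the control-flow guarantee visible: a node can only hand off to a successor that appears in the edge tables, so "jumping back" requires an explicit edge.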
The implementation uses Llama 3 70B on Groq for the model calls, LangChain chains for each subtask (categorization, routing, keyword generation, drafting, rewriting), and JSON output parsing to keep intermediate results machine-readable. The final result is a controllable agent that can be extended—swapping the web search step for a RAG system, adding brand-standard checks, or adding more decision points—while retaining the same core graph-and-state approach.
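Keeping intermediate results machine-readable usually means asking the model for JSON and parsing it defensively, since models often wrap the payload in extra prose. A stdlib-only sketch of that idea (the walkthrough itself relies on LangChain's JSON output parsing; this regex-based extractor is an assumption, not its implementation):

```python
import json
import re

def parse_json_reply(raw: str) -> dict:
    """Extract the first JSON object from an LLM reply, tolerating extra text."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Models frequently return prose around the JSON payload:
raw = 'Sure! Here is the category:\n{"category": "customer_complaint"}\nDone.'
result = parse_json_reply(raw)
print(result["category"])
```

Parsed dicts slot directly into the shared state, which is what makes the routers' string comparisons reliable.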
Cornell Notes
LangGraph is used to build an email-reply AI agent as a structured state machine: nodes run tasks (LLM chains or tools), edges connect them, and conditional edges route execution based on explicit decisions. A shared state object carries variables like the email category, research results, draft email, and final email across the graph. The workflow categorizes the email, conditionally performs web research with Tavily search, drafts a reply, then conditionally rewrites it using a reflection step. This matters because the agent’s behavior stays within allowed paths, making monitoring and debugging easier than fully autonomous agent loops. The example runs Llama 3 70B on Groq and uses JSON output parsing to keep intermediate outputs reliable.
- How does LangGraph sit between LangChain and fully autonomous agents, and why is that placement useful?
- What are nodes, edges, conditional edges, and state in this agent design?
- How does the agent decide whether to do web research before drafting a reply?
- What triggers the rewrite step, and how is the rewrite implemented?
- Why does the graph design reduce runaway behavior compared with more autonomous agent approaches?
- What role do JSON output parsing and structured intermediate outputs play here?
Review Questions
- In what ways does conditional routing (research vs. no research, rewrite vs. no rewrite) change the agent’s behavior compared with a linear chain-only design?
- How does shared state enable later nodes to reuse earlier outputs, and what kinds of variables are stored in this example?
- What debugging advantage comes from constraining execution to allowed graph paths, and how does that compare to prompt-based debugging in autonomous agents?
Key Points
1. LangGraph models an agent as a graph of nodes connected by edges, with conditional edges controlling which allowed path runs next.
2. A shared state object carries variables (category, research info, draft, final email, step count) across nodes so later steps can reuse earlier outputs.
3. The email agent categorizes the input first, then uses a conditional router to decide whether to run web research via Tavily search.
4. Drafting is followed by a second conditional decision that either finalizes the draft or triggers a reflection-and-rewrite subflow.
5. Hardwired edges and explicit routing prevent uncontrolled loops; the agent can’t revisit earlier steps unless the graph explicitly permits it.
6. Structured JSON outputs and parsers keep intermediate results machine-readable, improving routing reliability and state updates.
7. The design is intentionally extensible: the research step can be swapped for a RAG system, and additional decision checks (brand standards, competitor mentions) can be added as new nodes and conditional edges.
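Extensibility follows from the node contract: any function with the same state-in, state-out signature can replace an existing node. A sketch of swapping the web-search step for a RAG-style lookup; both functions and their data are invented stubs.

```python
def tavily_research(state: dict) -> dict:
    # Original node: web search (stubbed; a real node would call Tavily).
    state["research_info"] = "web result about refund policy"
    return state

def rag_research(state: dict) -> dict:
    # Drop-in replacement: same signature, different source of research_info.
    docs = {"refund": "Refunds are issued within 14 days."}
    key = next((k for k in docs if k in state["initial_email"].lower()), None)
    state["research_info"] = docs.get(key, "no internal docs found")
    return state

# Because nodes share one signature, the swap is a one-line graph change.
research_node = rag_research
state = research_node({"initial_email": "Where is my refund?"})
print(state["research_info"])
```

The rest of the graph (routers, drafting, rewriting) is untouched by the swap, which is the practical payoff of the graph-and-state approach.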