Tutorial 2 (Live): Getting Started With LangGraph for Building AI Agents
Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Agentic AI is framed as a higher-level system that coordinates multiple AI agents to pursue a larger goal, while “AI agents” are treated as narrower software components that handle a specific task with some degree of autonomy. That distinction matters because it changes how developers design workflows: single-agent setups can call tools to fetch missing information, but agentic systems orchestrate several specialized steps—often with conditional routing and iterative feedback—until a final objective is reached.
The session begins by drawing a line between the two concepts. An AI agent is described as software built to perform a specific task without human intervention (or with limited human involvement). Agentic AI, by contrast, is presented as a framework where multiple agents collaborate and make decisions independently to achieve a broader goal. The practical implication is that agentic AI is not just “more capable prompting”; it’s a structured automation pattern where different components take responsibility for different parts of a workflow.
To make the difference concrete, the tutorial uses an LLM example. When an LLM can’t answer a question like “recent news for April 11th” or “current temperature of Bangalore,” it needs external information. The workflow becomes an “agent” pattern when the model decides to query an external tool such as an API, database, or a research-paper index (the transcript mentions an RSV-style tool for papers). The autonomy shows up in the decision of when to call the tool and how to return structured output (e.g., JSON) back to the user.
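A plain-Python sketch of this decide-then-call pattern helps make it concrete. The weather lookup is a stub and the helper names (`fetch_weather`, `answer`) are illustrative, not the tutorial's code:

```python
import json

# Facts the "model" can answer from its own knowledge (illustrative).
KNOWN_FACTS = {"capital of france": "Paris"}

def fetch_weather(city: str) -> dict:
    """Stand-in for a real external API call."""
    return {"city": city, "temperature_c": 29}

def answer(question: str) -> str:
    q = question.lower().strip("?")
    if q in KNOWN_FACTS:
        # Internal knowledge suffices: no tool call needed.
        return json.dumps({"answer": KNOWN_FACTS[q]})
    if "temperature" in q:
        # The agent decides to call an external tool and returns
        # structured JSON back to the user, as in the tutorial.
        city = q.rsplit(" ", 1)[-1]
        return json.dumps(fetch_weather(city))
    return json.dumps({"answer": "unknown"})

print(answer("capital of france"))
print(answer("current temperature of bangalore"))
```

The "autonomy" here is just the branch: the function decides at runtime whether its own knowledge is enough or an external call is required, and either way returns structured JSON.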
Then the discussion scales up to agentic AI using a YouTube-to-blog scenario. Instead of one person doing everything—transcript creation, title and description writing, code/content formatting, and a conclusion—the workflow is split across multiple agents. One agent converts a YouTube video URL into a transcript. A second agent turns that transcript into blog components (title, description, code section, conclusion). The transcript also highlights that agentic workflows can loop with feedback—regenerating outputs after review—so the system can converge on a better result.
With the conceptual groundwork laid, the session transitions to LangGraph and why it’s useful for building these workflows. LangGraph is positioned as stable for agentic application development, with concepts that map directly onto workflow design: nodes (Python functions/tasks), edges (connections between nodes), conditional edges (routing based on conditions), state (a shared schema that carries variables through the graph), and the state graph (the overall structure). A simple “morning play” example is used to demonstrate the mechanics: a start node sets a message, a conditional node randomly chooses between badminton and cricket, and the chosen node appends the final decision before the graph ends.
Finally, the tutorial shows how to implement and run the graph: define a state schema using Python's `TypedDict`, write node functions, add conditional routing logic, construct the state graph, compile it, visualize it with a Mermaid diagram, and execute it via an `invoke` call that passes the initial state (e.g., a `graph_info` string). The result is a working demonstration of how LangGraph turns branching workflow logic into executable agentic pipelines, setting up more complex agent coordination in later sessions.
Cornell Notes
The transcript distinguishes “AI agents” from “agentic AI” and then uses LangGraph to demonstrate how agentic workflows are built. AI agents are described as software that performs a specific task with limited or no human intervention, often by calling external tools when the model lacks information. Agentic AI is presented as a framework where multiple agents collaborate and make independent decisions to reach a larger goal, such as converting a YouTube video into a blog through multiple specialized steps. LangGraph’s core building blocks—nodes (Python functions), edges (flow), conditional edges (branching), state (shared schema), and the state graph (structure)—are used to implement a branching example (play badminton vs. cricket) and execute it via compile + invoke.
What is the practical difference between an AI agent and agentic AI in workflow design?
How does the transcript justify calling an LLM+API interaction an “AI agent”?
Why does the YouTube-to-blog example require multiple agents rather than one?
What are LangGraph’s key components, and how do they map to the workflow?
How does the “play badminton vs. cricket” example implement conditional routing?
What does it mean to compile and execute a LangGraph graph in this tutorial?
Review Questions
- How would you redesign the YouTube-to-blog workflow if you wanted the system to regenerate the transcript only when the blog agent flags missing details?
- In the LangGraph example, what role does the state schema play in ensuring each node can read and update shared information?
- What changes would be required in the graph if you added a third sport option (e.g., tennis) with conditional routing?
Key Points
1. Agentic AI is treated as a coordinated framework of multiple AI agents working toward a larger goal, while AI agents are narrower task-focused software components.
2. LLMs become agent-like when they decide to call external tools (APIs/databases) to fetch information they don't have and then return structured results.
3. A YouTube-to-blog workflow illustrates agentic AI by splitting responsibilities across multiple agents and passing outputs between them.
4. LangGraph's core primitives (nodes, edges, conditional edges, state, and state graph) map directly onto how branching agent workflows are built.
5. Conditional routing in LangGraph can be implemented by a function that returns a literal value used to select the next node.
6. Building a LangGraph workflow typically follows a pipeline: define state schema → write node functions → add edges/conditional edges → construct state graph → compile → visualize → invoke with initial state.
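The literal-return routing in key point 5 can be sketched in plain Python without LangGraph itself; the function and node names here are illustrative:

```python
import random

# Two candidate "nodes": plain functions that transform a message.
def badminton(msg: str) -> str:
    return msg + " -> playing badminton"

def cricket(msg: str) -> str:
    return msg + " -> playing cricket"

# The router returns a literal string naming the next node, which
# mirrors how a LangGraph conditional edge selects its target.
def router(msg: str) -> str:
    return random.choice(["badminton", "cricket"])

NODES = {"badminton": badminton, "cricket": cricket}

state = "good morning"
state = NODES[router(state)](state)
print(state)
```

The dict lookup is the whole trick: the routing function never calls the next node directly, it only names it, which keeps the graph structure declarative and easy to extend (adding tennis is one new function plus one dict entry).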