
Day 4- Python From Start- Building End To End Gen AI And Agentic AI Projects Skeleton

Krish Naik · 5 min read

Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.

TL;DR

AI agents are framed as LLM-based systems that can autonomously call external tools (e.g., Google Search API) when internal knowledge is insufficient.

Briefing

Agentic AI is framed as a shift from single, chatbot-style responses to autonomous, multi-agent workflows—where several specialized AI agents coordinate to complete complex tasks end to end. The core distinction starts with “AI agents,” illustrated as an LLM that can take action (for example, using a Google Search API when its training data is outdated) rather than merely generating text. In that setup, the agent decides when it needs external tools, calls them, and returns results with minimal human involvement.

From there, agentic AI is presented as a higher-level framework built for “full autonomy” and multi-step coordination. Instead of one agent performing one action, multiple agents interact inside a shared workflow to achieve a common goal. The transcript uses a software-development analogy: requirement gathering, sprint planning, code development, peer review, and testing. In a traditional team, different people handle these stages; in agentic AI, separate agents can take on roles like “coder 1,” “coder 2,” a “lead reviewer,” and a “tester,” with handoffs and feedback loops running automatically. The practical implication is that the project lifecycle changes—planning, execution, review, and validation become orchestrated steps rather than a linear prompt-and-response flow.

Once the difference is established, the session moves into the lifecycle of an agentic AI system, with special attention to Retrieval-Augmented Generation (RAG) as a recurring component for text-heavy use cases like customer support and other repetitive tasks. The RAG pipeline is laid out as a concrete sequence: ingest data (PDFs, HTML, databases, etc.) via a data loader, split text into chunks, embed those chunks into vectors using text embedding techniques, store vectors in a vector database (examples named include Pinecone, DataStax, and MongoDB, with ChromaDB shown later), and expose a retrieval interface. That retrieval becomes a tool that an agentic system can call during execution.
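The pipeline above can be sketched end to end in plain Python. This is a toy, assumption-laden version: the vocabulary-count "embedding" stands in for a real text-embedding model, and the in-memory list stands in for a vector database such as Pinecone or ChromaDB.

```python
# Toy RAG pipeline: load text -> chunk -> embed -> store vectors -> retrieve.
import math

doc = ("LangGraph models agent workflows as graphs. "
       "RAG grounds answers in retrieved text. "
       "Python is the foundation for both.")

def tokenize(text: str) -> list[str]:
    return [w.strip(".,?!").lower() for w in text.split()]

def chunk(text: str) -> list[str]:
    # Naive sentence-level chunking; real loaders split by size with overlap.
    return [s.strip() + "." for s in text.split(".") if s.strip()]

VOCAB = sorted(set(tokenize(doc)))  # embedding dimensions = vocabulary terms

def embed(text: str) -> list[float]:
    # Word-count vector; a real system would call an embedding model here.
    words = tokenize(text)
    return [float(words.count(v)) for v in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# "Vector database": chunks stored alongside their vectors.
STORE = [(c, embed(c)) for c in chunk(doc)]

def retrieve(query: str, k: int = 1) -> list[str]:
    # The retrieval interface an agent can call as a tool during execution.
    q = embed(query)
    ranked = sorted(STORE, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("Which chunk grounds answers with RAG?"))
```

Each function maps to one stage of the described pipeline, and `retrieve` is the piece that gets exposed to the agentic system as a callable tool.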

The transcript then demonstrates end-to-end “agentic” project skeletons using LangGraph-style graphs, where nodes represent agents and edges represent transitions between steps. A key claim is that graphs make complex workflows easier to control, because execution flow is explicit: data fetch → summarization → saving results → previewing output, and similar multi-step flows for other use cases.
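The data fetch → summarization → saving → preview flow can be sketched as a dependency-free graph, where nodes are functions that transform a shared state and edges fix the order of execution. Real LangGraph (`StateGraph`) works along these lines but adds typed state, conditional edges, and checkpointing; the node names and stubbed bodies here are illustrative only.

```python
# Graph skeleton: nodes act as agents, edges are transitions between steps.
def fetch(state: dict) -> dict:
    state["raw"] = ["AI headline 1", "AI headline 2"]  # stand-in for an API call
    return state

def summarize(state: dict) -> dict:
    state["summary"] = f"{len(state['raw'])} items summarized"
    return state

def save(state: dict) -> dict:
    state["saved"] = True  # stand-in for writing results to disk
    return state

def preview(state: dict) -> dict:
    state["preview"] = state["summary"]
    return state

NODES = {"fetch": fetch, "summarize": summarize, "save": save, "preview": preview}
EDGES = {"fetch": "summarize", "summarize": "save", "save": "preview", "preview": None}

def run(start: str = "fetch") -> dict:
    state, node = {}, start
    while node is not None:          # follow edges until the graph ends
        state = NODES[node](state)
        node = EDGES[node]
    return state

result = run()
print(result["preview"])
```

Because the flow lives in the `EDGES` table rather than in nested function calls, the execution order is explicit and easy to inspect or rewire, which is the control benefit the transcript attributes to graphs.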

Two live-style examples illustrate the skeleton: an AI news assistant that fetches daily AI news via an API, summarizes it, and provides downloadable summaries; and a travel planner that generates a multi-day itinerary (including activities, maps, and estimated totals) and can optionally proceed to actions like booking flights through an Expedia integration when API access exists. The overall message is that these modular workflows—built from Python foundations, then orchestrated with agent frameworks like LangGraph—enable practical applications that can pull in third-party information, perform structured multi-step tasks, and coordinate specialized agents to deliver complete outcomes rather than isolated answers.

Cornell Notes

The transcript distinguishes AI agents from agentic AI by focusing on autonomy and coordination. An AI agent is described as an LLM that can take actions—such as calling a Google Search API when it can’t answer from its training data—without requiring a human to manually fetch information. Agentic AI is presented as a framework with multiple agents interacting to complete a complex workflow, with handoffs and feedback loops resembling roles in a software team (developers, lead review, testing). For agentic AI systems, RAG is treated as a key text pipeline: load data, chunk text, embed into vectors, store in a vector database, and expose retrieval as a tool for the agents. LangGraph is emphasized for controlling execution flow using nodes (agents) and edges (transitions).

How does the transcript define an “AI agent” using the chatbot example?

A chatbot is treated as an LLM that can answer only within its training knowledge. When asked for “recent AI news” beyond its cutoff (e.g., trained only up to 2024), it can’t answer directly. The example becomes an AI agent when the system gains access to a third-party source—specifically a Google Search API—so it can decide it needs external information, call the API, and return results. The defining feature is task execution with autonomy: the system performs the tool call and produces the outcome without requiring a human to manually fetch data.

What changes when moving from AI agents to agentic AI?

The transcript says agentic AI is a framework with “full autonomy” and multiple AI agents working together. Instead of one agent performing one action, several agents coordinate to solve a complex workflow. The software-team analogy maps roles to agents: one agent handles coding tasks (e.g., “coder 1” and “coder 2”), another agent performs peer review (lead), and testing can run in parallel. Agents communicate and hand off outputs automatically, forming a loop of development → review → fixes → testing until the workflow completes.

Why is RAG treated as a core component in agentic AI systems for text use cases?

RAG is positioned as the mechanism for grounding agentic workflows in external or proprietary text data. The pipeline starts with a data ingestion loader that can read PDFs, HTML, or database content. Text is then converted into chunks, embedded into vectors using text embedding techniques, and stored in a vector database (vector stores are named, and ChromaDB appears in the later project structure). A retrieval tool interfaces with the vector database so agents can fetch relevant context during execution.

How does LangGraph’s “graph” idea relate to the lifecycle of an agentic AI system?

LangGraph is described as using graphs (the transcript refers to directed acyclic graph, or DAG, behavior) where nodes act as AI agents and edges represent transitions between steps. This makes the execution flow explicit and controllable: information moves from one node to the next according to the edges. The transcript contrasts this with jumping straight into other frameworks, arguing that LangGraph helps visualize how agents and transitions work behind the scenes.

What are the two end-to-end example skeletons shown, and what steps do they include?

First, an AI news assistant: it selects a time frame (e.g., daily), fetches news via an API, summarizes the items, saves results, and allows preview and download of summaries. Second, a travel planner: it takes a destination preference (e.g., Goa), generates an itinerary with day-by-day activities and logistics (including map links and hotel check-in), estimates totals, and can optionally proceed to booking flights using Expedia if API access is available. Both examples follow a multi-step workflow rather than a single response.
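The travel-planner skeleton can be sketched as one such multi-step workflow. Everything here is invented for illustration: the per-day cost, the itinerary text, and the `book_flight` stub, which stands in for an external booking integration such as an Expedia API that only runs when access is available.

```python
# Travel-planner skeleton: itinerary -> estimated total -> optional booking.
DAY_COST = 120.0  # assumed per-day budget, illustrative only

def plan_itinerary(destination: str, days: int) -> list[str]:
    return [f"Day {d}: explore {destination}" for d in range(1, days + 1)]

def estimate_total(itinerary: list[str]) -> float:
    return round(len(itinerary) * DAY_COST, 2)

def book_flight(destination: str) -> str:
    return f"flight to {destination} booked"  # stand-in for a booking API call

def travel_planner(destination: str, days: int, can_book: bool = False) -> dict:
    itinerary = plan_itinerary(destination, days)
    result = {"itinerary": itinerary, "total": estimate_total(itinerary)}
    if can_book:                      # optional action step, gated on API access
        result["booking"] = book_flight(destination)
    return result

plan = travel_planner("Goa", 3, can_book=True)
print(plan["total"], plan["booking"])
```

The `can_book` gate mirrors the transcript's point that the booking action is optional and only proceeds when API access exists.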

Review Questions

  1. What specific capability distinguishes an AI agent from a basic chatbot in the transcript’s Google Search API example?
  2. List the RAG pipeline steps in order as described (from data ingestion to retrieval tool).
  3. In LangGraph terms, what do nodes and edges represent, and how does that help manage complex workflows?

Key Points

  1. AI agents are framed as LLM-based systems that can autonomously call external tools (e.g., Google Search API) when internal knowledge is insufficient.

  2. Agentic AI is framed as a framework where multiple agents coordinate with full autonomy to complete complex workflows.

  3. The software-team analogy maps agent roles to workflow stages: requirement gathering, coding, peer review, and testing with automated handoffs.

  4. For text-heavy tasks, RAG is presented as a repeatable pipeline: load data → chunk → embed → store vectors → retrieve via a tool.

  5. LangGraph is emphasized for orchestrating agentic workflows using nodes (agents) and edges (transitions) to control execution flow.

  6. End-to-end skeletons are demonstrated with multi-step applications like an AI news summarizer and a travel planner that can optionally integrate booking actions via APIs.

Highlights

The transcript’s “agent” example turns a chatbot into an AI agent by adding tool access: when asked for recent news, it can query Google Search API and return updated results.
Agentic AI is positioned as multi-agent coordination—multiple specialized agents interact to complete a workflow, not just generate text.
RAG is laid out as a concrete engineering pipeline (data loader → chunking → embeddings → vector DB → retrieval tool) designed for text grounding.
LangGraph’s graph structure (nodes as agents, edges as transitions) is presented as the mechanism for controlling complex, multi-step agent execution.
Two practical skeletons—AI news summarization and a travel planner—show how fetch → process → save/preview can be orchestrated end to end.
