Day 4 - Python From Start - Building End To End Gen AI And Agentic AI Projects Skeleton
Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Agentic AI is framed as a shift from single, chatbot-style responses to autonomous, multi-agent workflows—where several specialized AI agents coordinate to complete complex tasks end to end. The core distinction starts with “AI agents,” illustrated as an LLM that can take action (for example, using a Google Search API when its training data is outdated) rather than merely generating text. In that setup, the agent decides when it needs external tools, calls them, and returns results with minimal human involvement.
From there, agentic AI is presented as a higher-level framework built for “full autonomy” and multi-step coordination. Instead of one agent performing one action, multiple agents interact inside a shared workflow to achieve a common goal. The transcript uses a software-development analogy: requirement gathering, sprint planning, code development, peer review, and testing. In a traditional team, different people handle these stages; in agentic AI, separate agents can take on roles like “coder 1,” “coder 2,” a “lead reviewer,” and a “tester,” with handoffs and feedback loops running automatically. The practical implication is that the project lifecycle changes—planning, execution, review, and validation become orchestrated steps rather than a linear prompt-and-response flow.
Once the difference is established, the session moves into the lifecycle of an agentic AI system, with special attention to Retrieval-Augmented Generation (RAG) as a recurring component for text-heavy use cases like customer support and other repetitive tasks. The RAG pipeline is laid out as a concrete sequence: ingest data (PDFs, HTML, databases, etc.) via a data loader, split text into chunks, embed those chunks into vectors using text embedding techniques, store vectors in a vector database (examples named include Pinecone, DataStax, and MongoDB, with ChromaDB shown later), and expose a retrieval interface. That retrieval becomes a tool that an agentic system can call during execution.
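The load → chunk → embed → store → retrieve sequence can be sketched in plain Python. This is a toy in-memory stand-in, not actual Pinecone/ChromaDB/LangChain code: the "embedding" is a simple term-frequency vector and the "vector store" is a list, chosen only to make the pipeline's shape concrete.

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split text into fixed-size character chunks (stand-in for a real text splitter)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text):
    """Toy 'embedding': a term-frequency vector keyed by word."""
    return Counter(chunk_text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": a list of (chunk, vector) pairs kept in memory.
docs = "LangGraph models agent workflows as graphs. RAG retrieves relevant chunks for the LLM."
store = [(c, embed(c)) for c in chunk(docs)]

def retrieve(query, k=1):
    """The retrieval interface an agentic system would call as a tool."""
    qv = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(qv, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("agent workflows as graphs"))
```

A real system would swap each piece for production components (a document loader, a sentence-aware splitter, a learned embedding model, and a vector database), but the call sequence stays the same.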
The transcript then demonstrates end-to-end “agentic” project skeletons using LangGraph-style graphs, where nodes represent agents and edges represent transitions between steps. A key claim is that graphs make complex workflows easier to control, because execution flow is explicit: data fetch → summarization → saving results → previewing output, and similar multi-step flows for other use cases.
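The node-and-edge idea can be sketched as a mini-runner in plain Python. This is illustrative, not the actual LangGraph API: nodes are functions over a shared state dict, edges name the next node, and the stubbed node bodies stand in for API, LLM, and file-system calls.

```python
# Minimal graph-style runner: nodes are functions over shared state,
# edges name the next node; execution follows edges until END.
END = "END"

def fetch(state):
    state["raw"] = "headline one. headline two."   # stub for a news API call
    return state

def summarize(state):
    state["summary"] = state["raw"].split(".")[0]  # stub for an LLM call
    return state

def save(state):
    state["saved"] = True                          # stub for writing results to disk
    return state

nodes = {"fetch": fetch, "summarize": summarize, "save": save}
edges = {"fetch": "summarize", "summarize": "save", "save": END}

def run(start, state):
    """Walk the graph from the start node, threading state through each step."""
    node = start
    while node != END:
        state = nodes[node](state)
        node = edges[node]
    return state

result = run("fetch", {})
print(result["summary"])
```

Because the flow is a data structure rather than nested function calls, it is easy to inspect, reorder, or extend with conditional edges, which is the control benefit the transcript attributes to graphs.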
Two live-style examples illustrate the skeleton: an AI news assistant that fetches daily AI news via an API, summarizes it, and provides downloadable summaries; and a travel planner that generates a multi-day itinerary (including activities, maps, and estimated totals) and can optionally proceed to actions like booking flights through an Expedia integration when API access exists. The overall message is that these modular workflows—built from Python foundations, then orchestrated with agent frameworks like LangGraph—enable practical applications that can pull in third-party information, perform structured multi-step tasks, and coordinate specialized agents to deliver complete outcomes rather than isolated answers.
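For the travel-planner example, the per-day structure and estimated totals might look like the following sketch. The itinerary data here is hypothetical; in the actual skeleton it would be produced by an LLM node and priced via map and booking APIs.

```python
# Hypothetical itinerary data; a real skeleton would fill this in from
# an LLM planning node and external pricing/map APIs.
itinerary = [
    {"day": 1, "activities": ["museum", "old town walk"], "est_cost": 60.0},
    {"day": 2, "activities": ["boat tour"], "est_cost": 85.0},
    {"day": 3, "activities": ["day trip"], "est_cost": 120.0},
]

def estimated_total(plan):
    """Roll per-day estimates up into the trip total shown to the user."""
    return sum(day["est_cost"] for day in plan)

for day in itinerary:
    print(f"Day {day['day']}: {', '.join(day['activities'])} (~${day['est_cost']:.0f})")
print(f"Estimated total: ${estimated_total(itinerary):.2f}")
```

An optional booking step (e.g., the Expedia integration mentioned above) would then consume this structure as its input state.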
Cornell Notes
The transcript distinguishes AI agents from agentic AI by focusing on autonomy and coordination. An AI agent is described as an LLM that can take actions—such as calling a Google Search API when it can’t answer from its training data—without requiring a human to manually fetch information. Agentic AI is presented as a framework with multiple agents interacting to complete a complex workflow, with handoffs and feedback loops resembling roles in a software team (developers, lead review, testing). For agentic AI systems, RAG is treated as a key text pipeline: load data, chunk text, embed into vectors, store in a vector database, and expose retrieval as a tool for the agents. LangGraph is emphasized for controlling execution flow using nodes (agents) and edges (transitions).
- How does the transcript define an “AI agent” using the chatbot example?
- What changes when moving from AI agents to agentic AI?
- Why is RAG treated as a core component in agentic AI systems for text use cases?
- How does LangGraph’s “graph” idea relate to the lifecycle of an agentic AI system?
- What are the two end-to-end example skeletons shown, and what steps do they include?
Review Questions
- What specific capability distinguishes an AI agent from a basic chatbot in the transcript’s Google Search API example?
- List the RAG pipeline steps in order as described (from data ingestion to retrieval tool).
- In LangGraph terms, what do nodes and edges represent, and how does that help manage complex workflows?
Key Points
1. AI agents are framed as LLM-based systems that can autonomously call external tools (e.g., Google Search API) when internal knowledge is insufficient.
2. Agentic AI is framed as a framework where multiple agents coordinate with full autonomy to complete complex workflows.
3. The software-team analogy maps agent roles to workflow stages: requirement gathering, coding, peer review, and testing with automated handoffs.
4. For text-heavy tasks, RAG is presented as a repeatable pipeline: load data → chunk → embed → store vectors → retrieve via a tool.
5. LangGraph is emphasized for orchestrating agentic workflows using nodes (agents) and edges (transitions) to control execution flow.
6. End-to-end skeletons are demonstrated with multi-step applications like an AI news summarizer and a travel planner that can optionally integrate booking actions via APIs.