
Generative AI Vs Agentic AI Vs AI Agents

Krish Naik · 4 min read

Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Generative AI uses large models to produce new content (text, images, audio, video frames) from prompts; AI agents add tool calls so an LLM can fetch information outside its training data; agentic AI coordinates multiple agents across subtasks to complete a larger goal.

Briefing

Generative AI, AI agents, and agentic AI differ mainly in how far the system goes beyond producing text—and how it handles tasks that require outside information or multi-step workflows. At the foundation are large language models (and large image models), trained on massive datasets to generate new content—text, images, audio, or video frames—based on prompts. In a typical generative AI application, the user supplies instructions (a prompt), and the model responds with the requested output. The key constraint is that the model’s knowledge is limited to what it learned during training; without external connections, it can’t reliably answer “current” questions like today’s news or live sports results.

That limitation is where AI agents come in. An AI agent uses the LLM as a decision-making layer for tool use. When a user asks for information the LLM can’t produce from its training data, the system can perform a “tool call” to an external data source—such as an internet search API (the transcript uses Tavily as an example). The agent identifies which tool is needed, calls it, receives the external response, and then summarizes that information into a final answer. In this setup, the agent is effectively solving a specific task end-to-end: request → tool call → external response → synthesized output.
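The request → tool call → external response → synthesized output loop can be sketched in a few lines. Here `search_web` is a stub standing in for a real search API such as Tavily, and `needs_external_data` is a toy heuristic; in a real agent, the LLM itself decides when a tool call is needed:

```python
def search_web(query: str) -> str:
    """Hypothetical tool call: returns raw results from an external search API."""
    return f"[search results for: {query}]"

def needs_external_data(prompt: str) -> bool:
    """Toy stand-in for the LLM deciding whether a tool call is required."""
    return any(word in prompt.lower() for word in ("today", "latest", "current"))

def answer(prompt: str) -> str:
    if needs_external_data(prompt):
        results = search_web(prompt)              # tool call to external source
        return f"Summary based on {results}"      # LLM would synthesize this step
    return f"Answer from training data: {prompt}"  # plain generative response

print(answer("Who won the match today?"))
```

The point of the sketch is the branch: when the model's training data suffices, the agent behaves like plain generative AI; when it doesn't, the agent routes through a tool and summarizes the result.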

Agentic AI expands the idea from single-task tool use to coordinated, multi-step execution. Instead of one agent handling one job, an agentic system breaks a complex goal into subtasks and assigns them to multiple agents that work together. The transcript uses an example: converting a YouTube video into a blog post. One agent can fetch the transcript, another can draft the title, a third can write the description, and a fourth can produce the conclusion. These agents can run in parallel where appropriate, then their outputs are combined into the final blog. Crucially, agents can also pass information between each other—for instance, the description-writing agent may need the title produced by another agent.
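The YouTube-to-blog workflow above can be sketched with each "agent" as a plain function (a real system would back each with an LLM; all function names here are illustrative, not from any specific framework). Title and conclusion writing are independent, so they run in parallel, while the description agent waits for the title produced by another agent:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_transcript(url: str) -> str:
    return f"transcript of {url}"  # stub for a transcript-retrieval agent

def write_title(transcript: str) -> str:
    return "Generative AI vs Agentic AI vs AI Agents"  # stub title agent

def write_description(transcript: str, title: str) -> str:
    # This agent depends on another agent's output (the title).
    return f"A post titled '{title}', based on the {transcript}."

def write_conclusion(transcript: str) -> str:
    return "In short, agents extend LLMs with tools and coordination."

def build_blog(url: str) -> str:
    transcript = fetch_transcript(url)
    # Independent subtasks can run in parallel.
    with ThreadPoolExecutor() as pool:
        title_future = pool.submit(write_title, transcript)
        conclusion_future = pool.submit(write_conclusion, transcript)
        title, conclusion = title_future.result(), conclusion_future.result()
    # Sequential subtask: needs the title from the parallel stage.
    description = write_description(transcript, title)
    return "\n\n".join([title, description, conclusion])
```

The orchestration logic, not any individual agent, is what makes the system "agentic": it splits the goal into subtasks, schedules them, and passes intermediate results between agents.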

Agentic AI also allows human feedback to be inserted into the workflow, giving people a way to review, correct, or steer outputs during execution. The practical takeaway is that AI agents focus on completing a particular task by calling external tools when needed, while agentic AI orchestrates multiple agents to collaborate on a larger workflow toward a shared goal. Understanding that distinction matters because it determines how systems are designed: prompt-driven generation alone, tool-augmented task completion, or multi-agent coordination for complex real-world processes.
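A human-feedback checkpoint can be modeled as a reviewer callback inserted between workflow stages; the agent's draft is paused, shown to a person, and the (possibly edited) result flows onward. The callback signature below is an assumption for illustration only:

```python
from typing import Callable

def with_review(draft: str, reviewer: Callable[[str], str]) -> str:
    """Pause the workflow and let a human approve, edit, or steer the draft."""
    return reviewer(draft)

# Example: an automated stand-in for a human reviewer who cleans up a title.
final = with_review(
    "draft title: generative ai explained",
    lambda d: d.replace("draft title: ", "").title(),
)
print(final)  # -> Generative Ai Explained
```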

Cornell Notes

Large language models (and other large models) generate new content from prompts, but they don’t inherently know up-to-the-minute facts like today’s news or live sports results. AI agents address this by using the LLM to decide when to make a “tool call” to external sources (e.g., an internet search API such as Tavily), then summarizing the returned data into a final answer. Agentic AI goes further by coordinating multiple agents to complete a complex goal through multiple subtasks. In the YouTube-to-blog example, separate agents handle transcript retrieval, title creation, description writing, and conclusion drafting, with outputs combined into one publishable post. This matters because it changes system design from single-step generation to collaborative, workflow-driven execution.

What makes generative AI different from systems that need current information?

Generative AI relies on large models trained on historical data to generate new content from prompts. That training limits what the model can know about “today” or private/company-specific information. Without external data access, the model can’t reliably answer questions like current news or who won a match today.

How does an AI agent turn an LLM into something that can answer real-time questions?

An AI agent uses the LLM to decide whether external tools are needed. When the LLM can’t answer from training data, it performs a “tool call” to a third-party data source. The transcript’s example uses Tavily as an internet search API: the agent calls Tavily, receives the search results, and then summarizes them into the final response.

Why is “tool call” central to the idea of an AI agent?

The tool call is the mechanism that bridges the LLM’s limitations. Instead of forcing the model to guess, the system checks for available external sources and calls the appropriate tool. After the tool returns a response, the LLM synthesizes that response into an answer.
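A tool call presupposes that the system knows which tools exist. One minimal way to sketch this (tool names and the registry shape are illustrative, not from any particular framework) is a registry mapping names to callables, so the LLM's decision reduces to picking a name and an argument:

```python
# Registry of available external tools (illustrative stubs).
TOOLS = {
    "web_search": lambda q: f"[web results for {q}]",
    "word_count": lambda text: str(len(text.split())),
}

def call_tool(name: str, arg: str) -> str:
    """Dispatch a tool call chosen by the decision layer; fail loudly on unknowns."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](arg)

print(call_tool("word_count", "who won the match today"))  # -> 5
```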

What changes when moving from an AI agent to agentic AI?

An AI agent is framed as completing one task (request → tool call → response → summary). Agentic AI treats a larger goal as a workflow made of multiple subtasks, then uses multiple agents that collaborate—often passing outputs between them—to produce a final result.

How does the YouTube-to-blog example illustrate agentic AI?

The workflow is split into steps: one agent extracts the transcript from the YouTube URL, another creates the title, another drafts the description, and another writes the conclusion. These agents can run in parallel and then combine their outputs into a single blog post. Information can also flow between agents, such as using the generated title when writing the description.

Where does human feedback fit into agentic AI?

Human feedback can be inserted into the multi-agent workflow to review or steer outputs. That makes the system more controllable for complex tasks where fully automated generation may need oversight.

Review Questions

  1. In what situations would a plain generative AI prompt be insufficient, and what capability does an AI agent add to fix that?
  2. Describe the difference between a single AI agent workflow and an agentic AI workflow using the transcript’s examples.
  3. How do multiple agents coordinate in the YouTube-to-blog scenario, and why is that coordination necessary?

Key Points

  1. Generative AI uses large models to produce new content (text, images, audio, video frames) from prompts.
  2. Generative AI alone is limited by training data and typically can’t answer real-time or private questions without external access.
  3. AI agents add tool use: the LLM triggers a “tool call” to fetch missing information from third-party sources like Tavily.
  4. After receiving external results, an AI agent summarizes them into a final response for a specific task.
  5. Agentic AI orchestrates multiple agents to complete complex goals by splitting work into subtasks and combining outputs.
  6. Agents in an agentic system can communicate by passing intermediate results (e.g., the title needed for the description).
  7. Human feedback can be integrated into agentic workflows to improve control and correctness.

Highlights

Generative AI is prompt-driven content creation; AI agents add tool calls for missing information; agentic AI coordinates multiple agents for multi-step goals.
A key mechanism in AI agents is the “tool call,” which lets an LLM fetch current data from external APIs such as Tavily.
Agentic AI turns a complex job (like converting a YouTube video into a blog) into parallel and sequential subtasks handled by different agents.
