What is Agentic AI? | Agentic AI using LangGraph | Video 2 | CampusX

CampusX · 5 min read

Based on CampusX's video on YouTube. If you like this summary, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Agentic AI pursues a user’s goal with proactive planning and execution, unlike reactive chatbots that answer one prompt at a time.

Briefing

Agentic AI is a software paradigm built to take a user’s goal and run toward it with minimal human input—planning, executing steps, adapting when conditions change, and doing so proactively rather than waiting for prompts. In contrast to reactive generative AI chatbots (which answer one question at a time), an agentic system treats the goal like a persistent objective and independently carries out the multi-step work required to reach it.

The clearest way the transcript frames this difference is through two scenarios. In a reactive setup, a person asking about a Goa trip would get step-by-step answers only after each question—dates, transport, hotels, then activities—because the system responds to each prompt in isolation. In an agentic setup, the user provides the overall goal (travel to Goa between two dates), and the system proactively figures out the best route, recommends hotels, and builds an itinerary without repeatedly asking the user for each intermediate decision.

That goal-driven behavior becomes concrete in an HR recruiting example. A recruiter wants to hire a backend engineer with 2–4 years of experience for remote work. The agentic AI chatbot receives the goal, then creates a plan: draft a job description (JD), post it to job platforms (the transcript names LinkedIn), monitor applications, and adapt if the response is weak. If only two candidates apply, it revises the JD (e.g., changing “backend engineer” to “full stack engineer”) and seeks permission to run LinkedIn ads to boost reach. Once applications rise, it parses resumes, shortlists candidates by fit, schedules interviews by checking the recruiter’s calendar, drafts and sends interview emails, and—after the recruiter selects a candidate—generates an offer letter, sends it, monitors acceptance, and triggers onboarding steps like requesting IT access and provisioning a laptop. Human involvement appears mainly at permission points or high-risk moments.
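
To make the flow concrete, here is a minimal Python sketch of that recruiting loop. It is purely illustrative: every name (`agent.draft_jd`, `recruiter.approve`, and so on) is hypothetical shorthand for the tool calls and permission points described above; the transcript describes the behavior, not an implementation.

```python
def run_hiring_goal(agent, recruiter):
    # Goal: hire a remote backend engineer with 2-4 years of experience.
    jd = agent.draft_jd(role="backend engineer", experience="2-4 years", remote=True)
    agent.post(jd, platform="LinkedIn")

    while agent.application_count() < agent.target:       # monitor; adapt on weak response
        jd = agent.revise_jd(jd, title="full stack engineer")
        if recruiter.approve("Run LinkedIn ads to boost reach?"):  # permission point
            agent.run_ads(jd)

    shortlist = agent.shortlist(agent.parse_resumes())
    slots = agent.schedule_interviews(shortlist, calendar=recruiter.calendar)
    agent.send_interview_emails(slots)

    chosen = recruiter.select(shortlist)                  # human decision point
    offer = agent.generate_offer_letter(chosen)
    agent.send(offer)
    if agent.monitor_acceptance(offer):
        agent.trigger_onboarding(chosen)                  # IT access, laptop provisioning
```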

From there, the transcript lays out six core characteristics used to identify whether a system is truly agentic: autonomy (making decisions and taking actions without step-by-step instructions), goal orientation (operating with a persistent objective), planning (breaking a goal into structured sequences and subgoals), reasoning (interpreting information, drawing conclusions, and choosing actions during both planning and execution), adaptability (modifying plans when tools fail, feedback changes, or goals shift), and context awareness (retaining relevant information across a multi-step workflow). It also emphasizes that autonomy must be controllable—through permission scoping, “human-in-the-loop” checkpoints, pause/override controls, and guardrails/policies—because uncontrolled agents can cause costly or biased outcomes.

Finally, the transcript describes how agentic systems are typically built using five high-level components: a “brain” (often an LLM that interprets goals, supports planning/reasoning, selects tools, and handles communication), an orchestrator (sequencing and routing steps, handling retries and loops), tools (APIs and actions like posting jobs, parsing resumes, scheduling, and RAG knowledge access), memory (short-term session state and long-term goals/preferences/policies), and a supervisor (implementing human approvals and escalation/guardrail enforcement). The takeaway is that agentic AI isn’t just smarter chat—it’s an organized system for goal pursuit, with planning-and-execution loops, operational control, and stateful context across time.

Cornell Notes

Agentic AI is designed to take a user's goal and pursue it with minimal human input. Instead of reacting to each question like a typical chatbot, it operates proactively: it plans a multi-step approach, executes those steps, reasons through decisions, adapts when results or tools change, and keeps track of relevant context across the workflow. The transcript uses an HR recruiting example where an agent drafts a job description, posts it to LinkedIn, monitors applications, revises strategy (including ads, with permission), shortlists candidates via resume parsing, schedules interviews via calendar tools, sends offer letters, and triggers onboarding after acceptance. To qualify as agentic, a system should show six traits: autonomy, goal orientation, planning, reasoning, adaptability, and context awareness. The transcript also stresses that autonomy must be controlled using permission scopes, human-in-the-loop checkpoints, overrides, and guardrails.

How does agentic AI differ from reactive generative AI chatbots?

Reactive chatbots respond to each prompt in isolation. In the Goa example, the user must ask step-by-step questions (best travel dates, then transport, then hotels, then activities), and the bot answers each one as it comes. Agentic AI instead receives the overall goal (travel to Goa between two dates) and proactively performs the intermediate work—finding transport, recommending hotels, and building an itinerary—without requiring the user to ask every sub-question.
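
The contrast can be sketched in a few lines of Python. This is a toy illustration under assumed interfaces (`llm.answer`, `llm.plan`, and `tools.run` are hypothetical, not an API from the transcript):

```python
# Reactive: each prompt is handled in isolation; the user drives every step.
def reactive_chat(llm, prompt: str) -> str:
    return llm.answer(prompt)                 # one question, one answer

# Agentic: the goal persists; the system drives the intermediate steps itself.
def agentic_trip_planner(llm, tools, goal: str) -> dict:
    plan = llm.plan(goal)                     # e.g. ["find route", "pick hotel", "build itinerary"]
    itinerary = {}
    for step in plan:
        itinerary[step] = tools.run(step)     # execute without re-prompting the user
    return itinerary
```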

What does the HR recruiting scenario reveal about an agent’s autonomy?

The agent receives a hiring goal (remote backend engineer, 2–4 years). It then autonomously: drafts a JD using company documents, posts it to LinkedIn via APIs, monitors application volume, and adapts if response is low by revising the JD (e.g., “full stack engineer”) and requesting permission to run LinkedIn ads. After applications increase, it parses resumes, shortlists candidates, schedules interviews by checking the recruiter’s calendar, drafts and sends interview emails, generates an offer letter from templates/documents, sends it via email, monitors acceptance, and initiates onboarding steps like IT access requests and laptop provisioning.

Why is “autonomy” not enough—what controls prevent an agent from going off the rails?

The transcript warns that autonomy can be risky if unconstrained. Control mechanisms include: defining permission scope (limit which tools/actions the agent can perform independently), using “human-in-the-loop” checkpoints for risky actions (e.g., posting ads or sending offer letters), allowing overrides/pause commands to stop or change behavior mid-task, and enforcing guardrails/policies (hard rules like not scheduling interviews on weekends or avoiding informal language in emails).
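
These four mechanisms compose naturally into a single dispatch layer in front of the agent's actions. Below is a minimal, runnable Python sketch under made-up action names and policies (the transcript names the mechanisms, not this code):

```python
ALLOWED = {"draft_jd", "parse_resume", "check_calendar", "schedule_interview"}  # permission scope
NEEDS_APPROVAL = {"run_ads", "send_offer_letter"}                               # human-in-the-loop

def guardrails_ok(action: str, payload: dict) -> bool:
    """Hard policy rules the agent can never override."""
    if action == "schedule_interview" and payload.get("day") in ("Saturday", "Sunday"):
        return False                          # no weekend interviews
    if action.startswith("send_") and payload.get("tone") == "informal":
        return False                          # no informal language in emails
    return True

def dispatch(action: str, payload: dict, ask_human, paused: bool = False) -> str:
    if paused:                                # pause/override control
        return "halted by user"
    if not guardrails_ok(action, payload):
        return f"blocked by policy: {action}"
    if action in NEEDS_APPROVAL:              # risky actions require a human yes
        return f"running {action}" if ask_human(action) else f"denied: {action}"
    if action not in ALLOWED:                 # everything else must be in scope
        return f"out of scope: {action}"
    return f"running {action}"

# Example: a weekend interview is blocked; ads need explicit approval.
print(dispatch("schedule_interview", {"day": "Sunday"}, ask_human=lambda a: True))
print(dispatch("run_ads", {}, ask_human=lambda a: False))
```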

What are the six characteristics used to identify an agentic AI system?

They are: (1) Autonomy—decisions and actions without step-by-step human instructions; (2) Goal orientation—operates with a persistent objective; (3) Planning—breaks a goal into structured sequences/subgoals; (4) Reasoning—interprets information, draws conclusions, and chooses actions during planning and execution; (5) Adaptability—updates plans when conditions change (tool failures, external feedback, or mid-course goal changes); (6) Context awareness—retains relevant information from ongoing tasks, past interactions, user preferences, and environment cues.

How does planning work in the transcript’s model of agentic systems?

Planning is described as an iterative two-stage loop: planning first, then execution. Planning itself includes generating multiple candidate plans (e.g., different hiring strategies), evaluating them using criteria like efficiency, tool availability, cost, risk, and alignment with constraints (like remote hiring), and selecting the best plan—optionally with human input. If execution reveals a step can’t be done, the system returns to planning to generate an updated plan.
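
That two-stage loop can be written down directly. In this sketch the scoring weights, plan dictionaries, and the `propose_plans`/`run_step` callables are all assumptions for illustration; only the loop structure (propose, score, select, execute, re-plan on failure) comes from the transcript:

```python
def score(plan: dict, available_tools: set, constraints: set) -> float:
    """Rank a candidate plan on the transcript's criteria (weights are made up)."""
    s = plan["efficiency"] - plan["cost"] - plan["risk"]
    if not set(plan["tools_needed"]) <= available_tools:
        s -= 10                               # penalize missing tools
    if not constraints <= set(plan["satisfies"]):
        s -= 10                               # e.g. the remote-hiring constraint
    return s

def plan_then_execute(propose_plans, run_step, goal, available_tools, constraints):
    while True:                               # iterate: plan -> execute -> re-plan
        candidates = propose_plans(goal)      # stage 1: several candidate plans
        best = max(candidates, key=lambda p: score(p, available_tools, constraints))
        for step in best["steps"]:            # stage 2: execution
            if not run_step(step):
                break                         # step impossible: back to planning
        else:
            return best                       # every step succeeded; goal reached
```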

What are the five core components of an agentic AI application?

The transcript lists: (1) Brain—often an LLM that interprets goals, supports planning/reasoning, selects tools, and handles communication; (2) Orchestrator—executes the plan step-by-step, handles conditional routing, retries, loops, and delegation to tools or humans; (3) Tools—APIs/actions for external interaction (posting jobs, parsing resumes, scheduling, sending emails, and RAG knowledge access); (4) Memory—short-term session state and long-term goals/preferences/policies/state tracking; (5) Supervisor—implements human approvals, escalations, and guardrail enforcement for high-risk or policy-sensitive actions.
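
A skeletal Python sketch shows how the five components might fit together. This is a toy structure under invented names, not code from the transcript; in the course itself the orchestrator role is played by LangGraph:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:                                  # 4. short-term session + long-term state
    session: dict = field(default_factory=dict)
    long_term: dict = field(default_factory=dict)

class AgenticApp:
    def __init__(self, brain, tools, supervisor):
        self.brain = brain                     # 1. LLM: interprets goals, plans, reasons
        self.tools = tools                     # 3. APIs: post jobs, parse resumes, RAG...
        self.supervisor = supervisor           # 5. approvals, escalation, guardrails
        self.memory = Memory()

    def run(self, goal: str):                  # 2. orchestrator: sequencing, retries, loops
        steps = list(self.brain.plan(goal, self.memory))
        while steps:
            step = steps.pop(0)
            if self.supervisor.high_risk(step) and not self.supervisor.approve(step):
                continue                       # human-in-the-loop checkpoint
            if not self.tools.run(step, self.memory):               # tool failed
                steps = list(self.brain.replan(goal, self.memory))  # adapt the plan
        return self.memory.session.get("result")
```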

Review Questions

  1. What specific behaviors in the recruiting example demonstrate autonomy, and which steps require permission or supervision?
  2. Explain how planning differs from execution in the transcript’s agent model, and why planning is described as iterative.
  3. List the six characteristics of agentic AI and give one example from the HR scenario for at least three of them.

Key Points

  1. Agentic AI pursues a user’s goal with proactive planning and execution, unlike reactive chatbots that answer one prompt at a time.
  2. A goal-driven agent can run multi-step workflows end-to-end—drafting documents, posting jobs, monitoring results, shortlisting candidates, scheduling interviews, and triggering onboarding.
  3. Autonomy must be controlled using permission scoping, human-in-the-loop checkpoints, pause/override capabilities, and guardrails/policies to reduce risk and bias.
  4. Agentic systems are identified by six traits: autonomy, goal orientation, planning, reasoning, adaptability, and context awareness.
  5. Planning is modeled as generating multiple candidate plans, evaluating them with criteria (efficiency, tool availability, cost, risk, constraints), selecting one, and iterating if execution fails.
  6. A typical agentic architecture uses five components: brain (often an LLM), orchestrator, tools, memory, and supervisor for approvals and escalation.

Highlights

Agentic AI treats the user’s goal like a persistent objective and works toward it proactively, rather than waiting for each intermediate question.
In the HR example, weak application volume triggers strategy changes (JD revision and permission-based LinkedIn ads), showing adaptability tied to real feedback.
Autonomy is powerful but dangerous without controls—permission scopes, human-in-the-loop approvals, overrides, and guardrails are presented as essential safety mechanisms.
The transcript frames agentic behavior as a planning-and-execution loop that can iterate when steps become impossible or conditions shift.
