What is Agentic AI? | Agentic AI using LangGraph | Video 2 | CampusX
Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Agentic AI pursues a user’s goal with proactive planning and execution, unlike reactive chatbots that answer one prompt at a time.
Briefing
Agentic AI is a software paradigm built to take a user’s goal and run toward it with minimal human input—planning, executing steps, adapting when conditions change, and doing so proactively rather than waiting for prompts. In contrast to reactive generative AI chatbots (which answer one question at a time), an agentic system treats the goal like a persistent objective and independently carries out the multi-step work required to reach it.
The clearest way the transcript frames this difference is through two scenarios. In a reactive setup, a person asking about a Goa trip would get step-by-step answers only after each question—dates, transport, hotels, then activities—because the system responds to each prompt in isolation. In an agentic setup, the user provides the overall goal (travel to Goa between two dates), and the system proactively figures out the best route, recommends hotels, and builds an itinerary without repeatedly asking the user for each intermediate decision.
That goal-driven behavior becomes concrete in an HR recruiting example. A recruiter wants to hire a backend engineer with 2–4 years of experience for remote work. The agentic AI chatbot receives the goal, then creates a plan: draft a job description (JD), post it to job platforms (the transcript names LinkedIn), monitor applications, and adapt if the response is weak. If only two candidates apply, it revises the JD (e.g., changing “backend engineer” to “full stack engineer”) and seeks permission to run LinkedIn ads to boost reach. Once applications rise, it parses resumes, shortlists candidates by fit, schedules interviews by checking the recruiter’s calendar, drafts and sends interview emails, and—after the recruiter selects a candidate—generates an offer letter, sends it, monitors acceptance, and triggers onboarding steps like requesting IT access and provisioning a laptop. Human involvement appears mainly at permission points or high-risk moments.
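The plan–execute–adapt behavior in this scenario can be sketched as a minimal Python loop. The step names, the application threshold, and the `execute_step` callback are illustrative assumptions, not details from the transcript:

```python
# Hypothetical sketch of the recruiting agent's plan-execute-adapt loop.
# Step names and the min_applications threshold are illustrative.

def run_recruiting_agent(goal, execute_step, min_applications=10):
    """Pursue a hiring goal: plan, execute each step, adapt on weak results."""
    plan = ["draft_jd", "post_to_linkedin", "monitor_applications"]
    results = {}
    for step in plan:
        results[step] = execute_step(step, goal)
    # Adapt: if too few candidates applied, revise the JD and ask
    # permission to boost reach with ads (a human checkpoint).
    if results["monitor_applications"] < min_applications:
        results["revise_jd"] = execute_step("revise_jd", goal)
        results["request_ad_permission"] = execute_step("request_ad_permission", goal)
    return results
```

The point of the sketch is the shape of the control flow: the plan runs without per-step prompting, and the adaptation branch fires only when monitoring shows a weak response.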
From there, the transcript lays out six core characteristics used to identify whether a system is truly agentic: autonomy (making decisions and taking actions without step-by-step instructions), goal orientation (operating with a persistent objective), planning (breaking a goal into structured sequences and subgoals), reasoning (interpreting information, drawing conclusions, and choosing actions during both planning and execution), adaptability (modifying plans when tools fail, feedback changes, or goals shift), and context awareness (retaining relevant information across a multi-step workflow). It also emphasizes that autonomy must be controllable—through permission scoping, “human-in-the-loop” checkpoints, pause/override controls, and guardrails/policies—because uncontrolled agents can cause costly or biased outcomes.
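The "controllable autonomy" idea can be illustrated with a permission-scoped dispatcher: in-scope actions run directly, everything else hits a human-in-the-loop checkpoint. The action names and the `ask_human` approver callback are assumptions for illustration:

```python
# Minimal sketch of controllable autonomy: actions outside the agent's
# permission scope are routed to a human approver before execution.
# The scope contents are illustrative, not from the transcript.

ALLOWED_ACTIONS = {"draft_jd", "parse_resume", "schedule_interview"}

def gated_execute(action, do_action, ask_human):
    """Run an in-scope action directly; escalate everything else."""
    if action in ALLOWED_ACTIONS:
        return do_action(action)
    if ask_human(action):          # checkpoint: human can approve or veto
        return do_action(action)
    return None                    # vetoed: agent pauses this branch
```

High-risk actions (running paid ads, sending offer letters) would simply be left out of `ALLOWED_ACTIONS`, forcing them through the approval path.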
Finally, the transcript describes how agentic systems are typically built using five high-level components: a “brain” (often an LLM that interprets goals, supports planning/reasoning, selects tools, and handles communication), an orchestrator (sequencing and routing steps, handling retries and loops), tools (APIs and actions like posting jobs, parsing resumes, scheduling, and RAG knowledge access), memory (short-term session state and long-term goals/preferences/policies), and a supervisor (implementing human approvals and escalation/guardrail enforcement). The takeaway is that agentic AI isn’t just smarter chat—it’s an organized system for goal pursuit, with planning-and-execution loops, operational control, and stateful context across time.
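One way to sketch how the five components fit together, assuming plain-Python interfaces. The class names mirror the transcript's roles, but every method signature here is an assumption:

```python
# Hypothetical wiring of the five components: brain, orchestrator,
# tools, memory, supervisor. Interfaces are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Memory:                      # session state + long-term preferences
    state: dict = field(default_factory=dict)

@dataclass
class Supervisor:                  # human approvals / guardrail enforcement
    approved: set = field(default_factory=set)
    def allows(self, step):
        return step in self.approved

class Orchestrator:                # sequences steps, consults the supervisor
    def __init__(self, brain, tools, memory, supervisor):
        self.brain, self.tools = brain, tools
        self.memory, self.supervisor = memory, supervisor

    def run(self, goal):
        for step in self.brain(goal):          # the "brain" plans the steps
            if step in self.tools and self.supervisor.allows(step):
                self.memory.state[step] = self.tools[step](goal)
        return self.memory.state
```

Here the "brain" is just a callable that turns a goal into steps (in practice an LLM), tools are a dict of callables, and the supervisor silently drops unapproved steps; a real system would escalate them instead.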
Cornell Notes
Agentic AI is designed to take a user’s goal and pursue it with minimal human input. Instead of reacting to each question like a typical chatbot, it operates proactively: it plans a multi-step approach, executes those steps, reasons through decisions, adapts when results or tools change, and keeps track of relevant context across the workflow. The transcript uses an HR recruiting example where an agent drafts a job description, posts it to LinkedIn, monitors applications, revises strategy (including ads, with permission), shortlists candidates via resume parsing, schedules interviews via calendar tools, sends offer letters, and triggers onboarding after acceptance. To count as agentic, a system should show six traits: autonomy, goal orientation, planning, reasoning, adaptability, and context awareness. The transcript also stresses that autonomy must be controlled using permission scopes, human-in-the-loop checkpoints, overrides, and guardrails.
How does agentic AI differ from reactive generative AI chatbots?
What does the HR recruiting scenario reveal about an agent’s autonomy?
Why is “autonomy” not enough—what controls prevent an agent from going off the rails?
What are the six characteristics used to identify an agentic AI system?
How does planning work in the transcript’s model of agentic systems?
What are the five core components of an agentic AI application?
Review Questions
- What specific behaviors in the recruiting example demonstrate autonomy, and which steps require permission or supervision?
- Explain how planning differs from execution in the transcript’s agent model, and why planning is described as iterative.
- List the six characteristics of agentic AI and give one example from the HR scenario for at least three of them.
Key Points
1. Agentic AI pursues a user’s goal with proactive planning and execution, unlike reactive chatbots that answer one prompt at a time.
2. A goal-driven agent can run multi-step workflows end-to-end—drafting documents, posting jobs, monitoring results, shortlisting candidates, scheduling interviews, and triggering onboarding.
3. Autonomy must be controlled using permission scoping, human-in-the-loop checkpoints, pause/override capabilities, and guardrails/policies to reduce risk and bias.
4. Agentic systems are identified by six traits: autonomy, goal orientation, planning, reasoning, adaptability, and context awareness.
5. Planning is modeled as generating multiple candidate plans, evaluating them with criteria (efficiency, tool availability, cost, risk, constraints), selecting one, and iterating if execution fails.
6. A typical agentic architecture uses five components: brain (often an LLM), orchestrator, tools, memory, and supervisor for approvals and escalation.
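The planning model described above can be sketched as candidate-plan scoring with fallback on failure. The plan structure, scoring weights, and failure signal are illustrative assumptions:

```python
# Sketch of plan selection: score candidate plans on tool availability,
# cost, and risk; pick the best; fall back to the next plan on failure.
# The weights and the RuntimeError failure signal are illustrative.

def score(plan, tools_available):
    if not all(step in tools_available for step in plan["steps"]):
        return float("-inf")                   # unusable without its tools
    return -plan["cost"] - 2 * plan["risk"]    # cheaper, safer plans win

def select_and_run(candidates, tools_available, execute):
    ranked = sorted(candidates, key=lambda p: score(p, tools_available),
                    reverse=True)
    for plan in ranked:
        if score(plan, tools_available) == float("-inf"):
            break                              # no remaining plan is usable
        try:
            return plan["name"], execute(plan)
        except RuntimeError:
            continue                           # adapt: try the next-best plan
    return None, None
```

The `except`/`continue` branch is the "iterate if execution fails" step: rather than stopping, the agent moves to the next-ranked plan.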