
How to Build Super Effective AI AGENTS - FULL TUTORIAL | Cursor - OpenAI

All About AI · 5 min read

Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The pipeline triggers on a new entry in an incoming email JSON file and processes each request once using an ID/last-processed mechanism.

Briefing

A practical AI-agent pipeline for handling customer emails end-to-end is built in Cursor: it ingests an incoming message, extracts key fields with structured outputs, consults a knowledge base for availability and pricing, drafts a tailored reply, and sends it via Mailgun—while tracking processed emails to avoid duplicate work. The payoff is speed and consistency: the system can respond immediately to common scheduling requests, reducing manual back-and-forth while keeping replies grounded in stored data like appointment availability and price rules.

The workflow starts with a pipeline trigger tied to an “incoming email” JSON file. When a new email appears, the agent loads the email content into context and uses an OpenAI call (initially targeting the GPT-4o model in the prompt text, then switching to o1 and later Claude 3.5 Sonnet for implementation) to determine the customer’s intent. Structured outputs are used to extract specific fields—most importantly the sender’s email address, the requested date, and later the customer’s name—which are then saved into a data.json file so the reply can be addressed correctly.
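The extraction-and-save step could be sketched as follows. This is a minimal illustration, not the tutorial's actual code: the field names (`sender_email`, `requested_date`, `customer_name`, `intent`) and the schema layout are assumptions based on the fields described above, and the function expects the JSON string a structured-outputs call would return.

```python
import json

# JSON schema for an OpenAI structured-outputs request; the property
# names are assumed from the fields the tutorial extracts.
EXTRACTION_SCHEMA = {
    "type": "json_schema",
    "json_schema": {
        "name": "email_fields",
        "schema": {
            "type": "object",
            "properties": {
                "sender_email": {"type": "string"},
                "requested_date": {"type": "string"},
                "customer_name": {"type": "string"},
                "intent": {"type": "string"},
            },
            "required": ["sender_email", "requested_date",
                         "customer_name", "intent"],
            "additionalProperties": False,
        },
    },
}

def save_extracted_fields(model_output: str, path: str = "data.json") -> dict:
    """Parse the model's structured output and persist it for later steps.

    Failing loudly on missing required fields mirrors the debugging in the
    video, where a missed sender address broke the reply step.
    """
    fields = json.loads(model_output)
    missing = [k for k in ("sender_email", "requested_date") if not fields.get(k)]
    if missing:
        raise ValueError(f"extraction missed required fields: {missing}")
    with open(path, "w") as f:
        json.dump(fields, f, indent=2)
    return fields
```

Validating required keys before writing data.json is what turns a silent extraction miss into an immediate, debuggable error.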

Next comes the agent’s action plan. After intent and extracted fields are ready, the agent generates a step-by-step plan that selects tools needed to fulfill the request. In this tutorial setup, the tools include reading a knowledge base (simulated with a markdown file for the sample) and reading a schedule.json file for appointment availability. The plan also includes a “send email” tool that uses Mailgun to deliver the response. Tool outputs are collected back into context, and the agent generates an “educated” customer response designed to reduce hallucinations by grounding the message in retrieved schedule and pricing information.
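The plan-then-execute loop described above can be reduced to a small dispatcher: each plan step names a tool, the tool runs, and its output is collected back into context for the reply prompt. The step/tool shape below is an assumption, not the tutorial's actual data format.

```python
def run_plan(plan, tools):
    """Execute each plan step and gather tool outputs into a context dict.

    `plan` is a list of steps like {"tool": "read_schedule", "args": {...}};
    `tools` maps tool names to callables. Both shapes are assumptions
    sketched from the tutorial's description.
    """
    context = {}
    for step in plan:
        name = step["tool"]
        # Collect each tool's output back into context so the final
        # customer response can be grounded in retrieved data.
        context[name] = tools[name](**step.get("args", {}))
    return context
```

Keeping tools behind a name-to-callable registry means the agent's plan stays plain data, so a new tool (like the Mailgun sender) is just another dictionary entry.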

The build process is iterative and includes real debugging. The first run fails with a 401 Unauthorized error when sending via Mailgun, traced to an incorrect Mailgun API key. Another early issue is missing extracted fields—first the email address, then later the customer name—both fixed by tightening the extraction instructions in the system prompt (specifically extracting from the “from” field and adding logic to pull a name from a signature). A final formatting tweak ensures the email signs off properly (“Best regards” followed by the agent name).

Once the pipeline works for one request, it’s tested with a second incoming email. The system relies on a “last processed ID” mechanism: previously processed emails are skipped, and only new requests trigger the pipeline. In the example, a May 12 request receives a “no availability” response plus alternate-date options and pricing, while a May 11 request is handled separately with updated extracted details (including discount questions).

The tutorial closes by positioning this as a reusable business template: for standard customer inquiries—especially scheduling and pricing—an agentic pipeline can deliver immediate first responses, with the knowledge base and tool set acting as the main boundary on what the system can do. LLM-as-judge (for human escalation when quality is uncertain) is mentioned as a future enhancement, but not implemented in this run.

Cornell Notes

The pipeline processes customer emails using an agentic workflow: it watches an incoming email JSON file, extracts intent and structured fields (sender email, requested date, and later customer name) into data.json, then generates a step-by-step plan to call tools. Those tools read a knowledge base (simulated) and a schedule.json for availability, and use a Mailgun “send email” function to reply with grounded pricing and scheduling options. The system tracks processed emails via an ID so repeated runs don’t resend the same response. Debugging focused on two practical issues: Mailgun 401 Unauthorized errors from a bad API key and missing extracted fields from the prompt’s extraction instructions. The result is a consistent, fast email responder for common scheduling requests.

How does the system decide what to do after a new email arrives?

It first loads the incoming email content into context and runs an OpenAI call to classify the customer’s intent (e.g., scheduling an appointment). Then it uses structured outputs to extract fields needed for execution—initially the sender email and requested date, later also the customer name. With intent and extracted fields in hand, it generates a step-by-step plan that explicitly lists which tools to call (knowledge base/schedule reads and the Mailgun send tool) and in what order.

Why are structured outputs and saving to data.json central to the pipeline?

Structured outputs make the agent’s results deterministic enough to drive downstream actions. The tutorial repeatedly fixes failures where the reply couldn’t be addressed because the extraction missed required fields. Once the prompt is adjusted to extract the sender address from the “from” field and to capture the customer name from a signature, those values land in data.json and can be reliably used when composing the email and selecting the recipient.

What role do tools play in reducing hallucinations?

Tools ground the response in retrieved facts. The plan calls a knowledge base reader (company secrets/pricing rules, simulated with a markdown file) and a schedule reader (schedule.json) to check whether the requested date has available slots. The agent then composes the reply using those tool outputs, so pricing and availability claims come from stored data rather than free-form generation.
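A schedule-reader tool along these lines would cover the availability check. The layout of schedule.json (ISO dates mapping to lists of open slots) is an assumption; the file in the video may be structured differently.

```python
import json

def check_availability(requested_date, schedule_path="schedule.json"):
    """Look up free slots for a date; an empty list means fully booked.

    Assumes schedule.json maps ISO date strings to lists of open time
    slots, e.g. {"2025-05-11": ["09:00", "14:00"], "2025-05-12": []}.
    """
    with open(schedule_path) as f:
        schedule = json.load(f)
    # A date absent from the file is treated as having no open slots,
    # which triggers the "no availability plus alternate dates" reply path.
    return schedule.get(requested_date, [])
```

Because the reply is composed from this return value, a booked May 12 naturally produces the "no availability" response with alternatives rather than a hallucinated confirmation.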

What caused the initial email-sending failures, and how were they resolved?

The first send attempt returned a 401 Unauthorized error from Mailgun. The fix was updating the Mailgun API key in the environment variables. After correcting credentials, the pipeline successfully sent emails and marked the incoming email as processed.
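For context, a send step against Mailgun's messages endpoint looks roughly like this. The domain and sender address are placeholders, and the environment-variable name is an assumption; what matters is that the API key travels as HTTP basic auth, which is why a wrong key comes back as 401 Unauthorized.

```python
import os

def build_mailgun_request(domain, to_addr, subject, body):
    """Assemble the Mailgun messages API call.

    A missing or wrong MAILGUN_API_KEY is exactly what surfaces as the
    401 Unauthorized seen in the tutorial.
    """
    api_key = os.environ.get("MAILGUN_API_KEY")
    if not api_key:
        raise RuntimeError("MAILGUN_API_KEY is not set")
    return {
        "url": f"https://api.mailgun.net/v3/{domain}/messages",
        "auth": ("api", api_key),  # Mailgun uses basic auth with user "api"
        "data": {
            "from": f"AI Agent <agent@{domain}>",
            "to": to_addr,
            "subject": subject,
            "text": body,
        },
    }

def send_email(domain, to_addr, subject, body):
    import requests  # kept local so the request builder stays dependency-free
    resp = requests.post(**build_mailgun_request(domain, to_addr, subject, body))
    resp.raise_for_status()  # a bad API key raises here as 401 Unauthorized
    return resp
```

Separating request construction from the network call also makes the credential check testable without actually sending mail.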

How does the system avoid reprocessing the same email?

It uses an ID-based mechanism. After a run, the incoming email record is marked processed (the tutorial references a “last processed ID”). When a new email is added, the pipeline skips earlier IDs and only triggers on the new request, preventing duplicate replies.
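The skip-already-processed logic amounts to a comparison against a stored marker. This sketch assumes monotonically increasing integer IDs, which is implied but not shown in the tutorial.

```python
def filter_unprocessed(emails, last_processed_id):
    """Keep only emails newer than the last processed ID.

    Assumes IDs are increasing integers; earlier entries are skipped so
    repeated runs never resend a reply.
    """
    return [e for e in emails if e["id"] > last_processed_id]

def mark_processed(state, email):
    """Advance the stored last-processed marker after a successful send."""
    state["last_processed_id"] = max(state.get("last_processed_id", 0), email["id"])
    return state
```

In the pipeline, `state` would be persisted (e.g. back into a JSON file) between runs so the trigger only fires on genuinely new entries.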

What improvements were made to make replies more personalized and complete?

Two prompt-driven refinements: (1) extracting the customer name so the greeting can be “Hello [Name]” instead of “Dear customer,” and (2) adding a correct sign-off format so the email ends with “Best regards” and the agent name. These changes were validated by rerunning the pipeline and checking the received email content.

Review Questions

  1. What specific fields must be extracted into data.json for the email reply to work, and what happens when one is missing?
  2. How do schedule.json and the knowledge base (company secrets) influence the content of the generated customer response?
  3. Where does the 401 Unauthorized error originate in this setup, and what configuration change resolves it?

Key Points

  1. The pipeline triggers on a new entry in an incoming email JSON file and processes each request once using an ID/last-processed mechanism.

  2. OpenAI is used to classify intent and extract structured fields via structured outputs, which are saved into data.json for later steps.

  3. A step-by-step plan generated after extraction determines which tools to call, including knowledge-base reads, schedule availability checks, and Mailgun email sending.

  4. Grounding responses in tool outputs (schedule and pricing rules) helps keep replies consistent and reduces unsupported claims.

  5. Mailgun integration requires correct environment configuration; a wrong API key produces a 401 Unauthorized error.

  6. Prompt refinements are necessary to reliably extract the sender email from the “from” field and to capture a customer name from signatures for personalization.

  7. Personalization and formatting (greetings and sign-offs) can be tuned by adjusting the system prompt and then re-running the pipeline to verify the received email.

Highlights

The system turns scheduling emails into a deterministic workflow: intent classification → structured extraction → tool-based availability/pricing lookup → Mailgun send.
Two early breakpoints—missing extracted fields and Mailgun 401 Unauthorized—were fixed by tightening extraction instructions and correcting the Mailgun API key.
Processed-email IDs prevent duplicate replies, enabling repeated runs while only handling genuinely new customer requests.
The tutorial demonstrates how tool grounding (schedule.json + pricing knowledge base) can make generated responses more reliable than free-form drafting.
Personalization improved through prompt changes that extract the customer’s name and enforce a consistent sign-off format.

Mentioned

  • LLM
  • RAG
  • GPT-4o
  • o1
  • Claude 3.5 Sonnet