AI Expert: Anyone can build Agents, here’s how

David Ondrej · 5 min read

Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start with automations and the workflow you already understand, then add agent capabilities like content generation and decision-making.

Briefing

AI agents are best approached as a step-by-step automation workflow—start with the process you already do, break it into small chunks, and only then add “cognition” like content generation and real-time decision-making. João Moura, founder of CrewAI, argues that beginners get better results when they stop treating “agents” as a mysterious leap and instead think in terms of automations they can trust: generate text, decide between options (e.g., whether to search or scrape), and react to data without needing to hard-code every if/then rule.
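
To make the search-vs.-scrape decision concrete, here is a minimal sketch using CrewAI’s open-source Python API (CrewAI being the framework discussed in the interview). The agent’s role, goal, and task strings are illustrative assumptions, not details from the video; the two tools stand in for the choice between searching online and scraping a page.

```python
# A minimal sketch of "automation plus cognition" with CrewAI.
# Role/goal/task strings are illustrative assumptions.
# Requires OPENAI_API_KEY and SERPER_API_KEY environment variables.
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

# Give the agent both tools; the LLM decides at runtime whether to
# search the web or scrape a specific page -- the "decision" capability.
researcher = Agent(
    role="Researcher",
    goal="Collect accurate, current facts about a topic",
    backstory="You verify claims before reporting them.",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],
)

research_task = Task(
    description="Find three recent developments in open-source AI agents.",
    expected_output="A bullet list of three developments with source URLs.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task])
print(crew.kickoff())
```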

A practical starting point is to identify repetitive, low-stakes tasks and automate them incrementally. Moura describes how his first “career agent” was LinkedIn posting: he wanted outcomes like recognition and opportunities but lacked time to craft posts manually. The key wasn’t to build something huge immediately; it was to start with something easy to automate and expand from there. He also emphasizes that the process should be mapped manually first—research, note-taking, and personal experience—before translating that workflow into an agent. That mapping clarifies what the system needs to do and makes the eventual agent design far more straightforward.

The biggest technical and product lesson is that agent performance depends on prompt quality, task clarity, and decomposition. Moura warns that large language models have limited context windows and can lose track of “needle in the haystack” information. Breaking work into smaller steps reduces confusion and enables multiple agents to cross-check each other, producing more reliable outcomes than a single agent trying to juggle many tools at once. Prompt engineering matters here: CrewAI’s design choices aim to make good prompting easier by forcing clearer task descriptions and explicit expected outputs.

Moura also frames multi-agent systems as powerful but risky if handled carelessly. Multiple agents can amplify both improvements and harm—similar to iterative refinement in chat, where a few back-and-forth messages can “click” into a better result. Without careful constraints, ambiguity can introduce randomness and hallucinations, especially when several agents interact over time.

On adoption, Moura says mass use is less blocked by model capability than by education and tooling. He compares the current moment to blockchain’s early era: the technology exists, but nontechnical people need guidance and a friendlier on-ramp. He argues that UI and learning pathways will be decisive, and CrewAI is building toward that with a UI and an enterprise offering called CrewAI Plus that can turn developed crews into deployable APIs with minimal setup.
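
The interview doesn’t detail how CrewAI Plus performs that deployment, so the sketch below only illustrates the general idea of exposing a crew behind an HTTP endpoint, using FastAPI. The endpoint, the build_crew helper, and the request shape are all hypothetical, not the product’s actual mechanics.

```python
# Illustrative only: a crew wrapped behind an HTTP endpoint with FastAPI.
# This is NOT how CrewAI Plus works internally; it sketches the concept
# of turning a developed crew into a callable API.
from crewai import Agent, Task, Crew
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RunRequest(BaseModel):
    topic: str

def build_crew(topic: str) -> Crew:
    # Hypothetical single-agent crew, parameterized by the request topic.
    writer = Agent(
        role="Writer",
        goal=f"Write a short post about {topic}",
        backstory="A concise technical writer.",
    )
    task = Task(
        description=f"Draft a 100-word post about {topic}.",
        expected_output="A 100-word post.",
        agent=writer,
    )
    return Crew(agents=[writer], tasks=[task])

@app.post("/run")
def run(req: RunRequest):
    result = build_crew(req.topic).kickoff()
    return {"output": str(result)}
```

Run with `uvicorn app:app` (assuming the file is named app.py) and POST a JSON body like {"topic": "AI agents"} to /run.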

Finally, the conversation ties agent building to long-term career resilience. Moura rejects the idea that people should wait for AGI and do nothing in the meantime. He argues that learning these tools is a “lever” individuals can control, even if they can’t control broader economic shifts. He predicts AGI could arrive within about five years (optimistically around 2029) and says the best preparation is to become familiar with AI tools now—because even if AI automates more programming work, understanding concepts and being able to communicate requirements will remain valuable. CrewAI’s broader goal is to make agent building and deployment feel as routine as using a platform—so more people can build, run, test, and integrate agents into real workflows.

Cornell Notes

João Moura recommends treating “AI agents” as advanced automations rather than a mysterious leap. Beginners should start by mapping and doing a task manually, then decomposing it into small steps and only afterward adding agent capabilities like content generation and real-time decisions (e.g., whether to search or scrape). Clear prompts and explicit expected outputs are crucial because ambiguity increases randomness and hallucinations—especially when multiple agents interact. Moura also argues that mass adoption depends heavily on education and better tooling, not just better models, and CrewAI is pushing toward a UI and easier deployment via CrewAI Plus. Learning these systems now is framed as career leverage as AGI approaches.

Why does Moura advise starting with “process” or “automation” instead of jumping straight into an agent?

He says the term “AI agents” can feel intimidating, so the entry point should be automations people already want. Automations become more powerful when they add two capabilities: (1) content generation (e.g., “write me an email” or “write me a poem”) and (2) a form of cognition/decision-making (choosing between actions like searching online vs. scraping a site). That mindset shift turns agent building into something incremental: automate the small repetitive work first, then evolve it into agent behavior.
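
As a sketch of the first capability, the snippet below drops content generation into an ordinary automation. The OpenAI client and the write_email helper are assumptions chosen for brevity; the interview doesn’t prescribe a provider or API.

```python
# Content generation inside an ordinary automation -- the "write me an
# email" capability. The provider and helper name are assumptions.
# Requires an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def write_email(recipient: str, purpose: str) -> str:
    """Generate an email draft as one step of a larger automation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write a short, polite email to {recipient} about {purpose}.",
        }],
    )
    return response.choices[0].message.content

print(write_email("a hiring manager", "scheduling a portfolio review"))
```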

What does “map it manually first” mean, and how does it improve agent results?

Before building an agent, Moura manually performs the task—researching, taking notes, reviewing what others do, and using personal experience—while explicitly mapping the workflow. Once the process is clear, translating it into an agent (or multiple agents) becomes easier because the system’s steps and decision points are already understood.
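
One way that translation can look, as a hedged sketch in CrewAI: each manually mapped step becomes a sequential task. The research-draft-review breakdown echoes the LinkedIn-post example, but the step wording and word counts are invented for illustration.

```python
# A manually mapped workflow (research -> draft -> review) translated
# into sequential CrewAI tasks. Step text is illustrative, not from
# the interview.
from crewai import Agent, Task, Crew, Process

author = Agent(
    role="Author",
    goal="Turn research into a polished LinkedIn post",
    backstory="Writes clear, practical posts for a technical audience.",
)

steps = [
    ("Research: collect two insights about building AI agents.",
     "Two insights, one sentence each."),
    ("Draft: write a 120-word LinkedIn post using the insights.",
     "A 120-word post."),
    ("Review: tighten wording and fix any errors in the draft.",
     "The final, corrected post."),
]

tasks = [Task(description=d, expected_output=o, agent=author) for d, o in steps]

crew = Crew(agents=[author], tasks=tasks, process=Process.sequential)
print(crew.kickoff())
```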

Why is breaking tasks into smaller steps so important for multi-agent systems?

Large language models can lose track in long or cluttered contexts due to limited context windows. Moura describes the “needle in the haystack” problem: if key information sits in the middle, the model may drift. Decomposing work into smaller chunks reduces confusion and lets multiple agents “bounce” off each other—checking and correcting—often yielding better outcomes than one agent trying to use many tools at once.
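
A minimal sketch of that cross-checking pattern, again assuming CrewAI: one agent produces, a second verifies. The roles and task text are assumptions, not the interview’s exact setup.

```python
# Two agents "bouncing" off each other: a writer produces a summary
# and a fact checker reviews it. Roles and strings are illustrative.
from crewai import Agent, Task, Crew, Process

writer = Agent(
    role="Writer",
    goal="Summarize a topic accurately",
    backstory="Prefers short, factual summaries.",
)
checker = Agent(
    role="Fact Checker",
    goal="Catch unsupported or vague claims",
    backstory="A skeptical reviewer who flags anything unverified.",
)

summarize = Task(
    description="Summarize how beginners should start building AI agents, in five bullets.",
    expected_output="Five factual bullets.",
    agent=writer,
)
verify = Task(
    description="Review the summary and rewrite any unsupported or vague bullet.",
    expected_output="A corrected five-bullet summary.",
    agent=checker,
)

crew = Crew(agents=[writer, checker], tasks=[summarize, verify],
            process=Process.sequential)
print(crew.kickoff())
```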

How does prompt clarity affect reliability when multiple agents are involved?

Moura stresses that vague or ambiguous instructions invite randomness. With multiple agents, that randomness can compound over time, increasing the chance of hallucinations or harmful outputs. The CrewAI approach includes design choices that push users to define tasks clearly and specify expected outputs, aiming to reduce the ambiguity that causes failures.
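
In CrewAI’s Python API, a Task is defined by a description and an expected_output, which is where this pressure toward clarity shows up. The contrast below is a sketch; both task definitions are invented for illustration.

```python
# Vague vs. explicit task definitions in CrewAI. The expected_output
# field forces the deliverable to be spelled out. Strings are
# illustrative assumptions.
from crewai import Agent, Task

analyst = Agent(
    role="Analyst",
    goal="Answer questions precisely",
    backstory="Terse and exact.",
)

# Too vague: the model must guess what "look into" should produce.
vague = Task(
    description="Look into agent frameworks.",
    expected_output="Some notes.",
    agent=analyst,
)

# Explicit: the task and the deliverable are pinned down.
explicit = Task(
    description=(
        "Compare two open-source agent frameworks on setup effort "
        "and multi-agent support, in under 150 words."
    ),
    expected_output="A comparison under 150 words ending with one recommendation.",
    agent=analyst,
)
```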

What’s the biggest barrier to mainstream agent adoption, according to Moura?

He argues it’s education and time, not just model quality. Many people talk about agents, fewer read about them, fewer still use them, and only a small fraction deploy them in production. He also says a better UI would help nontechnical users try agents sooner—similar to how ChatGPT’s chat interface drove adoption even though comparable models had been available earlier.

How does Moura respond to the fear that agents will replace jobs?

He says individuals can’t control whether automation eliminates roles, but they can control how prepared they are. Learning the tools early is framed as career leverage: by the time job shifts happen, people who understand the systems can find space in the market. He also argues that doing nothing is riskier than learning, since acquiring skills generally improves options.

Review Questions

  1. What practical steps should a beginner take before building an agent, and why does that sequence matter?
  2. How do context limits and task decomposition influence agent reliability?
  3. What combination of factors does Moura cite for slow adoption, and what role does UI play?

Key Points

  1. Start with automations and the workflow you already understand, then add agent capabilities like content generation and decision-making.

  2. Map the task manually first to identify steps and decision points before translating them into an agent design.

  3. Decompose work into smaller chunks to avoid context-window confusion and to enable multi-agent cross-checking.

  4. Write prompts with explicit expected outputs; ambiguity increases randomness and can worsen outcomes when multiple agents interact.

  5. Treat multi-agent teams as powerful but requiring constraints, since they can both improve results and amplify errors.

  6. Mainstream adoption is constrained by education and usability; a UI and clearer onboarding can matter as much as model improvements.

  7. Career preparation should focus on learning these tools now, since individuals can’t control macro job shifts but can control readiness and leverage.

Highlights

Moura’s core advice: don’t start by building an agent—start by building the process as an automation, then add cognition and generation.
Breaking tasks into smaller steps reduces “needle in the haystack” failures and lets multiple agents validate each other for better outcomes.
CrewAI’s product philosophy emphasizes clearer task definitions and expected outputs to reduce ambiguity-driven randomness.
Adoption bottlenecks are largely educational and tooling-related; UI is positioned as the missing ingredient for mass use.
Learning agents now is framed as career leverage, not a gamble against AGI—skills gained today remain valuable even as automation grows.

Mentioned