
Build AI Agent Application with Agent Development Kit (ADK) | Get Started with Google's Agent SDK

Venelin Valkov · 6 min read

Based on Venelin Valkov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ADK structures agentic apps around a root/coordinator agent that orchestrates transfers to specialized sub-agents.

Briefing

Google’s Agent Development Kit (ADK) is positioned as a practical way to build agentic applications with a clear workflow structure, built-in deployment options, and evaluation tooling—demonstrated through a working “diet coach” agent that turns a user’s ingredients and dietary goals into recipe recommendations with computed nutrition macros.

In the live demo, a web chat session (“chef’s kiss”) collects two inputs: the ingredients available in the kitchen and the dietary targets (for example, high protein and low calories). The conversation then hands off to a coordinator agent that routes the request to specialized sub-agents. One agent extracts the user-provided ingredients and goals; another searches a local recipe set using a custom tool (“find recipes”) that allows up to two missing ingredients. The system returns a shortlist of matching recipes, such as “grilled cheese,” “chicken quesadilla,” and “apple slices with peanut butter,” and then triggers a second custom tool (“calculate recipe macros”) for each candidate.

Those tool calls compute nutrition totals (calories, protein, carbs, and fat) by scaling ingredient-level macro data stored in JSON files. With the computed macros in hand, the recommender agent selects the best fit for the stated goals. In the example, “chicken quesadilla” stands out with roughly 50 g of protein, aligning with the high-protein, low-calorie preference. The UI also exposes the internal execution trace: agent-to-agent transfers, tool invocations, and intermediate results, making it easier to debug what the agent is doing rather than treating it as a black box.

Beyond the demo, ADK is presented as more than a prompt-and-chat wrapper. The library supports agentic workflows with sequential, parallel, and looping execution patterns, and it encourages composing multiple specialized agents in a hierarchy. Google’s tooling also includes a Google Search tool (not used in the demo) and deployment pathways such as Docker and an intended integration with Vertex AI. A key differentiator highlighted is built-in evaluation support: instead of relying on online demos, teams can create evaluation sets and inspect how agentic applications behave under realistic conditions.

The walkthrough then shifts from UI to code. The project uses uv for dependency management and installs ADK via pip install google-adk. It loads two JSON “databases”: one with ingredient macros per 100 g and another with recipe ingredient lists and gram weights. The agent logic is split into three roles: a user info agent (collects ingredients and goals), a recipe finder/recommender agent (calls tools to find recipes and compute macros), and a coordinator/root agent that orchestrates transfers. Tool functions are plain Python functions with typed arguments and docstrings that ADK parses into tool definitions. Running adk web starts a local server on port 8000, and the execution trace in the UI mirrors the underlying agent calls. A minimal sketch of that wiring appears below.
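
A minimal sketch of the three-agent wiring, assuming ADK’s standard google.adk.agents.Agent API; the agent names, instructions, and stubbed tools here are illustrative, not the tutorial’s exact code:

```python
from google.adk.agents import Agent

def find_recipes(available_ingredients: list[str], max_missing: int = 2) -> list[dict]:
    """Return recipes matching the user's ingredients, allowing up to
    max_missing missing ingredients. (Stub; sketched in a later answer.)"""
    ...

def calculate_recipe_macros(recipe_name: str) -> dict:
    """Return total calories, protein, carbs, and fat for one recipe.
    (Stub; sketched in a later answer.)"""
    ...

user_info_agent = Agent(
    name="user_info_agent",
    model="gemini-2.0-flash",
    instruction="Collect the user's available ingredients and dietary goals.",
)

recipe_recommender = Agent(
    name="recipe_recommender",
    model="gemini-2.0-flash",
    instruction=(
        "Call find_recipes with the user's ingredients, then "
        "calculate_recipe_macros for each candidate, and recommend the "
        "recipe that best matches the dietary goals."
    ),
    tools=[find_recipes, calculate_recipe_macros],
)

# The root/coordinator agent; sub_agents enables agent-to-agent transfers.
root_agent = Agent(
    name="diet_coach_coordinator",
    model="gemini-2.0-flash",
    instruction="Route between collecting user info and recommending recipes.",
    sub_agents=[user_info_agent, recipe_recommender],
)
```

With a layout like this in an agent package, running adk web serves the chat UI and trace view locally (port 8000 by default).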

Finally, the tutorial notes practical compatibility issues: attempts to use other models via Ollama ran into tool-calling failures, while the flow worked with Gemini 2.0 Flash. Overall, ADK is framed as a structured, debuggable foundation for agentic apps—especially when paired with Gemini models—while still leaving open questions about how smoothly it supports non-Google/open-model backends at the time of release.

Cornell Notes

Google’s Agent Development Kit (ADK) provides a structured way to build agentic applications with multiple specialized agents, tool-calling, and workflow orchestration. The tutorial demonstrates a “diet coach” agent: it collects available ingredients and dietary goals, finds matching recipes (allowing up to two missing ingredients), computes macros (calories, protein, carbs, fat) from ingredient-level data, and recommends the best option for the user’s targets. ADK’s web UI exposes internal agent transfers and tool calls, making debugging and traceability straightforward. The code uses JSON files as lightweight data sources and defines tools as typed Python functions whose docstrings are parsed by ADK. The author reports that the end-to-end flow worked with Gemini 2.0 Flash, while some Ollama-based model attempts failed during tool calling.

How does the diet coach agent turn user inputs into a ranked recipe recommendation?

It starts by collecting two inputs: a list of available ingredients and dietary goals (e.g., high protein, low calories). A coordinator/root agent routes the request to a user info agent for extraction, then to a recipe finder/recommender agent. That agent calls a custom tool, “find recipes,” which scans a recipe database and returns matches while permitting up to two missing ingredients. For each candidate recipe, it calls “calculate recipe macros,” which scales ingredient macros (from an ingredient-per-100g JSON dataset) by the grams specified in each recipe. With macros computed for each option, the recommender agent selects the recipe that best matches the dietary goals.
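
A sketch of what the “find recipes” tool could look like, assuming a hypothetical recipes.json that maps each recipe name to its ingredient gram weights (the tutorial’s exact schema may differ):

```python
import json

# Hypothetical schema: {"chicken quesadilla": {"tortilla": 60, "chicken breast": 150}, ...}
with open("recipes.json") as f:
    RECIPES = json.load(f)

def find_recipes(available_ingredients: list[str], max_missing: int = 2) -> list[dict]:
    """Find recipes the user can (almost) make.

    Args:
        available_ingredients: Ingredient names the user has on hand.
        max_missing: How many recipe ingredients the user is allowed to lack.

    Returns:
        One dict per matching recipe, with its name and missing ingredients.
    """
    have = {name.lower().strip() for name in available_ingredients}
    matches = []
    for recipe_name, ingredient_grams in RECIPES.items():
        missing = [i for i in ingredient_grams if i.lower() not in have]
        if len(missing) <= max_missing:
            matches.append({"recipe": recipe_name, "missing_ingredients": missing})
    return matches
```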

What role do custom tools play in ADK, and how are they defined?

Custom tools are the mechanism that lets agents perform deterministic operations instead of relying only on text generation. In the example, “find recipes” filters recipes based on ingredient overlap and a max-missing parameter, while “calculate recipe macros” computes calories and macronutrients by scaling ingredient-level data. Tools are implemented as plain Python functions with typed arguments and return values; ADK parses the function docstring to understand the tool’s purpose and interface. The tool-calling results then feed back into the agent’s final recommendation.
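
As an illustration of that convention (a hedged sketch: ADK reads the signature plus a Google-style docstring to build the tool schema, and passing a bare function in an agent’s tools list is assumed equivalent to the explicit FunctionTool wrapper shown last):

```python
from google.adk.tools import FunctionTool

def calculate_recipe_macros(recipe_name: str) -> dict:
    """Compute nutrition totals for a single recipe.

    Args:
        recipe_name: Name of the recipe to analyze.

    Returns:
        Total calories, protein, carbs, and fat for the recipe.
    """
    ...  # implementation sketched in a later answer

# Equivalent to listing the bare function in Agent(tools=[...]):
macro_tool = FunctionTool(func=calculate_recipe_macros)
```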

Why does the UI trace matter for building agentic systems?

The ADK web interface shows agent-to-agent transfers and the exact tool calls made during execution. In the demo, the coordinator agent hands off to the user info agent, then transfers to the recipe recommender agent. The trace also displays the inputs passed to “find recipes,” the list of returned recipes, and the subsequent “calculate recipe macros” calls (including per-recipe macro outputs). This transparency helps developers verify that the agent is using the intended tools and data rather than hallucinating intermediate steps.

What workflow patterns does ADK support beyond simple chat?

ADK is described as supporting agentic workflows with sequential, parallel, and looping execution. Sequential execution chains steps so multiple agents can perform required actions in order. Parallel execution allows multiple operations or tool calls to run concurrently. Looping execution enables iterative refinement or repeated passes over agents. The tutorial also emphasizes composing specialized agents in a hierarchy, with a root/coordinator agent orchestrating the overall flow.
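
A hedged sketch of how those patterns look in code, assuming ADK’s SequentialAgent, ParallelAgent, and LoopAgent workflow classes (the step agents and instructions here are illustrative):

```python
from google.adk.agents import Agent, LoopAgent, ParallelAgent, SequentialAgent

def make_step(name: str, instruction: str) -> Agent:
    """Build a simple LLM step agent for the examples below."""
    return Agent(name=name, model="gemini-2.0-flash", instruction=instruction)

# Sequential: steps run in order, each building on the previous output.
pipeline = SequentialAgent(
    name="pipeline",
    sub_agents=[make_step("extract", "Extract ingredients and goals."),
                make_step("find", "Find candidate recipes.")],
)

# Parallel: independent branches run concurrently.
fan_out = ParallelAgent(
    name="fan_out",
    sub_agents=[make_step("macros_a", "Compute macros for batch A."),
                make_step("macros_b", "Compute macros for batch B.")],
)

# Loop: sub-agents repeat until a stop signal or max_iterations is reached.
refine = LoopAgent(name="refine", sub_agents=[pipeline], max_iterations=3)
```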

How does the project compute nutrition macros from ingredient and recipe data?

The project uses two JSON datasets. One JSON stores ingredient macros per 100 g (protein, carbs, fat, calories). The other JSON lists recipes with ingredient names and gram weights. When “calculate recipe macros” runs, it iterates through each recipe’s ingredients, scales each ingredient’s per-100g macros by (ingredient_grams / 100), and sums the scaled values to produce totals for the recipe. If macro data for a required ingredient is missing, the tool returns an unknown value and prints a warning.
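
A sketch of that scaling logic in plain Python, assuming hypothetical file names and a simple schema for both datasets (the tutorial’s exact structure may differ):

```python
import json

# Hypothetical schemas:
#   ingredients.json: {"chicken breast": {"calories": 165, "protein": 31, "carbs": 0, "fat": 3.6}, ...}
#   recipes.json:     {"chicken quesadilla": {"tortilla": 60, "chicken breast": 150}, ...}
with open("ingredients.json") as f:
    MACROS_PER_100G = json.load(f)
with open("recipes.json") as f:
    RECIPES = json.load(f)

def calculate_recipe_macros(recipe_name: str) -> dict:
    """Sum calories/protein/carbs/fat for a recipe by scaling per-100g data."""
    totals = {"calories": 0.0, "protein": 0.0, "carbs": 0.0, "fat": 0.0}
    for ingredient, grams in RECIPES[recipe_name].items():
        macros = MACROS_PER_100G.get(ingredient)
        if macros is None:
            # Mirror the described behavior: warn and report an unknown result.
            print(f"warning: no macro data for {ingredient!r}")
            return {"recipe": recipe_name, "macros": "unknown"}
        scale = grams / 100  # per-100g values scaled to the recipe's grams
        for key in totals:
            totals[key] += macros[key] * scale
    return {"recipe": recipe_name, **{k: round(v, 1) for k, v in totals.items()}}
```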

What deployment and evaluation capabilities are highlighted for ADK?

The tutorial highlights deployment options such as Docker and an intended path to running agents on Vertex AI. It also stresses evaluation tooling: real-world agentic apps require evaluation sets to measure realistic behavior, not just polished demos. ADK’s evaluation tools are presented as a way to inspect what happens under the hood when agentic applications run, helping teams test and improve reliability before deployment.

Review Questions

  1. What are the three agents in the example, and what distinct responsibilities does each one handle?
  2. How do the “find recipes” and “calculate recipe macros” tools interact to produce a final recommendation?
  3. What kinds of workflow execution patterns (sequential/parallel/looping) does ADK support, and why are they useful for agentic applications?

Key Points

  1. ADK structures agentic apps around a root/coordinator agent that orchestrates transfers to specialized sub-agents.
  2. A practical diet coach example uses tool-calling to deterministically find recipes and compute nutrition macros from JSON datasets.
  3. Recipe matching allows a configurable number of missing ingredients (max missing), enabling recommendations even with incomplete kitchens.
  4. The ADK web UI provides an execution trace showing agent transfers and tool calls, which supports debugging and transparency.
  5. ADK supports workflow patterns such as sequential, parallel, and looping execution, plus hierarchical composition of multiple agents.
  6. Built-in evaluation tooling is emphasized as essential for realistic testing beyond demo scenarios.
  7. The end-to-end tutorial flow worked with Gemini 2.0 Flash, while some Ollama-based model attempts failed during tool calling.

Highlights

The demo’s recommendation pipeline is fully tool-driven: recipe filtering happens in “find recipes,” and nutrition totals come from “calculate recipe macros” using ingredient-per-100g scaling.
ADK’s UI exposes internal execution—agent-to-agent handoffs and each tool invocation—so developers can verify intermediate reasoning steps.
Workflow orchestration goes beyond chat by supporting sequential, parallel, and looping agent execution patterns.
Evaluation tooling is treated as a first-class need for real deployments, not an afterthought.
