
Google Launches an Agent SDK - Agent Development Kit

Sam Witteveen · 4 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Google’s Agent Development Kit is positioned as cloud-deployable agent infrastructure, with evaluation built in rather than added later.

Briefing

Google has launched an “Agent Development Kit” (Agent SDK) aimed at building deployable AI agents in the cloud, with built-in support for evaluation, tool integration, and multi-agent architectures. The push matters because agent frameworks have proliferated across the industry, but most still require extra work to make them production-ready—Google’s pitch is that deployment readiness is part of the foundation rather than an afterthought.

A key theme is operational readiness from day one. Instead of centering on local-only experimentation, the kit is designed to run remotely in cloud environments, and it includes mechanisms for evaluation alongside core agent capabilities. That emphasis on testing and deployment aligns with how teams typically adopt agent systems: they need repeatable runs, measurable performance, and a path to production rather than just demos.
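The evaluation API itself isn't shown in the video, but the idea of repeatable, measurable runs can be sketched in plain Python. Everything below (the `evaluate` helper, the stub agent, the test cases) is illustrative, not taken from the ADK:

```python
# Illustrative only: a minimal evaluation harness of the kind the kit
# builds in. "Agent" here is any callable from prompt to answer; ADK's
# real evaluation API is not shown in the source, so names are assumed.
def evaluate(agent, cases):
    """Run the agent over (prompt, expected) pairs and return a pass rate."""
    passed = 0
    for prompt, expected in cases:
        if expected.lower() in agent(prompt).lower():
            passed += 1
    return passed / len(cases)

# A stub agent standing in for a real model-backed one:
def echo_agent(prompt: str) -> str:
    return f"The answer is {prompt.split()[-1]}"

cases = [("capital of France is Paris", "Paris"),
         ("2 + 2 equals 4", "4")]
score = evaluate(echo_agent, cases)  # 1.0 for this stub
```

The point is the shape of the loop: fixed test cases, a scoring rule, and a single number per run, which is what makes agent changes comparable over time.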

The SDK also leans heavily into tool use and interoperability. From the start, it supports integrating tools from other ecosystems such as LangChain, and it includes “function calling” style built-in tools. Google Cloud’s existing tool ecosystem is positioned as a strength, with support for MCP tools and OpenAPI tools mentioned as part of the initial setup. There’s also an authentication system for tool access, and the framework is described as event-driven—an architectural choice that can make agent workflows more modular and responsive.
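Per the repo's quickstart, function-calling tools in ADK are plain typed Python functions; the framework derives the calling schema from the signature and docstring. The `get_weather` tool, its canned data, and its return shape below are illustrative assumptions, not from the announcement:

```python
# A tool is just a typed Python function; the framework derives the
# function-calling schema from the signature and docstring. The weather
# data and return shape here are illustrative.
def get_weather(city: str) -> dict:
    """Return a current weather report for the named city."""
    canned = {"london": "rainy", "tokyo": "clear"}
    condition = canned.get(city.lower())
    if condition is None:
        return {"status": "error", "message": f"No data for {city}."}
    return {"status": "ok", "report": f"It is {condition} in {city}."}

# Hypothetical registration, following the pattern the repo's README shows
# (verify names against the current docs):
# agent = Agent(name="weather_agent", model="gemini-2.0-flash",
#               tools=[get_weather])
```

Returning a structured dict with an explicit status field, rather than a bare string, gives the model something unambiguous to reason over when a tool call fails.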

Another differentiator is multi-agent design. The kit is built around a multi-agent architecture from the outset, rather than treating multiple agents as an add-on. The GitHub documentation also points to core agent primitives such as state and memory, plus handling “artifacts,” suggesting the framework is meant to support more than simple chat-style interactions.
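The delegation pattern can be illustrated in plain Python. This is a concept sketch, not ADK's API: a coordinator routes each request to a specialist sub-agent and records the exchange in shared state, loosely mirroring the state/memory primitives the docs mention:

```python
# Concept sketch, not ADK's API: a coordinator delegates to specialist
# sub-agents and records each turn in shared state ("memory").
class SubAgent:
    def __init__(self, name, keywords, handler):
        self.name, self.keywords, self.handler = name, keywords, handler

    def can_handle(self, request: str) -> bool:
        return any(k in request.lower() for k in self.keywords)

class Coordinator:
    def __init__(self, sub_agents):
        self.sub_agents = sub_agents
        self.state = []  # shared memory of (agent, request, reply) turns

    def route(self, request: str) -> str:
        for agent in self.sub_agents:
            if agent.can_handle(request):
                reply = agent.handler(request)
                self.state.append((agent.name, request, reply))
                return reply
        return "No agent available for this request."

coordinator = Coordinator([
    SubAgent("math", ["sum", "add"], lambda r: "math result"),
    SubAgent("search", ["find", "lookup"], lambda r: "search result"),
])
coordinator.route("add these numbers")  # -> "math result"
```

In a real multi-agent framework the routing decision is made by a model rather than keyword matching, but the structure (a parent agent, named sub-agents, and shared state across turns) is the same.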

While the release is early (sample agents appeared to return a GitHub 404 at the time of recording), Google has already published installation instructions and API references. The SDK is currently Python-only, which may disappoint developers looking for JavaScript support. Model support is also broader than a single vendor: the kit references Gemini models but also indicates compatibility with OpenAI models and Claude Sonnet through a LiteLLM integration. That matters because it reduces lock-in and lets teams compare agent behavior across model families.

The most consequential near-term question is how well the framework integrates with Google’s Gemini lineup, including Gemini 2.5 Pro. If Gemini 2.5 Pro’s training data incorporates agent-framework patterns, the result could be agents that work more effectively out of the box, an advantage similar to how teams benefit when model behavior is tuned to the surrounding tooling. The transcript frames this as an early-stage rollout, with room for strengths and weaknesses to emerge as developers build real agents.

For hands-on testing, the kit is available at github.com/google/adk-python, and further coverage is expected as more videos and experiments roll out alongside other Google Cloud Next announcements, including new TPUs and an additional protocol for agent-to-agent communication.

Cornell Notes

Google’s Agent Development Kit (Agent SDK) is designed to help developers build AI agents that are ready for cloud deployment, not just local experiments. It emphasizes evaluation, tool integration (including function-calling style tools), authentication for tool access, and an event-driven architecture. The framework also supports multi-agent architectures from the start and includes primitives like state, memory, and artifact handling. Although it’s Python-only at launch, it’s not limited to Gemini models—documentation indicates compatibility with OpenAI models and Claude Sonnet via a LiteLLM integration. The key open question is how tightly the SDK will work with Gemini 2.5 Pro and whether model training incorporates agent-framework patterns for stronger out-of-the-box performance.

What makes Google’s Agent Development Kit different from earlier agent frameworks?

It’s built with deployment readiness in mind from the start—intended to run remotely in cloud environments rather than only on local machines. It also includes evaluation-oriented components, plus architectural features like event-driven execution, tool authentication, and multi-agent support as core design elements rather than add-ons.

How does the SDK handle tools and interoperability with other agent ecosystems?

It supports built-in tools with function calling and also allows integration of tools from other frameworks such as LangChain. Google Cloud’s existing tool ecosystem is referenced as part of the initial approach, including MCP tools and OpenAPI tools. There’s also an authentication system for tool access, which helps control permissions when agents call external services.

What architectural features are highlighted beyond single-agent chat behavior?

The kit is described as focusing on multi-agent architecture from the beginning. It also references state and memory capabilities, and it mentions dealing with artifacts—suggesting support for longer-running workflows and outputs beyond plain text responses.

Which programming and model ecosystems does the kit target at launch?

The transcript indicates the SDK is Python-only, with no confirmed JavaScript version yet. For models, it’s not limited to Gemini; documentation suggests support for OpenAI models and Claude Sonnet via a LiteLLM integration, alongside Gemini models.

Why does Gemini 2.5 Pro integration matter for developers?

If Gemini 2.5 Pro’s training data incorporates patterns from agent frameworks, agents built with this SDK could perform better out of the box. The transcript frames this as a potential advantage similar to how customizing models to match an agent framework can improve agent effectiveness.

Review Questions

  1. What deployment and evaluation features does the Agent Development Kit prioritize, and why are those important for real-world agent adoption?
  2. How does the SDK’s tool integration approach (function calling, MCP/OpenAPI, authentication) affect what kinds of agents developers can build?
  3. What does multi-agent architecture change compared with single-agent designs, and which SDK components (state, memory, artifacts) support that?

Key Points

  1. Google’s Agent Development Kit is positioned as cloud-deployable agent infrastructure, with evaluation built in rather than added later.
  2. The framework emphasizes tool use, including function-calling style tools and interoperability with LangChain.
  3. Google Cloud tool ecosystems are referenced as first-class inputs, including MCP tools and OpenAPI tools, plus tool authentication.
  4. A multi-agent architecture is treated as a core design goal, alongside state, memory, and artifact handling.
  5. The initial release appears early (sample agents returning a GitHub 404 at the time of recording) and is Python-only.
  6. Model support is broader than Gemini alone, with indications of OpenAI models and Claude Sonnet compatibility via a LiteLLM integration.
  7. The biggest near-term test is how well the SDK works with Gemini 2.5 Pro and whether training data incorporates agent-framework patterns for stronger out-of-the-box performance.

Highlights

The kit is built for deployment readiness from the start, aiming to run agents remotely in the cloud with evaluation support.
Tool integration goes beyond basics, referencing function calling, MCP and OpenAPI tools, and authentication for tool access.
Multi-agent architecture, plus state/memory/artifact handling, is presented as foundational rather than optional.
Compatibility signals extend beyond Gemini, with OpenAI and Claude Sonnet support via a LiteLLM integration.
