
Gemini 1.5 Pro for Code - Part 01

Sam Witteveen · 5 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Gemini can ingest a repository and generate runnable crewAI multi-agent code using the repo’s structure and docs as context.

Briefing

Gemini 1.5 Pro for Code can ingest a real GitHub-style repository, then generate working multi-agent Python code that interacts with the repo’s structure—first using OpenAI by default, then switching to Gemini models, and finally adding external tools like DuckDuckGo search. The practical takeaway is that code-focused prompting plus repository context can produce end-to-end prototypes (agents, tasks, and tool calls) with relatively little manual wiring, even when some integration details still need human correction.

The workflow starts by uploading the crewAI repository and selecting the most relevant parts: the source code and the documentation markdown files, while skipping tests. Roughly 35,000–37,000 tokens are included, and the prompt asks Gemini to summarize what crewAI does and what it’s built with. Gemini identifies the stack as Python-based and points to Pydantic and LangChain, along with OpenAI as the default LLM integration. It also produces a concrete list of pip packages needed for a Colab setup (including crewAI and LangChain-related dependencies), which can be copied directly into a notebook.
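The video doesn't reproduce the exact install list Gemini generated, but a typical Colab setup for this stack (package names are assumptions) might look like:

```shell
# Typical Colab setup for crewAI experiments.
# Package names are assumptions, not the exact list Gemini produced.
pip install -q crewai langchain langchain-openai duckduckgo-search
```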

With dependencies installed, Gemini generates a simple two-agent bot: one agent acts as a hotel-chain customer seeking value for money, and the other plays a salesperson selling air conditioners. The code uses crewAI’s Agent/Task/Crew constructs and runs quickly because the prompt context stays around the 40k-token range. The resulting interaction includes a dialogue-like exchange where the salesperson asks for specifics (e.g., number of rooms and floors) and the system produces a final set of options and requirements. The run also reveals small quirks—like gender-neutral phrasing when the prompt specifies a “saleswoman.”

Next comes model switching. Gemini is asked to modify the same crewAI code to use a Gemini model via Google’s generative AI integration. The generated version largely keeps the structure but introduces integration mistakes—such as an incorrect package name and import path—requiring manual adjustment to use langchain_google_genai and ChatGoogleGenerativeAI. After fixing those details, the bot runs successfully on Gemini, and the outputs differ from the OpenAI version.

Finally, Gemini is pushed to include tool use: a search agent gathers recent AI-release information via DuckDuckGo, and a second agent rewrites it from an “AI doomer” perspective. The first attempt fails due to missing required fields (notably backstory), but after adding backstories, the tool-enabled pipeline works. It retrieves items including OpenAI’s Sora and Google-related generative AI and personalization news, then produces sensational rewrite text.

Overall, the results suggest a strong pattern: repository source code alone can be enough for Gemini to infer key classes, inputs, and outputs, enabling code understanding and downstream generation (tests, additional agents, and possibly docs—though documentation quality may require more guidance).

Cornell Notes

Gemini 1.5 Pro for Code can take a repository’s source code and docs, then generate runnable crewAI multi-agent Python programs. It first produces a two-agent customer/salesperson bot using crewAI with OpenAI as the default LLM, including task definitions and agent dialogue. When asked to switch from OpenAI to Gemini, it mostly preserves the structure but may output incorrect package/import details that require manual fixes (e.g., using langchain_google_genai and ChatGoogleGenerativeAI). Adding tool use works too: a DuckDuckGo search agent can fetch recent AI-release info, and a second agent can rewrite it in a specified “AI doomer” tone, though missing required fields like backstory can cause runtime errors. This matters because it enables rapid prototyping with less manual scaffolding, while still benefiting from developer oversight.

How does Gemini use repository context to generate code, and what parts of the repo matter most?

The workflow uploads the crewAI repo and focuses on the source code plus the docs markdown files, skipping tests. With roughly 35,000–37,000 tokens included, Gemini can (1) describe what crewAI does, (2) identify the underlying stack (Python, Pydantic, LangChain, and OpenAI by default), and (3) generate pip install commands and working code that uses crewAI’s Agent/Task/Crew constructs.

What does the first generated crewAI example do, and what structure does it use?

The generated bot uses two agents: a hotel-chain customer agent and an air-conditioner salesperson agent. The salesperson agent asks for concrete requirements (like number of rooms and floors), while the customer agent expresses upgrade needs and value constraints. The code is built around crewAI’s Agent, Task, and Crew objects, with tasks assigned to each agent and a final combined output.

What changes when switching from OpenAI to Gemini, and why is manual correction needed?

Gemini can rewrite the code to use a Gemini model, but it may output incorrect dependency names and import statements. In this run, the fix involved using langchain_google_genai and importing ChatGoogleGenerativeAI rather than the initially suggested package/import. After correcting those integration details, the crewAI agents run with Gemini and produce different outputs than the OpenAI version.

How does tool use work in the multi-agent setup, and what tool was used here?

A search agent uses DuckDuckGo to retrieve information about new AI releases. A second agent takes the retrieved text and rewrites it from an “AI doomer” and “AI is evil” perspective. The pipeline demonstrates that crewAI agents can chain tool outputs into subsequent reasoning and generation.

What caused the tool-enabled example to fail initially, and how was it resolved?

The first attempt errored because required agent fields were missing—specifically backstory. Adding backstory for the agents (one generated by Gemini, another inserted manually) allowed the agents to run, perform the search, and then rewrite the results in the requested tone.

What’s the practical lesson about using source code vs. docs for code understanding?

Source code alone can be enough for Gemini to infer key classes, inputs, and outputs in a codebase. Docs can help, but documentation generation quality may be uneven unless prompts provide additional context about what the code is doing. The strongest results come from combining repository context with targeted instructions.

Review Questions

  1. When Gemini switches LLM providers, which integration details are most likely to require developer correction (packages, imports, or prompt structure)?
  2. In the two-agent air-conditioner example, which task prompts drive the salesperson to ask for specific hotel requirements?
  3. For the DuckDuckGo tool pipeline, what minimum agent information must be present to avoid runtime errors (e.g., backstory), and why does that matter?

Key Points

  1. Gemini can ingest a repository and generate runnable crewAI multi-agent code using the repo’s structure and docs as context.

  2. A two-agent customer/salesperson workflow can be produced with crewAI’s Agent/Task/Crew abstractions and executed with minimal manual changes.

  3. Switching from OpenAI to Gemini often requires fixing dependency names and import paths (e.g., using langchain_google_genai and ChatGoogleGenerativeAI).

  4. Tool-enabled agent chains can fetch external information via DuckDuckGo and feed it into a second agent for rewriting.

  5. Missing required agent fields like backstory can break execution, so generated code still needs validation.

  6. Source code-only context can help Gemini identify key classes and I/O patterns, but doc generation may need extra guidance to be reliable.

Highlights

  • Gemini generated a working crewAI two-agent bot (customer + air-conditioner salesperson) after installing dependencies it listed from the repository context.
  • Model switching to Gemini required manual correction of package/import details even when the overall code structure was mostly right.
  • A tool-enabled pipeline succeeded after adding missing backstory: DuckDuckGo search results were rewritten into an “AI doomer” narrative.

Topics

  • Gemini 1.5 Pro for Code
  • crewAI multi-agent bots
  • LangChain integration
  • DuckDuckGo tool use
  • Colab setup

Mentioned