
Pydantic AI Tutorial: Build Agents to Analyze Mobile App Reviews in Python

Venelin Valkov · 5 min read

Based on Venelin Valkov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use one SQL-backed fetch reviews tool as the shared capability across all agents, and vary only its query parameters to change what each agent learns from the same dataset.

Briefing

A practical agent workflow can turn stored mobile app reviews into a structured product brief—complete with improvement themes, marketing-ready messaging, and an MVP feature list—by combining a single SQL-backed “fetch reviews” tool with a three-agent team built on Pydantic AI. The key payoff is reliability: outputs are validated into typed, structured objects, so downstream steps (like planning an MVP) can consume consistent fields instead of messy free text.

The build starts with one tool that queries an SQL database of app reviews. The tool takes parameters such as minimum rating, maximum rating, maximum number of reviews, and a minimum word count, then returns a list of review records (package name, review text, rating). That function becomes the only external capability the agents need. Pydantic AI’s dependency injection is used to pass user context—specifically an app description—into the system, keeping the agents focused on analysis rather than plumbing.
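Below is a minimal sketch of what that shared tool might look like, assuming a SQLite database with a hypothetical `reviews(package_name, content, rating)` table; the table and column names, model strings, and prompts are assumptions rather than the video's exact code. The same function can later be registered on each agent via `tools=[fetch_reviews]`.

```python
import sqlite3
from dataclasses import dataclass

from pydantic import BaseModel
from pydantic_ai import RunContext


@dataclass
class Deps:
    db_path: str
    app_description: str  # user context injected via dependency injection


class Review(BaseModel):
    package_name: str
    content: str
    rating: int


def fetch_reviews(
    ctx: RunContext[Deps],
    min_rating: int = 1,
    max_rating: int = 5,
    max_reviews: int = 50,
    min_words: int = 5,
) -> list[Review]:
    """Return reviews in the rating range with at least `min_words` words."""
    with sqlite3.connect(ctx.deps.db_path) as conn:
        rows = conn.execute(
            """
            SELECT package_name, content, rating
            FROM reviews
            WHERE rating BETWEEN ? AND ?
              -- approximate word count by counting spaces
              AND length(content) - length(replace(content, ' ', '')) + 1 >= ?
            LIMIT ?
            """,
            (min_rating, max_rating, min_words, max_reviews),
        ).fetchall()
    return [Review(package_name=p, content=c, rating=r) for p, c, r in rows]


deps = Deps(db_path="reviews.db", app_description="A habit tracking and self-care app")
```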

Three agents then split the work. The “Improvement agent” retrieves reviews using one set of filters (for example, lower-to-mid ratings) and produces two structured outputs: prioritized issues and prioritized feature requests. Each comes back as a list of strings plus a description that adds concrete examples, such as problems with account recovery after device changes, difficulty transferring premium features, high subscription costs, vague interaction gaps, and missing habit-tracking notifications.
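Continuing the sketch above, the Improvement agent might declare its typed output and register the shared tool like this. Field names and prompts are illustrative; recent pydantic-ai releases use `output_type` and `.output`, while older ones call these `result_type` and `.data`.

```python
from pydantic import BaseModel, Field
from pydantic_ai import Agent


class ImprovementReport(BaseModel):
    issues: list[str] = Field(description="Prioritized issues users report")
    issues_description: str = Field(description="Concrete examples of the issues")
    feature_requests: list[str] = Field(description="Prioritized feature requests")
    feature_requests_description: str = Field(description="Concrete examples of the requests")


improvement_agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    output_type=ImprovementReport,
    tools=[fetch_reviews],
    system_prompt=(
        "Fetch lower-to-mid rated reviews (e.g. ratings 1-3) and summarize "
        "the most important issues and feature requests."
    ),
)

improvement_report = improvement_agent.run_sync(
    "What should we improve?", deps=deps
).output
```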

The “Marketing agent” uses the same fetch tool but with different query parameters (for example, higher ratings and longer reviews) to extract what users praise. It returns structured marketing-oriented outputs: “features” framed in marketing terms and “important phrases/keywords” that can be reused in copy. Examples include an engaging interactive experience, self-care focus, reminders, ad-free usefulness, and user-friendly design, along with phrase candidates like “track your habits” and “join a supportive community.”
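A parallel sketch for the Marketing agent: the same `fetch_reviews` function, a different output schema, and a system prompt steering it toward higher ratings and longer reviews. Again, the schema and prompt wording are assumptions, not the video's exact code.

```python
class MarketingReport(BaseModel):
    features: list[str] = Field(description="Praised features, framed as marketing copy")
    phrases: list[str] = Field(description="Keywords and phrases reusable in marketing copy")


marketing_agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    output_type=MarketingReport,
    tools=[fetch_reviews],
    system_prompt=(
        "Fetch highly rated reviews (ratings 4-5, at least 20 words) and "
        "extract what users praise and the phrases they use."
    ),
)

marketing_report = marketing_agent.run_sync(
    "What do users love about the app?", deps=deps
).output
```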

A “Planner agent” then synthesizes both prior reports into a product brief: app name ideas, a description, MVP features, and possible development issues. In the initial run, the MVP features can come out somewhat vague, even though the marketing copy reads well. To tighten the result, the planner agent is run again with chat-style message history: a follow-up prompt asks for deeper, feature-by-feature detail focused on habit tracking and task functionality. That second pass expands each MVP feature into implementation-level guidance (e.g., habit logging with frequency/duration/type controls, task lists with due dates and priority levels).
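A rough sketch of the Planner step and the chat-style follow-up, continuing from the snippets above: the first run consumes the two structured reports, and the second run passes `message_history` so the model can expand its own draft. `message_history` and `new_messages()` are the pydantic-ai mechanism for continuing a conversation; the schema and prompts here are illustrative.

```python
class ProductBrief(BaseModel):
    app_names: list[str]
    description: str
    mvp_features: list[str]
    development_issues: list[str]


planner_agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    output_type=ProductBrief,
    system_prompt="Plan an MVP for a new app based on the provided reports.",
)

first = planner_agent.run_sync(
    f"Improvement report:\n{improvement_report.model_dump_json()}\n\n"
    f"Marketing report:\n{marketing_report.model_dump_json()}",
    deps=deps,
)

# Second pass: reuse the conversation and ask for feature-by-feature detail.
refined = planner_agent.run_sync(
    "Expand each MVP feature with detailed, implementation-level guidance, "
    "especially habit tracking and task functionality.",
    deps=deps,
    message_history=first.new_messages(),
)
print(refined.output.mvp_features)
```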

Overall, the workflow demonstrates how agent “teamwork” emerges from tool reuse plus different retrieval strategies, and how iterative prompting can upgrade an MVP sketch into a more actionable specification. It also highlights a production-minded approach: typed outputs, dependency injection for context, and a clear separation between data retrieval (the tool) and reasoning (the agents).

Cornell Notes

The workflow builds an agentic Python app that converts SQL-stored mobile app reviews into a structured product brief. A single tool, fetch reviews, runs parameterized SQL queries (rating ranges, review counts, minimum word length) and returns typed review records. Three Pydantic AI agents use that tool differently: the Improvement agent extracts prioritized issues and feature requests from lower/mid ratings; the Marketing agent pulls praised features and reusable keywords from higher ratings and longer reviews. A Planner agent then merges both structured outputs into app name ideas, a description, MVP features, and likely development issues. If the MVP features are too broad, a follow-up chat run with message history can expand each feature into more detailed, implementation-oriented guidance.

How does the single fetch reviews tool shape the whole multi-agent system?

All three agents rely on one tool that queries an SQL database of app reviews. The tool accepts parameters like minimum rating, maximum rating, max reviews, and minimum words per review, then returns a list of review objects containing package name, review text, and rating. Because the tool is the only external dependency, agent differences come from how they choose tool parameters—e.g., the Improvement agent pulls lower/mid ratings to find pain points, while the Marketing agent pulls higher ratings and longer reviews to find what users praise.

What structured outputs does the Improvement agent produce, and why those fields matter later?

The Improvement agent returns two typed lists: (1) issues (as a list of strings) and (2) feature requests (as a list of strings), with descriptions for each item. These structured fields feed directly into the Planner agent’s MVP planning step, where issues and feature requests become the basis for app development risks and feature priorities.

How does the Marketing agent’s retrieval strategy differ, and what does it output?

The Marketing agent calls the same fetch reviews tool but with different filters—using higher ratings (e.g., 4–5) and selecting longer reviews (via a minimum word count and max reviews). It outputs marketing-oriented features and “important phrases/keywords” suitable for copywriting, such as messaging candidates like “track your habits” and “join a supportive community.”

Why can the Planner agent’s first MVP draft be vague, and how is that fixed?

The initial Planner run synthesizes issues and marketing phrases into an MVP list, but the resulting feature descriptions may remain high-level (e.g., “habit logging system” or “grouping of habits” without implementation detail). The fix is an iterative follow-up: the planner agent is run again with message history and a targeted instruction to expand each MVP feature—especially habit tracking and task functionality—into detailed explanations.

What role do dependency injection and typed validation play in making the system production-minded?

Dependency injection passes user context (like an app description) into the agents through a dependency object, so prompts can stay consistent and configurable. Typed validation (via Pydantic AI’s validation layer) ensures model outputs conform to defined schemas (e.g., issues list, feature requests list, marketing phrases), making downstream steps like MVP synthesis more dependable than free-form text parsing.
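As a small illustration of that pattern, a dynamic system prompt can read the app description from the injected deps object at run time. The `@agent.system_prompt` decorator is part of pydantic-ai; the agent, model string, and prompt text below are assumptions for the sketch, and the function would be registered before the agent runs.

```python
from pydantic_ai import Agent, RunContext

agent = Agent("openai:gpt-4o", deps_type=Deps)


@agent.system_prompt
def add_app_context(ctx: RunContext[Deps]) -> str:
    # Pull the user-supplied app description out of the injected deps object.
    return f"The user wants to build: {ctx.deps.app_description}"
```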

Review Questions

  1. If you wanted the Improvement agent to focus on onboarding problems instead of account recovery, what tool parameters and prompt changes would you try first?
  2. What kinds of fields should the Planner agent’s result schema include to make the MVP brief directly actionable for engineering?
  3. How would you design a second iteration loop so that marketing keywords and MVP features stay consistent after the Planner expands vague items?

Key Points

  1. Use one SQL-backed fetch reviews tool as the shared capability across all agents, and vary only its query parameters to change what each agent learns from the same dataset.

  2. Return typed, structured objects from each agent (issues, feature requests, marketing phrases, MVP fields) so later steps can rely on a consistent schema rather than parsing prose.

  3. Split responsibilities across agents: extract pain points from lower/mid ratings, extract praised value propositions from higher ratings, then synthesize both into an MVP brief.

  4. Expect the first Planner draft to be high-level; use a follow-up run with message history to request deeper, feature-by-feature implementation detail.

  5. Leverage dependency injection to pass user context (like the desired app description) into agent prompts without hardcoding it everywhere.

  6. Iterative prompting can turn an MVP sketch into a more engineering-ready specification by explicitly asking for expanded functionality per feature.

Highlights

A three-agent team can generate an MVP brief from review text by combining one reusable SQL tool with different rating/length filters per agent.
Typed outputs make the workflow robust: issues, feature requests, marketing phrases, and MVP fields arrive as structured objects that can be merged reliably.
The Planner agent’s MVP list improves after a chat-style follow-up that asks for detailed explanations of each feature, not just a summary.

Topics

  • Agentic Applications
  • Pydantic AI
  • SQL Review Mining
  • Multi-Agent Planning
  • Marketing Copy Generation
