
Building Your First Agentic AI- Financial Agent With Phidata

Krish Naik · 5 min read

Based on Krish Naik's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Phidata is positioned as an open-source framework to build, deploy, and monitor agentic systems, including multi-agent workflows.

Briefing

Agentic AI for finance becomes practical when multiple specialized agents—one for web research and one for market data—are orchestrated into a single workflow and then deployed behind an API. Using Phidata (open-source) as the framework, Krish Naik demonstrates how to turn an LLM into an agent, wire in tools like web search and yfinance, and combine independent agents into a “multi-agent” system that can answer stock questions with analyst-style recommendations and the latest news.

The build starts with Phidata’s core promise: open-source tooling to build, ship, and monitor agentic systems, plus the flexibility to use different models. The workflow is designed so the LLM can be swapped, locally or via cloud providers, while the agent logic stays the same. In this example, the model is served through Groq’s hosted API, and the system is configured with two required credentials: a Phidata API key and a Groq API key. The setup also includes Python environment creation, dependency installation (including FastAPI for deployment), and a .env file to store secrets.
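A minimal setup sketch, assuming the package names and the environment-variable names Phidata conventionally reads (`PHI_API_KEY`, `GROQ_API_KEY`); the key values below are placeholders:

```shell
# Create and activate a virtual environment, then install dependencies
python -m venv venv
source venv/bin/activate
pip install phidata groq duckduckgo-search yfinance fastapi "uvicorn[standard]" python-dotenv

# Store secrets in a .env file (values are placeholders, not real keys)
cat > .env <<'EOF'
PHI_API_KEY=your_phidata_key_here
GROQ_API_KEY=your_groq_api_key_here
EOF
```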

Next comes the “financial analyst” application logic, implemented in code as separate agents. A web search agent is created first using the DuckDuckGo search tool. It’s configured to search the internet for information and to always include sources in its output. In parallel, a financial agent is created using the yfinance tool. This agent pulls stock fundamentals, analyst recommendations, technical indicators, historical prices, and company news—then formats results for readability (including table-style output).
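The two agents described above can be sketched with Phidata’s Python API. The model id, agent names, and instruction strings here are illustrative assumptions rather than verbatim from the video, and running this requires a `GROQ_API_KEY` in the environment:

```python
from phi.agent import Agent
from phi.model.groq import Groq
from phi.tools.duckduckgo import DuckDuckGo
from phi.tools.yfinance import YFinanceTools

# Agent 1: searches the web and is instructed to always cite sources
web_search_agent = Agent(
    name="Web Search Agent",
    role="Search the web for current information",
    model=Groq(id="llama-3.1-70b-versatile"),
    tools=[DuckDuckGo()],
    instructions=["Always include sources"],
    show_tool_calls=True,
    markdown=True,
)

# Agent 2: pulls structured market data via yfinance
finance_agent = Agent(
    name="Finance Agent",
    role="Fetch financial data for stocks",
    model=Groq(id="llama-3.1-70b-versatile"),
    tools=[YFinanceTools(stock_price=True,
                         analyst_recommendations=True,
                         stock_fundamentals=True,
                         company_news=True)],
    instructions=["Use tables to display the data"],
    show_tool_calls=True,
    markdown=True,
)
```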

Those two agents are then combined into a single multi-agent workflow. When a user asks something like “summarize analyst recommendation and share the latest news for NVIDIA,” the system routes the request through both agents: the web search agent gathers current context from the internet, while the yfinance agent supplies market and company data. The combined output is streamed back in the terminal, showing items such as analyst sentiment and recent news themes.
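The team wiring can be sketched as follows, assuming Phidata’s `team` parameter and the `web_search_agent` and `finance_agent` objects built in the previous step; the prompt string mirrors the example query:

```python
# Combine the two specialized agents into one multi-agent workflow.
# Instructions here are illustrative; `stream=True` streams the answer
# to the terminal as it is generated.
multi_ai_agent = Agent(
    team=[web_search_agent, finance_agent],
    model=Groq(id="llama-3.1-70b-versatile"),
    instructions=["Always include sources", "Use tables to display the data"],
    show_tool_calls=True,
    markdown=True,
)

multi_ai_agent.print_response(
    "Summarize analyst recommendation and share the latest news for NVDA",
    stream=True,
)
```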

A key practical hurdle appears during execution: the environment initially complains about a missing OpenAI API key even though the example is using Grok. The workaround is to set an OpenAI API key environment variable so the underlying Phidata components stop erroring. After the terminal version works, the build is extended into a deployable chatbot experience using FastAPI and Phidata’s “playground” integration. A new file (playground.py) exposes the multi-agent workflow as an endpoint, then the Phidata dashboard’s playground UI is used to interact with the agent.
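A sketch of what `playground.py` might look like, assuming Phidata’s `Playground` helper and the agents built earlier; note that the module name passed to `serve_playground_app` must match the filename:

```python
from phi.playground import Playground, serve_playground_app

# Expose the agents through Phidata's playground app (a FastAPI app
# under the hood), then serve it locally so the Phidata dashboard's
# playground UI can connect to the running endpoint.
app = Playground(agents=[finance_agent, web_search_agent]).get_app()

if __name__ == "__main__":
    serve_playground_app("playground:app", reload=True)
```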

In the final demonstration, the deployed agent answers questions for Tesla and can also compare stocks (e.g., Tesla vs. Nvidia) with a consolidated recommendation. The takeaway is less about one model and more about the architecture: tool-augmented agents, orchestrated into a workflow, then wrapped in an API for an interactive financial assistant experience.

Cornell Notes

Phidata is used to build an agentic financial assistant by combining two specialized agents: a web search agent (DuckDuckGo) and a market-data agent (yfinance). Each agent uses a shared LLM backbone (served via Groq in the example) but different tools—web context for current news and structured stock data for fundamentals, analyst recommendations, technical indicators, and company news. A multi-agent workflow merges both outputs so a single prompt (e.g., “latest news and analyst recommendation for NVIDIA”) returns a consolidated answer with sources and formatted tables. The system is then deployed behind FastAPI and connected to Phidata’s dashboard playground for interactive chat-style use. This architecture matters because it turns “one-shot Q&A” into tool-driven, end-to-end workflows that can be extended to more complex agent teams.

Why split the finance assistant into separate agents instead of one agent with all tools?

The example creates two independent agents with distinct responsibilities. The web search agent uses DuckDuckGo search to gather current information and is instructed to always include sources. The financial agent uses the yfinance tool to fetch structured market data such as analyst recommendations, company news, fundamentals, technical indicators, and historical prices. Combining them into a multi-agent workflow lets each agent focus on a narrower task, then merges the results into one response (e.g., analyst sentiment plus the latest news context).

How does the system ensure answers include both “current context” and “market fundamentals”?

Current context comes from the web search agent calling DuckDuckGo and returning information with sources. Market fundamentals come from the financial agent calling yfinance with parameters like analyst recommendation and company news (set to True in the example). When the multi-agent workflow runs, it executes both agents and then produces a single consolidated output for the user’s query (such as NVIDIA).

What role do tools play compared with the LLM itself?

Tools provide the factual inputs the LLM needs. The LLM acts as the reasoning and formatting layer, while tools fetch data. In the build, DuckDuckGo supplies web search results, and yfinance supplies stock-specific datasets (fundamentals, technical indicators, historical prices, and news). The agent configuration passes tool outputs into the model so the final response reflects real-time or tool-derived information rather than only prior training.

What does the multi-agent workflow do when a user asks about a stock?

The multi-agent workflow (team) combines the web search agent and the financial agent. For a prompt like “summarize analyst recommendation and share the latest news for NVIDIA,” the workflow triggers the web search agent to search the internet and the financial agent to query yfinance. The combined response is streamed back, showing both analyst-style conclusions and recent news items.

Why did the build require an OpenAI API key even though Groq was used?

During execution, the environment raised an error about a missing OpenAI API key. The example notes that OpenAI wasn’t intentionally used in the shown logic, but underlying Phidata components still expected the key to be set. The workaround was to set an OPENAI_API_KEY environment variable (e.g., by loading it from .env or exporting it in the terminal) so the workflow could run successfully.
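One way to apply that workaround in code is to make sure the variable exists before any Phidata component initializes; the value below is a placeholder, and a valid key may still be needed if an OpenAI-backed component is actually invoked:

```python
import os

# Set the key only if it isn't already present (e.g., from .env or the shell).
# The value is a placeholder to satisfy the presence check, not a real credential.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

print(os.environ["OPENAI_API_KEY"])
```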

How is the terminal workflow turned into an interactive chatbot experience?

A new FastAPI-based file (playground.py) wraps the multi-agent workflow into an endpoint using Phidata’s playground integration. After running the app locally, the Phidata dashboard playground UI connects to the running endpoint (a localhost port). Users can then type prompts like “summarize analyst recommendations and share the latest news for Tesla” or “compare Tesla and Nvidia,” and the multi-agent system returns structured results in the chat interface.

Review Questions

  1. What specific tools are assigned to the web search agent versus the financial agent, and what kinds of outputs does each tool enable?
  2. How does the multi-agent workflow combine outputs, and what prompt example demonstrates that combined behavior?
  3. What was the practical issue with API keys during setup, and what change allowed the workflow to run?

Key Points

  1. Phidata is positioned as an open-source framework to build, deploy, and monitor agentic systems, including multi-agent workflows.

  2. A web search agent can be configured with DuckDuckGo search to retrieve current information and return sources.

  3. A financial agent can be configured with yfinance to fetch analyst recommendations, fundamentals, technical indicators, historical prices, and company news.

  4. Combining agents into a multi-agent “team” lets a single user prompt trigger both web context gathering and structured market-data retrieval.

  5. Deployment is done by wrapping the multi-agent workflow with FastAPI and exposing it through Phidata’s dashboard playground for interactive use.

  6. Even when using Groq, the setup may still require an OpenAI API key to satisfy underlying Phidata requirements; setting the missing key resolves runtime errors.

Highlights

The architecture uses two specialized agents—DuckDuckGo for news/context and yfinance for market data—then merges them into one consolidated stock recommendation response.
The build demonstrates end-to-end flow: local agent code → multi-agent orchestration → FastAPI endpoint → Phidata dashboard playground UI.
A recurring setup friction point is an OpenAI API key requirement that appears even when OpenAI isn’t directly used in the shown agent logic; setting the key unblocks execution.
