OpenAI Functions + LangChain: Building a Multi-Tool Agent
Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
OpenAI’s function-calling system, wired through LangChain, can turn a plain chat model into a finance assistant that reliably selects the right API tool, extracts the right parameters, and returns a grounded answer. The core workflow is: define one or more callable functions (with names, descriptions, and a strict parameter schema), let the model decide when to invoke them, execute the tool with the model-provided arguments, then feed the tool result back so the model can produce the final natural-language response.
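The first step of that workflow, defining a function with a strict parameter schema, can be sketched as a plain OpenAI-style JSON-schema definition plus a small argument validator. This is an illustrative sketch using only the standard library (the video uses Pydantic via LangChain for the same purpose); the function name matches the one discussed below, but `parse_function_args` is a hypothetical helper.

```python
import json

# OpenAI-style function definition: a name, a description the model uses
# to decide when to call the tool, and a JSON schema for its arguments.
GET_STOCK_PRICE_FN = {
    "name": "get_stock_ticker_price",
    "description": "Get the latest price for a stock given its Yahoo Finance ticker.",
    "parameters": {
        "type": "object",
        "properties": {
            "ticker": {
                "type": "string",
                "description": "Yahoo Finance ticker symbol, e.g. GOOG or AAPL",
            }
        },
        "required": ["ticker"],
    },
}

def parse_function_args(raw_arguments: str) -> dict:
    """Parse and validate the JSON argument string the model returns
    alongside a function call."""
    args = json.loads(raw_arguments)
    missing = [k for k in GET_STOCK_PRICE_FN["parameters"]["required"]
               if k not in args]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return args
```

Validating the arguments before execution is what makes the "strict schema" part of the recipe pay off: malformed model output fails fast instead of producing a silent bad API call.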
The example starts with a simple “finance bot” built on the Yahoo Finance API. Users ask questions like “What is the price of Google stock?” or “Has Apple gone up over the past 90 days?” The key challenge—users won’t know ticker symbols—is handled by the model’s built-in knowledge: when asked about Apple or Google, it outputs the correct Yahoo Finance ticker behind the scenes. In the manual setup, the assistant first sends a human message plus a list of function definitions to the model. The model responds not with an answer, but with a structured function call: the function name (e.g., “get_stock_ticker_price”) and JSON arguments (e.g., the ticker for Google). The code then runs the corresponding tool using those arguments, retrieves the real price, and returns that result to the model using a dedicated function message. Only after the tool output is provided does the model generate the final response such as “The current price of Google stock is 123.83.”
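The manual round trip described above (model emits a function call, code executes the tool, result goes back as a function message) can be sketched as follows. The Yahoo Finance lookup is replaced with a hard-coded stub so the sketch is self-contained; `run_function_call` and `TOOLS` are hypothetical names, not LangChain or OpenAI API.

```python
import json

def get_stock_ticker_price(ticker: str) -> float:
    # Stand-in for the real Yahoo Finance lookup used in the video.
    prices = {"GOOG": 123.83}
    return prices[ticker]

TOOLS = {"get_stock_ticker_price": get_stock_ticker_price}

def run_function_call(function_call: dict) -> dict:
    """Execute the tool named in the model's function call and wrap the
    result as a 'function' message to send back to the model."""
    name = function_call["name"]
    args = json.loads(function_call["arguments"])
    result = TOOLS[name](**args)
    return {"role": "function", "name": name, "content": str(result)}

# The model answered the user's question with a function call, not text:
model_function_call = {"name": "get_stock_ticker_price",
                       "arguments": '{"ticker": "GOOG"}'}
function_message = run_function_call(model_function_call)
# Appending function_message to the conversation is what lets the model
# produce the grounded final answer ("The current price ... is 123.83").
```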
LangChain then streamlines this “long way” by using an agent type designed for OpenAI functions. Instead of manually converting tools into OpenAI function formats and manually routing messages, the agent handles tool selection, argument passing, tool execution, and response synthesis. This approach is presented as an advantage over older prompt-based patterns (like ReAct-style tool prompting): it tends to improve tool selection and reasoning while reducing token waste from heavy in-context examples. Tradeoffs remain: customization is less straightforward than prompt tinkering, the setup is currently more tightly coupled to OpenAI’s function-calling conventions, and tool descriptions/schemas still consume tokens.
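The loop the agent automates can be pictured as: call the model, and either execute the requested tool and loop again, or stop when the model returns plain text. The sketch below is a toy simulation of that control flow using a fake model, not LangChain's actual agent implementation; all names here (`fake_model`, `agent_run`) are hypothetical.

```python
import json

def fake_model(messages, functions):
    """Stand-in for the chat model: with no tool result yet, request the
    tool; once a function message is present, compose the final answer."""
    tool_results = [m for m in messages if m["role"] == "function"]
    if not tool_results:
        return {"function_call": {"name": "get_stock_ticker_price",
                                  "arguments": '{"ticker": "GOOG"}'}}
    price = tool_results[-1]["content"]
    return {"content": f"The current price of Google stock is {price}."}

def agent_run(question, tools):
    """The routing LangChain's OpenAI Functions agent handles for you:
    loop until the model returns text instead of a function call."""
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages, functions=[])
        call = reply.get("function_call")
        if call is None:
            return reply["content"]
        result = tools[call["name"]](**json.loads(call["arguments"]))
        messages.append({"role": "function", "name": call["name"],
                         "content": str(result)})

answer = agent_run("What is the price of Google stock?",
                   tools={"get_stock_ticker_price": lambda ticker: 123.83})
```

In real use, LangChain replaces `fake_model` with the OpenAI chat model and converts each registered tool into the function-definition format automatically.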
The finance bot expands from one tool to multiple tools. One function computes percentage price change over a time window given a ticker and a number of days; another finds the best-performing stock among a list of tickers over a specified period. The model can interpret user time expressions and convert them into the tool’s expected inputs—asking for “three months” maps to “90 days,” while “a month” maps to a shorter day count. It also handles mixed ticker formats: users can provide full company names (Google, Meta, Microsoft) and the model supplies the correct Yahoo Finance tickers. The same mechanism even works for crypto comparisons; when asked about Bitcoin over three months, the model uses Yahoo’s expected symbol format (not just a generic “BTC”), enabling comparisons across stocks and cryptocurrencies.
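The arithmetic behind those two tools reduces to a percentage change over a price window and an argmax across tickers. A minimal sketch, assuming the price history has already been fetched (the video's tools pull it from Yahoo Finance); `price_change_percent` and `best_performing` are illustrative names:

```python
def price_change_percent(prices: list[float]) -> float:
    """Percentage change from the first to the last price in the window."""
    return (prices[-1] - prices[0]) / prices[0] * 100

def best_performing(histories: dict[str, list[float]]) -> str:
    """Ticker with the largest percentage gain over the window.
    Works for stocks and crypto alike, e.g. a 'BTC-USD' key."""
    return max(histories, key=lambda t: price_change_percent(histories[t]))

# Example: 90-day windows resolved from company names by the model.
histories = {"GOOG": [100.0, 110.0],
             "META": [100.0, 130.0],
             "MSFT": [100.0, 105.0]}
top = best_performing(histories)
```

The model's job is everything outside these functions: resolving "Google, Meta, Microsoft" to tickers and "three months" to a 90-day window before the tool is ever called.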
Overall, the transcript demonstrates a practical recipe for building multi-tool agents: strict schemas for tool inputs, function-call orchestration, and agent-based automation in LangChain—resulting in a conversational system that can answer finance questions grounded in external APIs.
Cornell Notes
OpenAI function calling plus LangChain can power a multi-tool finance agent that answers stock questions using the Yahoo Finance API. The system works by defining tools as functions with clear names, descriptions, and a Pydantic-based argument schema. The model first returns a structured function call (function name + JSON arguments) instead of a direct answer, then the tool runs and its result is sent back as a function message so the model can produce the final response. With multiple tools, the agent can compute percentage changes and pick the best-performing stock among a list. A major benefit is that the model converts natural time ranges like “three months” into the tool’s required “days” input and resolves company names to the correct Yahoo tickers.
How does function calling change the way a chat model answers a question like “What is the price of Google stock?”
Why does the tool definition need a strict argument schema (and what role does Pydantic play)?
What message types are involved in the manual orchestration flow?
What advantage does LangChain’s OpenAI Functions agent provide over manual function-call handling?
How does the system handle time expressions like “three months” when the tool expects “days”?
How can users ask for best-performing stocks using company names instead of tickers?
Review Questions
- What sequence of model outputs and tool executions is required before the assistant can produce a final price answer under function calling?
- How does the argument schema influence both correctness and error handling when building custom tools in LangChain?
- In the multi-tool setup, how does the agent ensure that natural time phrases map to the tool’s required numeric “days” input?
Key Points
1. Define each external capability as a function with a clear name, description, and a validated argument schema so the model can generate correct JSON inputs.
2. Let the model return a structured function call first; execute the corresponding tool using the provided arguments rather than trusting the model’s text answer.
3. Send tool results back to the model using a function message so it can ground its final response in real API data.
4. Use LangChain’s OpenAI Functions agent to automate tool selection, argument passing, tool execution, and response synthesis.
5. Add multiple tools (e.g., price change and best-performing stock) to support richer finance queries in a single conversational flow.
6. Rely on the model’s natural-language understanding to convert time windows like “three months” into the tool’s expected “days” parameter.
7. Expect tradeoffs: less prompt-level customization, tighter coupling to OpenAI’s function-calling format, and token usage for tool descriptions/schemas.