
Using LangChain Output Parsers to get what you want out of LLMs

Sam Witteveen · 5 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Treat LLM output as data that must be constrained and parsed, not as free-form text to be manually interpreted later.

Briefing

LLM apps fail most often when they accept whatever text a model happens to generate instead of forcing that output into a structure the application can reliably use. LangChain’s OutputParsers address that gap by turning free-form model responses into typed, program-ready data—so downstream code can display fields, filter results, and chain additional steps without brittle string parsing.

The walkthrough starts with a simple branding task: given a brand description, the model proposes a brand name, a “likelihood of success” score (asked for on a 1–10 scale), and a short reasoning. When the prompt is left unconstrained, the model returns a natural-language response that includes extra context—useful for humans, but awkward for an app that needs separate fields for UI elements like a fancy name, a score visualization, and a reasoning panel.

A first attempt uses prompt instructions to “format the output as JSON” with specific keys (brand name, likelihood of success, reasoning). That improves usability, but the result still arrives as a string. Worse, real-world formatting can drift—JSON may be slightly off—so converting it with a generic JSON parser can break. OutputParsers are introduced as the more robust solution: they generate format instructions to constrain the model, then parse the returned content into the expected data type.
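The fragility is easy to reproduce without an LLM in the loop: models often emit near-JSON (single quotes, trailing commas) that a strict parser rejects. A minimal stdlib sketch, with hypothetical response strings standing in for model output:

```python
import json

# A well-formed JSON response parses cleanly.
good = '{"brand_name": "Lumina", "likelihood_of_success": "8", "reasoning": "Short and memorable."}'
data = json.loads(good)

# A slight drift (single quotes instead of double quotes) breaks a strict parser.
bad = "{'brand_name': 'Lumina', 'likelihood_of_success': '8'}"
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print(f"parse failed: {err}")
```

The error raised here ("Expecting property name enclosed in double quotes") is exactly the kind of drift that makes generic JSON conversion brittle.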

LangChain’s StructuredOutputParser is demonstrated as a baseline. A response schema defines the expected fields, and the parser injects formatting instructions into the prompt. After the model responds (often wrapped in markdown code fences), the parser extracts the JSON and returns a dictionary. This removes the “stringly-typed” problem for structure, but not for values: the likelihood score still comes back as a string, requiring manual conversion before comparisons like “show brands with score > 7.”
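A rough stdlib approximation of what this step does (not the library's actual code): pull the JSON out of a fenced reply, load it into a dict, and note that the score is still a string.

```python
import json
import re

# A typical model reply wraps the JSON in a markdown code fence.
# (The fence is built with "`" * 3 only so it can live inside this example.)
FENCE = "`" * 3
response = (
    f"Sure, here is the brand info:\n{FENCE}json\n"
    '{"brand_name": "Lumina", "likelihood_of_success": "8", "reasoning": "Short and memorable."}'
    f"\n{FENCE}"
)

# Roughly what the parser does: extract the JSON payload and load it.
record = json.loads(re.search(r"\{.*\}", response, re.DOTALL).group(0))

# Structure is fixed, but the score is still a string...
assert isinstance(record["likelihood_of_success"], str)

# ...so numeric filtering needs a manual cast.
if int(record["likelihood_of_success"]) > 7:
    print(record["brand_name"], "scores above 7")
```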

To eliminate that last friction, the walkthrough highlights the PydanticOutputParser as the production-friendly default. A Pydantic model (e.g., BrandInfo) declares field types, most importantly an integer score constrained to the 1–10 range. Validators can enforce formatting rules, and the parser produces stronger prompt instructions that include schema details and examples. The payoff is that the model output is converted into an actual class instance, with likelihood_of_success as a real integer rather than a string, enabling direct numeric filtering and cleaner application logic.
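A minimal sketch of the schema side of this, using Pydantic directly (field names follow the walkthrough; the parser wiring and LLM call are omitted):

```python
import json
from pydantic import BaseModel, Field

# Schema mirroring the walkthrough's BrandInfo model.
class BrandInfo(BaseModel):
    brand_name: str
    likelihood_of_success: int = Field(ge=1, le=10)  # typed and range-constrained
    reasoning: str

# Even when the model emits the score as a string, Pydantic coerces it to int.
raw = '{"brand_name": "Lumina", "likelihood_of_success": "8", "reasoning": "Short and memorable."}'
info = BrandInfo(**json.loads(raw))

assert isinstance(info.likelihood_of_success, int)
if info.likelihood_of_success > 7:  # direct numeric comparison, no casting
    print(info.brand_name, "passes the score filter")
```

An out-of-range score (say, 15) would raise a validation error instead of silently flowing into application logic, which is the constraint the video leans on.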

Two reliability mechanisms round out the picture. OutputFixingParser can take a malformed response that nearly matches the schema, detect the parsing error (such as missing double quotes), and ask the LLM to rewrite the output so it satisfies the constraints. If fixing fails, a simpler retry approach can re-run generation and parsing, leveraging the stochastic nature of LLM outputs to eventually land on a valid structure.
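The control flow is easy to sketch. The real OutputFixingParser wraps an inner parser and an LLM; in this hypothetical stand-in, `fixer_llm` is a stub that just swaps quote styles, purely to show where the error message and the bad output are fed back:

```python
import json

def parse(text):
    """The inner parser: strict JSON to dict."""
    return json.loads(text)

def fixer_llm(bad_output, error):
    """Stand-in for the LLM the fixing parser would call with the bad
    output and the parse error. Here it just swaps quote styles."""
    return bad_output.replace("'", '"')

def parse_with_fixing(text):
    try:
        return parse(text)
    except json.JSONDecodeError as err:
        # Feed the error back and re-parse the rewritten output.
        return parse(fixer_llm(text, str(err)))

malformed = "{'brand_name': 'Lumina', 'likelihood_of_success': '8'}"
result = parse_with_fixing(malformed)
print(result["brand_name"])
```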

Overall, OutputParsers turn LLM responses from “text you read” into “data your software can trust,” reducing fragile post-processing and making multi-step chains far more dependable.

Cornell Notes

The core idea is that LLM outputs must be constrained and parsed into reliable data structures before an app can use them. LangChain OutputParsers add two capabilities: they inject precise format instructions into prompts and then convert the model’s response into usable types. A basic StructuredOutputParser can return a dictionary, but values like numeric scores may still arrive as strings. The PydanticOutputParser solves this by enforcing a schema with field types (e.g., an integer likelihood score) and optional validators, returning a typed class instance directly. When outputs are malformed, OutputFixingParser can repair formatting using the parsing error, and a retry strategy can serve as a fallback.

Why does unconstrained LLM output become a problem in real apps?

Unconstrained responses mix fields into natural language and may include extra commentary. For example, a branding prompt can return a brand name plus a score and reasoning, but the app needs separate values to render UI elements (name styling, score graphs, reasoning placement). Without structure, developers end up doing brittle string extraction or ad-hoc parsing.
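Without structure, extraction degenerates into regexes over prose, which break as soon as the model rephrases. A hypothetical illustration with two equally valid free-form replies:

```python
import re

# Two free-form replies to the same branding prompt.
reply_a = "I'd suggest the name Lumina. Likelihood of success: 8/10. It is short and memorable."
reply_b = "How about 'Lumina'? I'd rate its chances at about 8 out of 10 because it is memorable."

# An ad-hoc pattern tuned to the first reply's phrasing...
pattern = re.compile(r"Likelihood of success: (\d+)/10")

print(pattern.search(reply_a))  # matches
print(pattern.search(reply_b))  # None: same information, different phrasing
```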

How does “JSON in the prompt” help, and what still goes wrong?

Asking for JSON with specific keys (brand name, likelihood of success, reasoning) nudges the model toward a machine-readable format. However, the result still arrives as a string, and real outputs can be slightly malformed—like missing quotes or incorrect formatting—so generic JSON conversion can fail.

What does StructuredOutputParser add beyond “JSON formatting instructions”?

It defines a response schema and uses that schema to generate format instructions automatically. After the model responds (often inside markdown code fences), the parser extracts and parses the content into a dictionary. This fixes the “structure” problem, but not necessarily the “types” problem—numeric fields like the 1–10 score can still come back as strings.

How does the PydanticOutputParser improve reliability and typing?

A Pydantic model declares field types (e.g., likelihood_of_success as an integer) and can include validators (e.g., score must be between 1 and 10). The parser then generates stronger prompt instructions with schema examples and converts the response into a typed class instance. The likelihood score becomes an integer immediately, enabling direct numeric comparisons without manual casting.

What are OutputFixingParser and the retry strategy used for?

Both handle malformed outputs. OutputFixingParser takes a parsing error (e.g., expecting property names in double quotes) and asks the LLM to rewrite the output to satisfy the constraints, then re-parses it into the target Pydantic class. If fixing doesn’t produce a valid result, a retry strategy re-attempts generation and parsing, relying on the fact that LLM outputs vary stochastically across attempts.
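The retry side can be sketched as a plain loop. The hypothetical `generate` below stands in for a stochastic LLM call; a canned sequence of attempts substitutes for the real randomness:

```python
import json

# Canned sequence standing in for stochastic LLM generations.
attempts = iter([
    "{'brand_name': 'Lumina'}",                               # malformed: single quotes
    '{"brand_name": "Lumina", "likelihood_of_success": 8}',   # valid on retry
])

def generate():
    """Stand-in for an LLM call whose output varies across attempts."""
    return next(attempts)

def generate_with_retry(max_retries=3):
    last_err = None
    for _ in range(max_retries):
        text = generate()
        try:
            return json.loads(text)  # re-run generation AND parsing each attempt
        except json.JSONDecodeError as err:
            last_err = err
    raise last_err

retried = generate_with_retry()
print(retried["likelihood_of_success"])
```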

Review Questions

  1. When would StructuredOutputParser still require manual type conversion, and why?
  2. What specific schema features of Pydantic (field types and validators) prevent numeric scores from arriving as strings?
  3. How do OutputFixingParser and retry differ in their approach to handling invalid model outputs?

Key Points

  1. Treat LLM output as data that must be constrained and parsed, not as free-form text to be manually interpreted later.
  2. Prompting for JSON helps, but it still often yields strings and can break when formatting is slightly off.
  3. Use LangChain OutputParsers to inject format instructions derived from a schema and to parse model responses into program-ready structures.
  4. StructuredOutputParser improves structural reliability (dictionary output) but may still return numeric fields as strings.
  5. Pydantic output parsing enforces field types (e.g., integer score) and can validate constraints (e.g., score must be 1–10), returning typed class instances.
  6. When parsing fails, OutputFixingParser can repair formatting by using the specific parsing error as feedback.
  7. If repair fails, a retry strategy can work because repeated LLM generations may eventually satisfy the schema.

Highlights

The biggest practical mistake is not controlling LLM output format—apps need structured, typed results, not just readable text.
StructuredOutputParser turns JSON-like text into a dictionary, but numeric fields can still remain strings.
Pydantic output parsing is positioned as the go-to option because it returns typed objects and enforces constraints like an integer 1–10 score.
OutputFixingParser can use the parser’s error message to prompt the model to rewrite malformed output into a valid schema.
