Chains in LangChain | Generative AI using LangChain | Video 7 | CampusX
Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Chains connect prompt/model/parsing steps into a single pipeline so intermediate outputs automatically feed later steps.
Briefing
LangChain chains turn a multi-step LLM workflow from a manual, “call-everything-separately” process into a connected pipeline where each step automatically feeds the next. Instead of hand-wiring prompt templates, invoking the model, and parsing outputs one by one, a chain links these components so the first step’s output becomes the second step’s input, and so on—making complex applications easier to build, reuse, and scale.
The walkthrough starts with the core problem: even a simple LLM app typically has at least three steps—collect user input, send a prompt to the model, then process the model’s response for display. Doing those steps manually becomes tedious as workflows grow. Chains solve this by creating a pipeline: provide input once, trigger execution once, and let the chain handle the intermediate data flow. The lesson then expands beyond a single linear pipeline, showing that chains can be structured in different ways: sequential (steps run in order), parallel (multiple chains run at the same time), and conditional (only one branch runs depending on a condition).
A first “simple chain” example demonstrates the mechanics. A prompt template asks for “five interesting facts” about a user-supplied topic, the chain calls a chat model (via ChatOpenAI), and a StrOutputParser extracts the final string output. The chain is assembled with LangChain’s pipe-style syntax (the “|” operator) and invoked with a single input variable (the topic). The pipeline can also be visualized by calling get_graph on the chain and printing the resulting structure.
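A minimal sketch of that simple chain, assuming the langchain-openai and langchain-core packages and an OpenAI API key; the prompt wording and topic are illustrative:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt template with a single input variable: the topic.
prompt = PromptTemplate(
    template="Tell me five interesting facts about {topic}.",
    input_variables=["topic"],
)
model = ChatOpenAI()
parser = StrOutputParser()

# Pipe syntax: the prompt's output feeds the model, the model's output feeds the parser.
chain = prompt | model | parser

print(chain.invoke({"topic": "cricket"}))

# Visualize the pipeline structure as ASCII art (requires the grandalf package).
chain.get_graph().print_ascii()
```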
Next comes a sequential chain that calls the LLM twice. The first prompt generates a detailed report on a topic; the second prompt takes that report and extracts five key points. The chain remains declarative—prompt → model → parser—just repeated across steps, so the second LLM call consumes the first call’s output automatically.
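A sketch of that two-call sequential chain under the same assumptions; the explicit lambda maps the first call’s parsed report onto the second prompt’s input variable:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI()
parser = StrOutputParser()

prompt1 = PromptTemplate(
    template="Write a detailed report on {topic}.",
    input_variables=["topic"],
)
prompt2 = PromptTemplate(
    template="Extract five key points from the following text:\n{text}",
    input_variables=["text"],
)

# The parsed report string is re-wrapped as the {text} input for the second prompt.
chain = (
    prompt1 | model | parser
    | (lambda report: {"text": report})
    | prompt2 | model | parser
)

print(chain.invoke({"topic": "unemployment"}))
```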
The parallel chain example builds an application that takes a long input text (e.g., a document) and produces two outputs at once: short notes and quiz questions. Two different models are used in parallel—ChatOpenAI for notes and ChatAnthropic (with model name “claude-3”) for quiz generation—then a third step merges both results into a single document. This parallelism is implemented with RunnableParallel, where each sub-chain is named (e.g., “notes” and “quiz”) and executed concurrently.
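A sketch of that parallel fan-out and merge, assuming langchain-anthropic is installed alongside the packages above and both OpenAI and Anthropic API keys are set; the Claude model id shown is a placeholder, not necessarily the one used in the video:

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

model1 = ChatOpenAI()
model2 = ChatAnthropic(model="claude-3-haiku-20240307")  # placeholder model id
parser = StrOutputParser()

prompt_notes = PromptTemplate(
    template="Generate short, simple notes from the following text:\n{text}",
    input_variables=["text"],
)
prompt_quiz = PromptTemplate(
    template="Generate five short quiz questions from the following text:\n{text}",
    input_variables=["text"],
)
prompt_merge = PromptTemplate(
    template="Merge the notes and quiz below into a single document.\nNotes:\n{notes}\nQuiz:\n{quiz}",
    input_variables=["notes", "quiz"],
)

# Both sub-chains receive the same input and run concurrently; their results
# are collected into a dict under the keys "notes" and "quiz".
parallel_chain = RunnableParallel(
    {
        "notes": prompt_notes | model1 | parser,
        "quiz": prompt_quiz | model2 | parser,
    }
)

# The dict produced above fills {notes} and {quiz} in the merge prompt.
chain = parallel_chain | prompt_merge | model1 | parser

print(chain.invoke({"text": "...a long input document goes here..."}))
```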
Finally, conditional chains are introduced with a sentiment-based feedback scenario. A first step classifies user feedback as positive or negative. To make branching reliable, the output is structured using a PydanticOutputParser so the sentiment is constrained to either “positive” or “negative.” RunnableBranch then routes execution: if sentiment is positive, it generates a positive reply; if negative, it generates an appropriate negative-response message. A default path is handled via a RunnableLambda wrapper. The result is a single chain that executes only the relevant branch, producing consistent outputs.
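A sketch of the conditional chain under the same assumptions; the Pydantic model, prompt wording, and default message are illustrative:

```python
from typing import Literal

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser, PydanticOutputParser
from langchain_core.runnables import RunnableBranch, RunnableLambda

model = ChatOpenAI()
parser = StrOutputParser()

# Constrain the classifier's output so the sentiment is strictly one of two values.
class Feedback(BaseModel):
    sentiment: Literal["positive", "negative"] = Field(
        description="Sentiment of the user feedback"
    )

pydantic_parser = PydanticOutputParser(pydantic_object=Feedback)

classify_prompt = PromptTemplate(
    template=(
        "Classify the sentiment of the following feedback as positive or negative.\n"
        "{feedback}\n{format_instructions}"
    ),
    input_variables=["feedback"],
    partial_variables={"format_instructions": pydantic_parser.get_format_instructions()},
)

positive_prompt = PromptTemplate(
    template="Write a short, appreciative reply for a {sentiment} customer review.",
    input_variables=["sentiment"],
)
negative_prompt = PromptTemplate(
    template="Write an apologetic, helpful reply for a {sentiment} customer review.",
    input_variables=["sentiment"],
)

classifier_chain = classify_prompt | model | pydantic_parser

# Convert the parsed Feedback object into the dict the reply prompts expect.
to_dict = RunnableLambda(lambda fb: {"sentiment": fb.sentiment})

# RunnableBranch takes (condition, runnable) pairs; the last argument is the default.
branch_chain = RunnableBranch(
    (lambda fb: fb.sentiment == "positive", to_dict | positive_prompt | model | parser),
    (lambda fb: fb.sentiment == "negative", to_dict | negative_prompt | model | parser),
    RunnableLambda(lambda fb: "Could not determine the sentiment."),
)

chain = classifier_chain | branch_chain

print(chain.invoke({"feedback": "This is a wonderful product!"}))
```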
By the end, the key takeaway is that mastering sequential, parallel, and conditional chains unlocks the core building blocks for larger LangChain applications—including agent-style systems later—while the next step in the learning path is understanding Runnable, the underlying concept that powers how chains execute behind the scenes.
Cornell Notes
LangChain chains connect multiple LLM steps into one pipeline so outputs automatically flow from one stage to the next. The tutorial contrasts manual workflows (prompt → model call → parse) with chain-based execution where a single trigger runs the whole process. It demonstrates three chain types: sequential chains (two LLM calls in order, e.g., detailed report then key-point extraction), parallel chains (notes and quizzes generated concurrently with RunnableParallel, then merged), and conditional chains (sentiment classification routes to different reply templates using RunnableBranch). Consistency is improved by enforcing structured outputs with PydanticOutputParser so branching depends on predictable values like “positive” or “negative.”
Why do chains matter compared with manually calling prompts, models, and parsers?
How does a sequential chain work in the tutorial’s “report then summary” example?
What does parallelism add, and how is it implemented for notes and quizzes?
Why is structured output (PydanticOutputParser) important for conditional branching?
How does the conditional chain decide which reply to generate?
Review Questions
- In what way does a chain reduce developer effort compared to manually invoking the model and parsing outputs at each step?
- Describe one example of sequential chaining and one example of parallel chaining from the tutorial, including what each LLM call produces.
- What two mechanisms make conditional branching reliable in the sentiment example: one for output consistency and one for routing execution?
Key Points
1. Chains connect prompt/model/parsing steps into a single pipeline so intermediate outputs automatically feed later steps.
2. Sequential chains run multiple LLM calls in order, enabling workflows like “generate report” then “extract key points.”
3. Parallel chains use RunnableParallel to run independent sub-tasks concurrently, then merge results into one output.
4. Conditional chains use structured classification plus RunnableBranch so only the correct branch executes based on a predictable condition.
5. PydanticOutputParser helps enforce consistent, schema-constrained outputs (e.g., sentiment is strictly “positive” or “negative”).
6. RunnableLambda can wrap non-chain logic to serve as a default branch in conditional routing.
7. Chains can be visualized with get_graph to inspect the pipeline structure.