
Chains in LangChain | Generative AI using LangChain | Video 7 | CampusX

CampusX · 5 min read

Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Chains connect prompt/model/parsing steps into a single pipeline so intermediate outputs automatically feed later steps.

Briefing

LangChain chains turn a multi-step LLM workflow from a manual, “call-everything-separately” process into a connected pipeline where each step automatically feeds the next. Instead of hand-wiring prompt templates, invoking the model, and parsing outputs one by one, a chain links these components so the first step’s output becomes the second step’s input, and so on—making complex applications easier to build, reuse, and scale.

The walkthrough starts with the core problem: even a simple LLM app typically has at least three steps—collect user input, send a prompt to the model, then process the model’s response for display. Doing those steps manually becomes tedious as workflows grow. Chains solve this by creating a pipeline: provide input once, trigger execution once, and let the chain handle the intermediate data flow. The lesson then expands beyond a single linear pipeline, showing that chains can be structured in different ways: sequential (steps run in order), parallel (multiple chains run at the same time), and conditional (only one branch runs depending on a condition).

A first “simple chain” example demonstrates the mechanics. A prompt template asks for “five interesting facts” about a user-supplied topic. The chain pipes that prompt into a chat model (via ChatOpenAI) and then into a StrOutputParser, which extracts the model’s reply as a plain string. The chain is assembled with LangChain’s pipe syntax (the “|” operator) and invoked with a single input variable (the topic). The pipeline can also be visualized by calling get_graph() and printing the resulting structure.

Next comes a sequential chain that calls the LLM twice. The first prompt generates a detailed report on a topic; the second prompt takes that report and extracts five key points. The chain remains declarative—prompt → model → parser—just repeated across steps, so the second LLM call consumes the first call’s output automatically.

The parallel chain example builds an application that takes a long input text (e.g., a document) and produces two outputs at once: short notes and quiz questions. Two different models are used in parallel—ChatOpenAI for notes and ChatAnthropic (with model name “claude-3”) for quiz generation—then a third step merges both results into a single document. This parallelism is implemented with RunnableParallel, where each sub-chain is named (e.g., “notes” and “quiz”) and executed concurrently.

Finally, conditional chains are introduced with a sentiment-based feedback scenario. A first step classifies user feedback as positive or negative. To make branching reliable, the output is structured using a PydanticOutputParser so the sentiment is constrained to either “positive” or “negative.” RunnableBranch then routes execution: if sentiment is positive, it generates a positive reply; if negative, it generates an appropriate negative-response message. A default path is handled via a RunnableLambda wrapper. The result is a single chain that executes only the relevant branch, producing consistent outputs.

By the end, the key takeaway is that mastering sequential, parallel, and conditional chains unlocks the core building blocks for larger LangChain applications—including agent-style systems later—while the next step in the learning path is understanding Runnable, the underlying concept that powers how chains execute behind the scenes.

Cornell Notes

LangChain chains connect multiple LLM steps into one pipeline so outputs automatically flow from one stage to the next. The tutorial contrasts manual workflows (prompt → model call → parse) with chain-based execution where a single trigger runs the whole process. It demonstrates three chain types: sequential chains (two LLM calls in order, e.g., detailed report then key-point extraction), parallel chains (notes and quizzes generated concurrently with RunnableParallel, then merged), and conditional chains (sentiment classification routes to different reply templates using RunnableBranch). Consistency is improved by enforcing structured outputs with PydanticOutputParser so branching depends on predictable values like “positive” or “negative.”

Why do chains matter compared with manually calling prompts, models, and parsers?

Manual LLM apps often require repeated boilerplate: build a prompt template, call the model (e.g., via invoke), then parse and display the response. As workflows grow, each intermediate step becomes something developers must orchestrate and pass around. Chains replace that with a connected pipeline: the first step’s output becomes the second step’s input, and the second step’s output becomes the third step’s input, so only the first input needs to be provided and the chain execution handles the rest.

How does a sequential chain work in the tutorial’s “report then summary” example?

A sequential chain runs steps in order. First, a prompt asks for a detailed report on a topic; the model generates the report; a StrOutputParser extracts the text. Then a second prompt takes that report as input and asks for five key points. The second LLM call consumes the first call’s output automatically, so the developer doesn’t manually extract and re-inject intermediate results.

What does parallelism add, and how is it implemented for notes and quizzes?

Parallelism reduces latency by generating independent outputs at the same time. The tutorial uses RunnableParallel to run two sub-chains concurrently: one chain sends the input text to ChatOpenAI to generate short notes, while another sends the same text to ChatAnthropic (model name “claude-3”) to generate quiz questions. After both complete, a merge step combines notes and quizzes into a single final document.

Why is structured output (PydanticOutputParser) important for conditional branching?

Conditional routing depends on the classifier’s output being consistent. Without structure, the model might return unexpected text (e.g., extra sentences) that breaks the branching logic. By using PydanticOutputParser with a schema that constrains sentiment to a literal set (only “positive” or “negative”), the chain can reliably branch based on sentiment_result.sentiment.

How does the conditional chain decide which reply to generate?

The chain first classifies feedback sentiment, then uses RunnableBranch to select a path. If sentiment equals “positive,” it triggers the “positive reply” prompt/model/parser chain; if sentiment equals “negative,” it triggers the “negative reply” chain. A default fallback is provided by wrapping a simple function in RunnableLambda so the chain still returns something even if no condition matches.

Review Questions

  1. In what way does a chain reduce developer effort compared to manually invoking the model and parsing outputs at each step?
  2. Describe one example of sequential chaining and one example of parallel chaining from the tutorial, including what each LLM call produces.
  3. What two mechanisms make conditional branching reliable in the sentiment example: one for output consistency and one for routing execution?

Key Points

  1. Chains connect prompt/model/parsing steps into a single pipeline so intermediate outputs automatically feed later steps.

  2. Sequential chains run multiple LLM calls in order, enabling workflows like “generate report” then “extract key points.”

  3. Parallel chains use RunnableParallel to run independent sub-tasks concurrently, then merge results into one output.

  4. Conditional chains use structured classification plus RunnableBranch so only the correct branch executes based on a predictable condition.

  5. PydanticOutputParser helps enforce consistent, schema-constrained outputs (e.g., sentiment is strictly “positive” or “negative”).

  6. RunnableLambda can wrap non-chain logic to serve as a default branch in conditional routing.

  7. Chains can be visualized with a graph function (get_graph) to inspect the pipeline structure.

Highlights

Chains eliminate manual orchestration by wiring step outputs to step inputs, so a single trigger can run the entire workflow.
Sequential chaining enables multi-stage reasoning workflows like “detailed report → five-point summary” without hand-passing intermediate text.
Parallel chaining uses RunnableParallel to generate notes and quiz questions at the same time, then merges them.
Conditional chaining becomes dependable when the classifier output is structured with PydanticOutputParser and routed via RunnableBranch.

Topics

  • LangChain Chains
  • Sequential Pipelines
  • Parallel Execution
  • Conditional Branching
  • Structured Output Parsing
