
Langchain Runnables - Part 2 | Generative AI using LangChain | Video 9 | CampusX

CampusX · 5 min read

Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Runnables standardize component interoperability by using a shared runnable interface (notably `invoke`), enabling automatic output-to-input wiring.

Briefing

LangChain’s “runnables” are built to solve a practical integration problem: earlier LangChain components (prompt templates, LLM calls, parsers, retrievers) didn’t share a single, consistent interface, which made it awkward to wire them into flexible workflows. The standardization centers on a common method—most notably an `invoke` call—so outputs from one component can automatically become inputs to the next. Under the hood, this is achieved by converting components into classes that inherit from a shared runnable abstraction, letting engineers connect building blocks without constantly rewriting glue code.

With that foundation, the workflow logic becomes easier to reason about because runnables come in two categories. “Task-specific runnables” are the converted core components with their own purpose—prompt templates for designing prompts, LLM wrappers for generating text, parsers for shaping outputs, and so on. “Runnable primitives” are orchestration tools that connect task-specific runnables into more complex execution patterns, including sequential, parallel, conditional, and custom transformation steps.

The video then focuses on runnable primitives one by one, starting with `RunnableSequence`, which chains multiple runnables so they execute in order. A typical example builds a joke pipeline: a prompt template feeds a `ChatOpenAI` model, and a `StringOutputParser` formats the result. The same idea scales to longer flows, such as generating a joke and then sending it back through a second prompt to produce an explanation.

Next comes `RunnableParallel`, which runs multiple branches at the same time using the same input. The example sends a topic to two prompt templates in parallel—one generates a tweet, the other generates a LinkedIn post—then returns both outputs as a dictionary (e.g., keys like `tweet` and `linkedin`). This is positioned as the right primitive when independent outputs are needed concurrently.

`RunnablePassthrough` is introduced as a special utility: it returns exactly what it receives, unchanged. That “no-op” behavior becomes useful when a workflow needs to both transform something and also preserve the original value for downstream steps. The video demonstrates this by generating a joke and, in parallel, generating its explanation—while using passthrough so the original joke text is still available for the explanation branch.

Then `RunnableLambda` is presented as the bridge between LangChain and custom Python logic. Any Python function can be wrapped into a runnable, letting engineers insert preprocessing or postprocessing steps inside a chain. The example counts the number of words in the generated joke using a Python word-counter function (wrapped as `RunnableLambda`) while simultaneously passing the joke through unchanged.

Finally, `RunnableBranch` enables conditional execution, acting like an `if/else` structure inside a runnable graph. The example generates a detailed report, checks its word count, and either prints it as-is (when under a threshold) or calls the LLM again to summarize it (when over the threshold). The video also highlights a practical syntax improvement: LangChain Expression Language (LCEL) introduces a more declarative way to define sequential chains using a pipe-style operator, making common chaining patterns cleaner than manually constructing `RunnableSequence` objects.

Overall, the key takeaway is that runnable primitives turn standardized components into composable execution graphs—sequential pipelines, parallel fan-out, conditional branches, and custom logic—so complex generative AI workflows can be built with less wiring friction and more control over execution behavior.

Cornell Notes

LangChain runnables standardize how AI components connect by using a shared interface (notably `invoke`), so outputs from one step can feed inputs to the next. Runnables split into two groups: task-specific runnables (prompt templates, LLM calls, parsers) and runnable primitives (orchestration building blocks). The video walks through key primitives: `RunnableSequence` for ordered pipelines, `RunnableParallel` for concurrent branches returning a dictionary of outputs, `RunnablePassthrough` for “no change” value forwarding, `RunnableLambda` for wrapping custom Python functions (e.g., word counting), and `RunnableBranch` for conditional logic based on intermediate results (e.g., summarize a report only if it exceeds a word limit). LCEL then offers a more declarative syntax for sequential chaining using a pipe operator.

Why did LangChain introduce runnables, and what problem do they solve when building chains?

Earlier LangChain components weren’t consistently standardized: prompt-related, LLM-related, parser-related, and retriever-related pieces used different method names and calling patterns. Runnables standardize interaction by converting components into classes that inherit from a common runnable abstraction and implement a shared interface (centered on `invoke`). That means a chain can automatically pass the output of one runnable as the input to the next, enabling flexible workflows without custom glue code for every component type.

What’s the difference between task-specific runnables and runnable primitives?

Task-specific runnables are the core converted components with a specific job—e.g., prompt templates for prompt construction, LLM wrappers for text generation, and output parsers for formatting. Runnable primitives are orchestration tools that connect those task-specific runnables into execution graphs. In this transcript, the primitives emphasized are `RunnableSequence`, `RunnableParallel`, `RunnablePassthrough`, `RunnableLambda`, and `RunnableBranch`.

How does `RunnableSequence` work, and what’s a concrete example from the transcript?

`RunnableSequence` executes multiple runnables in order. If R1 produces an output, that output becomes the input to R2, and so on. The example builds a joke pipeline: a prompt template feeds `ChatOpenAI`, and a `StringOutputParser` formats the model output. A second example extends the flow by adding another prompt step to explain the generated joke, showing that multiple sequential stages can be chained.

What does `RunnableParallel` return, and why is it useful?

`RunnableParallel` runs branches independently but using the same input, then returns results as a dictionary. The transcript’s example sends the same topic to two branches: one generates a tweet and the other generates a LinkedIn post. Because both branches are independent, parallel execution is appropriate, and the dictionary output makes it easy to extract each result (e.g., `tweet` and `linkedin`).

When would `RunnablePassthrough` be the right primitive?

`RunnablePassthrough` returns the input unchanged. It’s useful when a workflow needs both the original value and a transformed value downstream. The transcript demonstrates this by generating a joke, then also generating an explanation of that joke in parallel. Passthrough ensures the original joke text is preserved and forwarded so the explanation branch can use it.

How do `RunnableLambda` and `RunnableBranch` differ in purpose?

`RunnableLambda` wraps custom Python logic into a runnable so it can be inserted into a chain (e.g., counting words in a generated joke). `RunnableBranch` controls execution flow conditionally—like an `if/else`—based on intermediate results. The transcript’s report example generates a report, checks whether it exceeds a word threshold (e.g., 500), and either summarizes it or prints it as-is depending on the condition.

Review Questions

  1. In a chain that generates a joke and then produces both the joke and its explanation, where would `RunnablePassthrough` fit and why?
  2. Design a conditional workflow using `RunnableBranch`: what intermediate value would you compute, and what would each branch do?
  3. Explain how `RunnableParallel` differs from `RunnableSequence` in terms of execution timing and output structure.

Key Points

  1. Runnables standardize component interoperability by using a shared runnable interface (notably `invoke`), enabling automatic output-to-input wiring.
  2. LangChain runnables split into task-specific runnables (prompt/LLM/parser components) and runnable primitives (orchestration: sequence, parallel, passthrough, lambda, branch).
  3. `RunnableSequence` builds ordered pipelines where each step’s output becomes the next step’s input.
  4. `RunnableParallel` executes multiple branches concurrently with the same input and returns a dictionary of outputs.
  5. `RunnablePassthrough` forwards inputs unchanged, which is useful when downstream steps need the original value alongside transformed outputs.
  6. `RunnableLambda` lets custom Python functions become chain-compatible runnables for preprocessing/postprocessing logic.
  7. `RunnableBranch` enables conditional execution based on intermediate results, such as summarizing only when a report exceeds a word-count threshold.

Highlights

Standardization around `invoke` turns previously mismatched LangChain components into plug-and-play workflow steps.
`RunnableParallel` returns a dictionary of branch outputs, making it straightforward to generate multiple artifacts (tweet + LinkedIn post) from one input.
`RunnablePassthrough` is a practical “keep the original” tool when one branch needs the unmodified intermediate value.
`RunnableBranch` behaves like an `if/else` inside a runnable graph, enabling word-count-based summarization logic.
LCEL’s pipe-style syntax offers a more declarative way to define sequential chains than manually constructing `RunnableSequence`.

Topics

  • LangChain Runnables
  • RunnableSequence
  • RunnableParallel
  • RunnablePassthrough
  • RunnableLambda
  • RunnableBranch
  • LCEL
