Parallel Workflows in LangGraph | Agentic AI using LangGraph | Video 6 | CampusX
Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
LangGraph can run truly parallel computations—but only if each parallel node updates state in a conflict-free way. The walkthrough first builds a simple cricket analytics workflow where three metrics (strike rate, boundary percentage, and balls per boundary) are computed simultaneously from the same inputs, then merged into a final summary. The key implementation detail is that parallel nodes must not return the entire shared state; doing so triggers an “invalid update” conflict because LangGraph cannot reconcile simultaneous writes to the same fields. The fix is “partial state updates”: each node returns only the specific key(s) it computes (as a small dictionary), allowing the graph to merge results safely.
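A minimal sketch of the failing pattern versus the fix, in plain Python (the state keys and values here are illustrative; in LangGraph the full-state return raises an `InvalidUpdateError`-style conflict when two parallel nodes write the same keys in one step):

```python
from typing import TypedDict

class State(TypedDict):
    runs: int
    balls: int
    strike_rate: float

# Anti-pattern: returning the whole state makes this node a writer of
# every key, so two parallel nodes conflict on "runs" and "balls" too.
def bad_node(state: State) -> State:
    state["strike_rate"] = state["runs"] / state["balls"] * 100
    return state

# The fix: return only the key this node actually computed.
def good_node(state: State) -> dict:
    return {"strike_rate": state["runs"] / state["balls"] * 100}

update = good_node({"runs": 50, "balls": 25, "strike_rate": 0.0})
```

Because each node's return value now touches a disjoint set of keys, the graph can merge all parallel results without ambiguity.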
The cricket example starts with a typed state holding raw inputs—runs, balls, fours, and sixes—and computed outputs—strike rate, boundary percentage, balls per boundary—plus a final summary string. Four nodes are added: three calculation nodes run in parallel from the Start node, and a fourth summary node concatenates the computed metrics into a single output. When the first attempt returns the full state from each parallel node, LangGraph raises an error indicating it expected only one value update per step. After switching to partial updates (returning only the computed metric per node), the workflow executes cleanly and produces consistent outputs.
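The cricket workflow described above can be sketched as follows. This is pure Python with the graph's merge step simulated by `dict.update`; in the actual LangGraph version each function would be registered with `StateGraph.add_node` and the three calculation nodes would be wired in parallel with edges from START. Node names and the sample innings figures are illustrative:

```python
from typing import TypedDict

# Shared state: raw inputs plus the computed metrics and final summary.
class BatsmanState(TypedDict):
    runs: int
    balls: int
    fours: int
    sixes: int
    strike_rate: float
    boundary_percent: float
    balls_per_boundary: float
    summary: str

# Each parallel node returns ONLY the key it computes (a partial update).
def calc_strike_rate(state: BatsmanState) -> dict:
    return {"strike_rate": state["runs"] / state["balls"] * 100}

def calc_boundary_percent(state: BatsmanState) -> dict:
    boundary_runs = state["fours"] * 4 + state["sixes"] * 6
    return {"boundary_percent": boundary_runs / state["runs"] * 100}

def calc_balls_per_boundary(state: BatsmanState) -> dict:
    return {"balls_per_boundary": state["balls"] / (state["fours"] + state["sixes"])}

# Fan-in node: concatenates the three computed metrics into one string.
def summarize(state: BatsmanState) -> dict:
    return {"summary": (
        f"Strike rate: {state['strike_rate']:.1f}, "
        f"boundary %: {state['boundary_percent']:.1f}, "
        f"balls per boundary: {state['balls_per_boundary']:.1f}"
    )}

# Simulate the graph run: merge each node's partial update into the state.
state: BatsmanState = {"runs": 100, "balls": 50, "fours": 6, "sixes": 4,
                       "strike_rate": 0.0, "boundary_percent": 0.0,
                       "balls_per_boundary": 0.0, "summary": ""}
for node in (calc_strike_rate, calc_boundary_percent, calc_balls_per_boundary):
    state.update(node(state))  # partial dicts merge without conflict
state.update(summarize(state))
```

Because the three calculation nodes read the same inputs but write disjoint keys, the order in which their updates land does not matter—exactly the property that makes them safe to run in parallel.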
A second, more realistic parallel workflow then shifts from arithmetic to LLM-based evaluation. The goal is to grade an essay on three dimensions—clarity of thought, depth of analysis, and language quality—using separate LLM calls running in parallel. Each LLM produces two structured outputs: textual feedback and a numeric score from 0 to 10. Those three parallel results feed into a final evaluation node that merges the feedback into a summarized response and computes an overall score by averaging the three numeric values.
Reliability hinges on structured output. To ensure every LLM call returns the same schema every time, the workflow uses a model configured for structured output (e.g., GPT-4o-mini) with a Pydantic-defined schema that forces outputs into a JSON-like format containing a feedback string and an integer score constrained to 0–10. The state for this workflow stores the essay text, three feedback strings, an overall feedback string, a list of individual scores, and an averaged final score.
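A sketch of such a schema, assuming Pydantic is available. In the graph this class would typically be bound to the model with something like `model.with_structured_output(EvaluationSchema)`; here we just validate sample payloads to show the 0–10 constraint doing its job (field names and sample values are illustrative):

```python
from pydantic import BaseModel, Field

# Every evaluator LLM must return exactly these two fields:
# textual feedback plus an integer score bounded to 0-10.
class EvaluationSchema(BaseModel):
    feedback: str = Field(description="Detailed feedback for the essay")
    score: int = Field(ge=0, le=10, description="Score out of 10")

# A well-formed result parses cleanly.
result = EvaluationSchema(feedback="Clear argument, thin evidence.", score=7)

# An out-of-range score is rejected at validation time.
try:
    EvaluationSchema(feedback="Too generous", score=15)
    out_of_range_rejected = False
except Exception:
    out_of_range_rejected = True
```

The `ge`/`le` constraints mean a malformed model response fails loudly during parsing instead of silently corrupting the averaged score downstream.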
Because the three numeric scores are produced in parallel and must accumulate rather than overwrite, the workflow uses a reducer function (operator.add) for the “individual scores” list. This prevents the last writer from replacing earlier scores and instead appends each score into a single list. The final node then averages that list and returns both the summarized feedback and the computed average score. The example concludes by demonstrating the graph with a deliberately misspelled, low-quality essay and showing that the evaluation outputs degrade accordingly—confirming the parallel LLM pipeline and structured parsing are working end-to-end.
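The reducer pattern can be sketched as below. Annotating the list field with `operator.add` tells LangGraph to combine concurrent writes by list concatenation rather than letting the last writer win; the loop simulates what the reducer does when three parallel nodes each return `{"individual_scores": [n]}` (the sample scores are illustrative):

```python
import operator
from typing import Annotated, TypedDict

# The reducer (operator.add) is attached to the field via Annotated.
class EssayState(TypedDict):
    individual_scores: Annotated[list[int], operator.add]
    avg_score: float

# What the reducer does under the hood for three parallel updates:
merged: list[int] = []
for update in ([7], [6], [8]):
    merged = operator.add(merged, update)  # list concatenation, not overwrite

# The final node averages the accumulated list.
avg_score = sum(merged) / len(merged)
```

Without the reducer, each parallel write would be treated as a plain replacement of `individual_scores`, and LangGraph would either reject the concurrent writes or keep only one score.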
Cornell Notes
The workflow demonstrates how to build parallel computations in LangGraph, first with a non-LLM cricket example and then with an LLM-based essay grader. The central lesson is conflict-free parallel state updates: parallel nodes should return only partial updates (the specific keys they compute), not the entire shared state, to avoid LangGraph “invalid update” errors. For the LLM version, structured output is enforced using a Pydantic schema so each LLM call returns consistent JSON-like fields: textual feedback plus an integer score from 0 to 10. Finally, a reducer function (operator.add) merges the three parallel scores into a list so they can be averaged in the final node.
Why does returning the entire state from multiple parallel nodes cause an error in LangGraph?
How are the cricket metrics computed, and how do they map to parallel nodes?
What does “partial state update” look like in practice?
How does the essay grading workflow ensure LLM outputs are reliable and machine-readable?
Why is a reducer function needed for the list of individual scores?
Review Questions
- In the cricket example, what specific change prevents the “invalid update” error when running parallel nodes?
- What two fields does the structured-output schema require from each LLM call, and how is the 0–10 constraint enforced?
- How does operator.add function as a reducer for the individual scores list, and what would likely happen without it?
Key Points
1. Parallel nodes must return partial state updates (only the computed keys) rather than the entire shared state to avoid conflicting writes.
2. A simple parallel graph can compute independent metrics (strike rate, boundary percentage, balls per boundary) from the same inputs and merge them in a final summary node.
3. For LLM-based grading, structured output with a Pydantic schema enforces consistent JSON-like fields: feedback text plus an integer score.
4. When multiple parallel nodes contribute to the same list field, a reducer function (operator.add) is required so scores accumulate instead of overwriting.
5. The final evaluation node can merge textual feedbacks into a summarized response and compute an overall score by averaging the accumulated individual scores.
6. The same LangGraph design pattern scales from non-LLM parallel arithmetic to multi-LLM parallel evaluation workflows.