
Text Splitters in LangChain | Generative AI using LangChain | Video 11 | CampusX

CampusX · 5 min read

Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Text splitting is a prerequisite for reliable RAG because LLMs and embedding pipelines have fixed context-length limits and large inputs reduce semantic accuracy.

Briefing

Text splitting is the practical step of breaking large documents—PDFs, articles, HTML pages, books—into smaller chunks that an LLM can handle reliably. Instead of feeding a massive input and accepting weaker answers, chunking improves output quality across common RAG tasks like semantic search, embeddings, and summarization. The core reason is hard limits: most LLMs and embedding pipelines can only process up to a fixed context length (e.g., tens of thousands of tokens). When documents exceed that threshold, the system either fails to ingest everything or produces lower-quality, less faithful results.

Beyond context limits, chunking helps because downstream steps work better with focused text windows. For embeddings, a single embedding for a very large blob often fails to capture multiple distinct topics’ semantic meaning. Splitting by natural boundaries (like paragraphs or sentences) yields embeddings that represent each topic more accurately. That improvement carries into semantic search: when queries are compared against embeddings built from smaller, topic-coherent chunks, similarity scores become more precise, so the retrieved passages match the user’s intent more closely. Summarization also benefits; large inputs can cause “drift,” where the model starts talking about nearby but irrelevant content or even introduces details not present in the source. Smaller chunks reduce that risk.

Chunking also optimizes compute. Processing smaller pieces typically uses less memory, stores fewer tokens per unit, and enables more parallel execution—important when building production RAG pipelines.

After establishing why chunking matters, the lesson moves into how to do it in LangChain, focusing on four chunking strategies. The first is length-based splitting: cut text into fixed-size segments (by characters or tokens). It’s fast and simple, but it ignores grammar, sentence boundaries, and semantics—so words or sentences can be cut midstream, and a topic can get split across chunks, weakening embeddings.
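In LangChain this is the job of `CharacterTextSplitter`; the mechanic itself is simple enough to sketch in plain Python. The helper below is illustrative, not the library's implementation, and shows exactly the mid-word cut the text describes:

```python
def fixed_size_chunks(text: str, chunk_size: int) -> list[str]:
    """Cut text into consecutive chunks of at most chunk_size characters.

    No awareness of words or sentences: a chunk can end mid-word.
    """
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

sample = "Length-based splitting is fast but ignores sentence boundaries entirely."
chunks = fixed_size_chunks(sample, 20)
print(chunks[0])  # 'Length-based splitti' -- note the mid-word cut
```

The word "splitting" is severed at the chunk boundary, which is precisely why an embedding of that chunk can misrepresent its topic.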

To mitigate abrupt cuts, the workflow introduces chunk overlap: adjacent chunks repeat a small number of characters so context isn’t lost at boundaries. Overlap creates a tradeoff: more overlap means more chunks and more computation, but better continuity for embedding and retrieval.
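With overlap, the window advances by chunk size minus overlap on each step, so consecutive chunks share their boundary region. A minimal sketch (a hypothetical helper mirroring LangChain's `chunk_size`/`chunk_overlap` parameters, not the library's code):

```python
def chunks_with_overlap(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Fixed-size chunks where each chunk begins with the last `overlap`
    characters of the previous one, preserving boundary context."""
    assert 0 <= overlap < chunk_size
    stride = chunk_size - overlap  # the window advances by this much per chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), stride)]

doc = "".join(chr(65 + i % 26) for i in range(300))  # 300-char stand-in document
print(len(chunks_with_overlap(doc, 100, 0)))   # 3 chunks, no shared context
print(len(chunks_with_overlap(doc, 100, 20)))  # 4 chunks: continuity costs extra chunks
```

The extra chunk is the tradeoff made concrete: the same 300 characters now occupy more storage and more embedding calls, in exchange for boundary context surviving the cut.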

The second major approach is structure-based splitting using RecursiveCharacterTextSplitter. It tries separators in a hierarchy—paragraphs first, then sentences, then words, and finally characters—recursively until each chunk fits the target size. This preserves linguistic structure better than pure length-based splitting, especially when chunk sizes are chosen sensibly.

The third approach extends the same recursive idea to non-plain-text formats. For code and Markdown, separators change to match language constructs (e.g., class/function blocks, Markdown headings/lists). The goal is still coherent chunks, but the “boundaries” come from syntax rather than paragraphs.

The fourth approach is semantic meaning-based splitting, implemented via LangChain’s experimental SemanticChunker. It embeds sentences (using an embedding model such as OpenAI embeddings), then detects topic shifts by measuring changes in similarity across a sliding window. Breakpoints are chosen using thresholds such as standard deviation (and other statistical options like percentile or interquartile ranges). Results can be sensitive to threshold settings, and the method is described as promising but still experimental.
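The breakpoint logic can be illustrated without any embedding model: given per-sentence vectors, compute the distance between each consecutive pair and break wherever a distance exceeds the mean by more than k standard deviations. The toy 2-D vectors below stand in for real sentence embeddings, and the function mirrors the idea, not SemanticChunker's exact implementation:

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def semantic_breakpoints(vectors, k=1.0):
    """Indices i where the distance between sentence i and i+1 exceeds
    mean + k * standard deviation -- treated as a topic shift."""
    dists = [cosine_distance(vectors[i], vectors[i + 1])
             for i in range(len(vectors) - 1)]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return [i for i, d in enumerate(dists) if d > mean + k * std]

# Toy "embeddings": sentences 0-2 cluster around one topic, 3-4 around another.
vecs = [[1, 0], [0.9, 0.1], [0.95, 0.05], [0, 1], [0.1, 0.9]]
print(semantic_breakpoints(vecs))  # -> [2]: a break between sentences 2 and 3
```

Raising k makes the threshold stricter and can miss real topic shifts; lowering it over-fragments the text — the threshold sensitivity the lesson warns about.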

Overall, RecursiveCharacterTextSplitter emerges as the most reliable default for typical RAG pipelines, while semantic chunking remains an experimental option for cases where topic boundaries don’t align cleanly with length or structure.

Cornell Notes

Chunking (text splitting) is essential for RAG because LLMs and embedding systems have context-length limits and because large inputs often reduce semantic accuracy and increase “drift” during summarization. Splitting improves embeddings and semantic search by producing smaller, topic-coherent chunks, and it also reduces compute cost via lower memory use and better parallelization. LangChain supports multiple strategies: length-based splitting (fast but can cut mid-sentence), chunk overlap (to preserve boundary context), and RecursiveCharacterTextSplitter (recursive paragraph→sentence→word→character splitting). For code/Markdown, the same recursive approach uses syntax-aware separators. SemanticChunker is experimental: it embeds sentences and splits at detected topic shifts using similarity statistics like standard deviation.

Why does feeding a huge document directly into an LLM often produce weaker results in RAG systems?

Most LLMs have a maximum context length (the transcript uses an example like 50,000 tokens). If a PDF/article exceeds that limit, the system can’t ingest everything at once, so summarization or generation quality drops. Even when ingestion is possible, embeddings and retrieval degrade: a single embedding for a very large text blob may not capture multiple distinct topics’ semantic meaning, making semantic search less precise and increasing the chance of summarization drift (the model talking about adjacent or absent content).

How do embeddings and semantic search benefit from splitting text into smaller chunks?

Embeddings convert text into vectors. When a large document is embedded as one piece, the vector may fail to represent the document’s full semantic structure. Splitting into smaller, coherent chunks allows each chunk to get its own embedding, capturing topic-specific meaning better. In semantic search, the query is embedded and compared to chunk embeddings; higher similarity scores point to the most relevant chunks. The transcript contrasts improved retrieval after chunking versus weaker retrieval when embedding a single large block.
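The retrieval step itself reduces to a nearest-vector lookup over the chunk embeddings. A minimal sketch, with toy 2-D vectors standing in for real embedding-model output:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy index: chunk text -> its (pretend) embedding vector.
chunk_index = {
    "Photosynthesis converts light to energy.": [0.9, 0.1],
    "Transformers use self-attention layers.":  [0.1, 0.9],
}

query_vec = [0.2, 0.8]  # pretend embedding of "how does attention work?"
best = max(chunk_index, key=lambda c: cosine_similarity(query_vec, chunk_index[c]))
print(best)  # the transformer chunk wins on similarity
```

With topic-coherent chunks, each vector points in a direction that matches one topic, so the query lands cleanly on the relevant chunk; one blended vector for the whole document would sit between both topics and match neither well.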

What is chunk overlap, and what tradeoff does it introduce?

Chunk overlap repeats a small region of text between adjacent chunks. This helps when length-based splitting cuts sentences or words abruptly, because the next chunk still contains some of the prior context. The tradeoff is computational: overlap increases the number of chunks, which increases storage and processing cost. The transcript suggests a practical overlap of about 10–20 characters when the chunk size is 100 characters, scaling the overlap upward as chunk size increases.

How does RecursiveCharacterTextSplitter work differently from length-based splitting?

RecursiveCharacterTextSplitter uses a hierarchy of separators: it first tries to split by larger structure (paragraphs), then sentences, then words, and only then falls back to characters. It keeps recursing until chunks fit the target size. This preserves linguistic structure and avoids many mid-sentence cuts that length-based splitting causes. The transcript also demonstrates that choosing a reasonable chunk size (e.g., 25 or 50 rather than 10) improves how well chunks align with sentences/paragraphs.

How does the approach change for code or Markdown documents?

For non-plain-text inputs, boundaries aren’t paragraph/sentence based; they come from syntax and markup constructs. The transcript describes using recursive splitting with different separators—e.g., Python constructs like class/function blocks, and Markdown structures like headings and sections. The same recursive concept applies, but separator sets are tailored to the document type so chunks remain logically coherent.

What does SemanticChunker do, and why is it considered experimental?

SemanticChunker embeds sentences and detects topic shifts by measuring similarity changes across consecutive sentences using a sliding-window approach. It computes statistics such as standard deviation of similarity/distance values and treats large deviations as breakpoints where meaning changes. The transcript notes that results can be sensitive to threshold settings and that the method is experimental, with less consistently satisfying outcomes compared to RecursiveCharacterTextSplitter.

Review Questions

  1. When context-length limits are exceeded, what two separate failure modes does chunking help prevent (one ingestion-related and one quality-related)?
  2. Compare length-based splitting and RecursiveCharacterTextSplitter: which one preserves linguistic structure by design, and how does it do so?
  3. In SemanticChunker, what statistical threshold mechanism determines where chunk boundaries are placed, and what happens when the threshold is set too high?

Key Points

  1. Text splitting is a prerequisite for reliable RAG because LLMs and embedding pipelines have fixed context-length limits and large inputs reduce semantic accuracy.

  2. Smaller, topic-coherent chunks improve embeddings, which in turn improves semantic search similarity matching and retrieval precision.

  3. Chunking reduces summarization drift by limiting how much unrelated context the model sees at once.

  4. Length-based splitting is fast but can cut mid-sentence or mid-word; chunk overlap helps preserve boundary context at the cost of more computation.

  5. RecursiveCharacterTextSplitter typically performs best by recursively splitting on paragraphs, then sentences, then words, then characters until chunk size constraints are met.

  6. For code and Markdown, chunk boundaries should follow syntax/markup separators rather than plain-text paragraph rules.

  7. SemanticChunker uses embeddings and similarity statistics to split at detected topic shifts, but it remains experimental and threshold-sensitive.

Highlights

Chunking improves RAG not just because of context limits, but because embeddings and retrieval work better when each chunk represents a single semantic unit.
Chunk overlap is a practical fix for abrupt boundary cuts: it preserves context continuity while increasing the number of chunks.
RecursiveCharacterTextSplitter’s paragraph→sentence→word→character recursion is designed to avoid the mid-sentence problems of fixed-size splitting.
SemanticChunker attempts topic-aware splitting by embedding sentences and using similarity statistics (like standard deviation) to find meaning breakpoints, but results depend heavily on threshold tuning.

Topics

  • RAG Chunking
  • LangChain Text Splitters
  • RecursiveCharacterTextSplitter
  • Chunk Overlap
  • SemanticChunker