
olmOCR - The Open OCR System

Sam Witteveen · 5 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

olmOCR is an OCR system that reconstructs LLM-ready text from PDFs that are rasterized into images, not just PDFs with extractable text layers.

Briefing

OCR for PDFs is getting a practical upgrade: olmOCR, from Ai2 (the Allen Institute for AI), is a fine-tuned vision-language model designed to turn rasterized PDF pages (including handwriting) into structured, LLM-ready output like Markdown, complete with tables, equations, and multi-column layout handling. The core payoff is straightforward: instead of scraping text from the web or relying on brittle PDF text extraction, this approach treats PDFs as images and reconstructs readable content that can be fed into RAG pipelines or context windows.

The model is built on Qwen2-VL 7B Instruct, then specialized for OCR. Ai2 positions this as a response to a persistent training-data problem: high-quality text for LLM fine-tuning often exists in PDFs, but many PDFs are “printed” into raster images, diagrams, and scanned pages rather than cleanly extractable text. olmOCR addresses that gap by training on a large image dataset (about a quarter million images) sampled from a broader collection that leans heavily on academic papers while also including handwriting and other document types such as brochures, legal documents, diagrams, and slides.

In capability terms, the model outputs more than plain text. It can emit Markdown, preserve structure for tables, and handle equations and multi-column documents. It also supports handwriting OCR, which is typically where open-source OCR systems struggle most. While no OCR system is perfect, olmOCR appears to outperform several other open-source OCR efforts mentioned in the same context, including Marker and MinerU, particularly on historical and math-heavy documents.

Ai2 also pairs the model release with tooling: an interactive demo for uploading documents (up to roughly 10 pages) and comparing results, plus a GitHub repository that includes installation instructions and fine-tuning code. That matters because it turns OCR from a one-off inference trick into something developers can adapt, whether they want to improve formatting, target a specific document domain, or retrain on their own data.

For local use, the transcript outlines a workflow using Transformers and the SGLang inference library, along with the Qwen2-VL processor (which handles tokenization for text and preprocessing for images). PDFs are rendered page by page into base64-encoded images, then passed to the model along with a prompt that requests structured JSON-like output. Practical details matter here: token limits can truncate the generated structure, so the example needs a max token setting high enough that the output closes properly for JSON parsing.
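As a rough sketch of that flow (not the exact code from the video), the snippet below renders one page, builds a chat-style prompt, and runs it through the Qwen2-VL processor and model with Transformers. The model ID, the prompt wording, and the pdf2image dependency are assumptions to check against the olmOCR repository, which ships its own rendering and prompt helpers.

```python
# Sketch: OCR one PDF page with a Qwen2-VL-based olmOCR checkpoint via Transformers.
# Assumptions: the HF model ID, the prompt text, and pdf2image for rasterization.
import torch
from pdf2image import convert_from_path  # pip install pdf2image (requires poppler)
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

MODEL_ID = "allenai/olmOCR-7B-0225-preview"  # check the repo for the current checkpoint

model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Render the first page to a PIL image. The video's workflow base64-encodes each page;
# when calling the model directly through Transformers, the PIL image can be passed as-is.
page = convert_from_path("paper.pdf", dpi=200, first_page=1, last_page=1)[0]

prompt = "Transcribe this page into structured JSON in natural reading order."
messages = [{"role": "user",
             "content": [{"type": "image"}, {"type": "text", "text": prompt}]}]
chat_text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(text=[chat_text], images=[page], return_tensors="pt").to(model.device)

# A generous max_new_tokens so the JSON-like output is not cut off mid-structure.
output_ids = model.generate(**inputs, max_new_tokens=4096)
result = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(result)
```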

The overall message is that olmOCR offers an on-prem alternative to cloud OCR services. With a suitable GPU (the transcript mentions an NVIDIA A100-class setup for full-resolution runs) or a quantized local option via LM Studio, teams can convert PDFs into LLM-ingestible text without sending sensitive documents off-site. For higher throughput, batch processing is recommended, and Ai2's toolkit is positioned as the path toward that production-style workflow.

Cornell Notes

Ai2’s olmOCR is a fine-tuned Qwen2-VL 7B Instruct model built to OCR PDFs that are stored as images rather than extractable text. Trained on roughly 250,000 images drawn from academic papers plus handwriting and other document types, it outputs LLM-ready content in formats like Markdown, including tables, equations, and multi-column layouts. The system can run locally by rendering PDF pages to images (base64) and sending them through a Transformers-based pipeline with the Qwen2-VL processor. A demo and GitHub release provide both inference and fine-tuning code, making it easier to adapt OCR to specific document domains. This matters for RAG and context-window ingestion when privacy or cost rules out cloud OCR.

Why does OCR for PDFs remain difficult even when “OCR” exists as a concept?

Many PDFs aren’t stored as real text. When documents are “printed” into a PDF, the content is often rasterized—turning text, diagrams, and layout into images. That breaks the usual workflow of extracting plain text directly and forces OCR systems to reconstruct content from pixels, including structure like reading order, tables, and multi-column layouts.

What model is olmOCR based on, and what does that imply for its approach?

olmOCR starts from Qwen2-VL 7B Instruct and then fine-tunes it specifically for OCR. Because it’s a vision-language model, it can treat each PDF page as an image and generate structured text output, rather than relying on text-layer extraction. The transcript also notes that the Qwen2-VL processor handles both text tokenization and image preprocessing.

What training data scale and variety does olmOCR use?

The fine-tuning uses about a quarter million images (roughly 250,000), sampled from a larger dataset. The mix includes a large share of academic papers and also handwriting, plus other document types like brochures, legal documents, diagrams, and slides—aiming to generalize across common real-world PDF formats.

What kinds of output does olmOCR produce beyond plain transcription?

The model can output Markdown and handle equations and tables. It also supports multi-column documents and can detect whether elements like tables, diagrams, and rotated text are present. The transcript’s example shows it reconstructing sections such as title, authors, abstract, and introduction in the correct order.

How does the local inference workflow work at a practical level?

The workflow renders a chosen PDF page into an image (the transcript describes base64 encoding), then passes that image along with a prompt into a Transformers pipeline. Output is generated as a structured string (intended to be parsed as JSON), so max token settings must be high enough to avoid truncation that would leave brackets unclosed.
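A minimal sketch of that guard, assuming a hypothetical run_ocr(page, max_new_tokens) wrapper around the generate-and-decode call described above: parse the output as JSON and retry with a larger token budget if truncation left it unparseable.

```python
import json

def ocr_page_with_retry(page, run_ocr, max_new_tokens=2048, limit=8192):
    """Parse OCR output as JSON, doubling the token budget if it looks truncated.

    run_ocr(page, max_new_tokens) is a hypothetical wrapper around the model's
    generate/decode call; it returns the raw generated string.
    """
    while max_new_tokens <= limit:
        raw = run_ocr(page, max_new_tokens=max_new_tokens)
        try:
            return json.loads(raw)       # well-formed: all brackets closed
        except json.JSONDecodeError:
            max_new_tokens *= 2          # likely truncated; allow more tokens
    raise ValueError("OCR output never parsed as JSON; inspect the raw string")
```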

What hardware and deployment options are mentioned for running olmOCR locally?

For full-resolution runs, the transcript calls for a GPU with substantial VRAM, mentioning an NVIDIA A100-class setup. It also notes a GGUF quantized version and a path to run quantized models via LM Studio on a PC, which supports an on-prem, privacy-friendly alternative to cloud OCR.

Review Questions

  1. How does rasterized PDF content change the OCR pipeline compared with PDFs that contain a text layer?
  2. What failure mode can occur when generating structured JSON-like OCR output, and how does max token length affect it?
  3. Why might batch processing be important for converting many PDFs, and what does the transcript suggest as the production-oriented approach?

Key Points

  1. olmOCR is an OCR system that reconstructs LLM-ready text from PDFs that are rasterized into images, not just PDFs with extractable text layers.

  2. The model is fine-tuned from Qwen2-VL 7B Instruct and is trained on about 250,000 images spanning academic papers, handwriting, and other document types.

  3. Outputs can include Markdown plus structural elements like tables, equations, and multi-column reading order.

  4. A demo and GitHub release provide both an interactive way to test documents and fine-tuning code for adapting the system to custom needs.

  5. Local deployment is supported by rendering PDF pages to base64 images and running inference through a Transformers-based pipeline with the Qwen2-VL processor.

  6. Token limits matter for structured output: insufficient max tokens can truncate JSON-like results and break parsing.

  7. For scale, sequential page processing is a starting point, but batch mode is recommended for higher-volume conversion workflows.

Highlights

  • olmOCR targets the common “PDF-as-images” problem by using a vision-language model fine-tuned for OCR, enabling structured reconstruction instead of fragile text extraction.
  • Training data blends academic papers with handwriting and other real document formats, which helps the model handle both printed and handwritten content.
  • The system can output Markdown and preserve structure like tables, equations, and multi-column layout, which is key for feeding content into RAG and context windows.
  • Local runs are feasible via quantized options (e.g., GGUF with LM Studio) or full-resolution GPU setups, offering an on-prem alternative to cloud OCR.

Topics

Mentioned

  • OCR
  • RAG
  • LLM
  • SGLang
  • GGUF
  • VRAM