
How to Use NotebookLM (Google's AI "Tool for Understanding")

Tiago Forte · 5 min read

Based on Tiago Forte's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

NotebookLM works by loading user-provided documents as “sources,” then generating answers and drafts grounded in those specific materials.

Briefing

NotebookLM is Google’s “source-grounded” AI workspace that turns a pile of trusted documents into an interactive writing and understanding assistant—aimed at replacing the tedious parts of note-taking, organizing, and searching. Instead of asking an AI to guess from the open web, users load up to 20 documents (from Google Drive uploads, PDFs, or copied text) and then chat with the system to outline ideas, answer questions, rewrite drafts, and generate structured outputs that stay anchored to those specific sources.

The core workflow starts with creating a notebook, then adding sources in the left-hand panel. For each source, NotebookLM generates a summary and key topics, and it can draw simultaneously from multiple documents—supporting large-scale synthesis rather than one-off Q&A. The transcript highlights a key practical figure: each document can contain up to 200,000 words, so with 20 sources loaded, users can interact with roughly 4 million words of text. Once sources are loaded, the main chat area produces structured responses such as hierarchical outlines, distilled “three main things” summaries, or content rewritten into different formats (poem, narrative, FAQ). If a response is especially useful, a pin feature saves it to a top “pin board” for quick reference.

A key differentiator is how NotebookLM handles citations. When the AI generates arguments or claims, users can click citation markers to see the underlying text that supports each point, with context highlighted. That matters for work that requires traceability—like turning a complex academic paper into plain-language understanding, or drafting persuasive material where readers will want to know where each fact came from.

The transcript walks through several day-to-day use cases. One focuses on comprehension: uploading a technical PDF and asking what a concept means, then drilling into confusing ideas with follow-up questions. Another targets productivity from long documents: feeding meeting transcripts into NotebookLM to extract main points and even produce proposals. A third centers on creative writing as an editor and collaborator—rewriting drafts in a preferred style, converting bullet points into prose, and using additional books as grounded reference material.

The most ambitious writing workflow described is ideation from messy history. By exporting highlights and notes into Google Docs via Readwise, users can load a broad reading trail into NotebookLM and ask it to find connections between books, propose article angles, outline key points, and even draft attention-grabbing opening lines. The transcript emphasizes privacy and control: sources are used only within the notebook, and when leaving the notebook, the AI’s short-term memory is wiped.

Still, limitations are acknowledged. NotebookLM is experimental, with bugs and a learning curve. It currently relies on Google Docs or PDFs for sources and can’t directly ingest notes from tools like Evernote, Notion, or Google Keep—often requiring Readwise as an intermediary. There’s also a hard cap of 20 sources, plus known trouble with math, messy PDF formatting, and extracting precise details from large documents. Even with those rough edges, the pitch is that NotebookLM collapses reading, researching, note-taking, and drafting into one integrated flow—an AI collaborator trained on the data only the user provides, designed to keep people moving without constantly switching tools.

Cornell Notes

NotebookLM is Google’s source-grounded AI workspace that lets users load trusted documents (up to 20 sources) and then chat with the system to understand, outline, rewrite, and draft. The AI produces responses anchored to those documents and supports clickable citations, letting users jump to the exact supporting text in context. The transcript highlights large context capacity—up to 200,000 words per document and about 4 million words across selected sources—making it practical for synthesis across many notes. Common workflows include simplifying complex PDFs, extracting key points from long meeting transcripts, and using the system as an editor for creative or structured writing. For ideation, users can export reading highlights into Google Docs via Readwise, then ask NotebookLM to connect ideas, propose article angles, and draft opening lines.

How does NotebookLM keep answers tied to what a user actually provided, rather than general knowledge?

Users add documents as “sources” inside a notebook—such as Google Drive documents, uploaded PDFs, or copied-and-pasted text. NotebookLM then draws from those sources when generating outlines, summaries, rewrites, and proposals. A major trust feature is clickable citations: generated claims include citation markers that can be clicked to reveal the original supporting text, with the relevant passage highlighted in context in the source side panel.

What does the transcript say about NotebookLM’s scale limits for documents and total text?

Each selected source can contain up to 200,000 words, and users can select up to 20 documents from Google Drive. That combination yields interaction with roughly 4 million words of text (20 × 200,000)—far larger than typical chat-based tools that struggle with long context.

What are the three main interaction areas described for using NotebookLM effectively?

First is the left “sources” column, where users add and manage documents and can view per-source summaries and key topics. Second is the main chat box, where users ask questions or request outputs like outlines, distilled summaries, or rewrites in different formats. Third is a pin board area: when a response is valuable, users can pin it so it stays visible and doesn’t get lost during further conversation.

How does NotebookLM help with comprehension and follow-up on dense material?

The transcript describes uploading a technical academic PDF and asking what a concept means or what role a topic plays (e.g., forgetting in learning). After the AI summarizes the paper, users can ask targeted follow-up questions—such as examples of “negative value” knowledge—so the system expands on specific points while staying grounded in the uploaded text.

Why does Readwise come up, and what problem does it solve for NotebookLM users?

NotebookLM currently expects sources in forms like Google Docs or PDFs, and it can’t directly connect to note apps such as Evernote, Notion, or Google Keep. Readwise acts as an intermediary: it exports Kindle and Readwise Reader highlights into Google Docs in a way that can fit within NotebookLM’s 20-source limit, enabling users to “converse” with a large history of highlights.

What limitations and pitfalls are explicitly called out?

The transcript notes experimental roughness (bugs) and a learning curve due to a new interface. It also highlights source constraints (Google Docs/PDFs only), the 20-source cap, difficulty with math, trouble with messy PDF formatting, and challenges extracting precise details from very large sources. The team reportedly plans fixes, but those issues remain present at the time of the walkthrough.

Review Questions

  1. What mechanisms in NotebookLM (sources, citations, context highlighting) support trust and traceability in generated answers?
  2. How would you design a workflow to turn a long meeting transcript into both a proposal and a detailed curriculum using NotebookLM?
  3. What role does Readwise play in getting a user’s reading history into NotebookLM, and what constraints does it help work around?

Key Points

  1. NotebookLM works by loading user-provided documents as “sources,” then generating answers and drafts grounded in those specific materials.

  2. Users can add up to 20 sources, with up to 200,000 words per document, enabling synthesis across millions of words.

  3. Pinned responses let users keep high-value outputs visible for later reference during ongoing work.

  4. Clickable citations reveal the exact supporting text in context, improving traceability for claims and arguments.

  5. NotebookLM can simplify dense PDFs, extract key points from long transcripts, and generate structured documents like proposals and curricula.

  6. Readwise is often used to export Kindle and Readwise Reader highlights into Google Docs so NotebookLM can ingest them.

  7. Current limitations include experimental bugs, lack of direct integrations with Evernote/Notion/Google Keep, a 20-source cap, and difficulty with math and messy PDF formatting.

Highlights

NotebookLM’s clickable citations let users click a claim and see the original supporting passage highlighted in context.
With up to 20 sources and 200,000 words per document, the system is positioned for large-scale synthesis rather than single-document Q&A.
Readwise exports highlights into Google Docs, enabling NotebookLM to “converse” with a user’s broader reading history for ideation and drafting.
NotebookLM is framed as an integrated workspace that merges reading, researching, note-taking, and writing into one flow to preserve momentum.
