
Two Workflows for Reading, Note-Taking, and Visual Thinking that Are Transforming the Way I Use AI

5 min read

Based on a video from Zsolt's Visual Personal Knowledge Management channel on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Use ChatGPT as a short reflective partner during audiobook listening, then import only the chapter-level summary into Obsidian.

Briefing

AI use doesn’t have to mean handing over private notes. Zsolt’s two workflows aim to keep personal information “ring-fenced” while still using AI to generate summaries, connections, and visual structures that make reading and knowledge work stick.

The first workflow targets non-fiction reading, where audio-only consumption often fails to produce recall. Instead of treating the audiobook as the primary input, he listens on Audible while keeping ChatGPT open on his phone. After roughly 5–10 minutes of listening (or whenever an idea strikes), he pauses the chapter and holds a short, interactive voice conversation with ChatGPT, positioning it less as a long chat partner and more as a reflective notebook. He uses the discussion to refine his articulation (especially valuable since he is not a native English speaker), then asks ChatGPT for a concise summary at the end of each chapter. Crucially, he imports only that AI-generated summary into Obsidian, not highlights or raw book notes, and uses a "flipped notes" layout to pair the summary with an illustration created from the same conversation. He also keeps each chapter in a separate chat so ideas don't bleed across chapters and the workflow stays manageable.

Several practical rules make the method work reliably. Long voice reflections can trigger interruptions, but holding the screen prevents ChatGPT from responding until he releases it. Multitasking breaks the experience; he recommends staying physically engaged—walking, pacing, or even using idle moments like standing in line or waiting at traffic lights—because listening plus voice reflection doesn’t require constant attention on the phone. He also insists on timing: generate the summary immediately after finishing the chapter, import it right away, and move on. That freshness helps catch omissions or additions ChatGPT makes compared with what he personally considered important.

The second workflow builds a visual “external loop” from a whole body of knowledge. He starts by feeding books and notes about visual thinking into NotebookLM, then prompts it to generate a podcast-style overview (with a tight 500-character prompt limit). After downloading the podcast, he converts it to MP3 with Audacity, imports it into Obsidian, and uses NotebookLM again to transcribe the audio. Next, he uploads the transcript to ChatGPT to produce a Mermaid mind map; since Excalidraw struggles with mind maps, he requests a Mermaid flowchart instead. He then imports the flowchart into Excalidraw, rearranges branches into a more map-like layout, tweaks styling, and splits the canvas so the mind map sits beside the text while he creates illustrations. The result is a connected visual index: book covers act as links into his Obsidian vault, and the map becomes a jump-off point for future exploration.

Together, the workflows aim at selective sharing—external materials like books and curated reflections rather than a dump of private notes—while still producing summaries, structure, and new perspectives. Both workflows are presented as free to try: NotebookLM, ChatGPT, Obsidian, and Excalidraw are all used without paid subscriptions, with the broader claim that motion-friendly reading and AI-assisted visual synthesis can change how information turns into usable knowledge.

Cornell Notes

The core idea is to use AI for reading and knowledge synthesis without exposing personal notes. For non-fiction, he listens to an audiobook while running short voice conversations with ChatGPT every 5–10 minutes, then imports only a chapter-level summary into Obsidian and pairs it with an illustration in a flipped-notes format. He keeps each chapter in a separate chat and generates summaries immediately to preserve recall and catch missing details. For a broader topic, he feeds a library of materials into NotebookLM, generates a podcast overview, downloads and transcribes it, then uses ChatGPT to create a Mermaid flowchart that he rebuilds in Excalidraw as a linked visual map. The payoff is a privacy-conscious, visual “external loop” that produces new perspectives and future navigation paths.

Why does the audiobook + reflection workflow focus on non-fiction, and what problem is it trying to solve?

Non-fiction often doesn’t stick when consumed as audio alone, especially for abstract or technical topics. The workflow treats the audiobook as a trigger for reflection rather than the final record: after short listening intervals (about 5–10 minutes), the reader pauses and uses ChatGPT voice to capture reflections and then requests a concise chapter summary. That summary—not raw highlights—is what gets imported into Obsidian, helping ensure recall and retention.

What exactly gets imported into Obsidian in the reading workflow, and why does that matter for privacy?

Only the chapter summary generated from the ChatGPT voice conversation is imported into Obsidian. The reader explicitly avoids importing chapter highlights or other raw book-derived artifacts. This keeps personal notes private and limits what’s shared with AI to reflections that are “ring-fenced” within the context of a small portion of the book (e.g., within chapter-level interactions).

How does the workflow prevent the reflection chat from becoming unwieldy or mixing ideas across chapters?

Each chapter uses a separate ChatGPT chat. At the start of each chat, he prompts ChatGPT that he’s listening to the book, taking mental notes, and wants short interactions rather than long conversations. This segmentation reduces cross-chapter spillover and makes it easier to manage summaries and illustrations per chapter.

What practical tactics improve the reliability of voice reflection with ChatGPT?

If reflections run long, ChatGPT may interrupt. A workaround is to press and hold the screen so ChatGPT won’t respond until the hold is released. Also, multitasking breaks the experience; he recommends pacing or walking so attention stays on listening plus reflection rather than browsing or drifting.

How does the second workflow turn a large knowledge set into a navigable visual map?

NotebookLM is used as an intermediate “knowledge engine”: books/notes are uploaded, NotebookLM generates a podcast overview, the podcast is downloaded (converted to MP3 via Audacity), and NotebookLM transcribes the audio. The transcript is then sent to ChatGPT to generate a Mermaid diagram, which is converted into an Excalidraw-compatible flowchart. In Excalidraw, branches are rearranged and styled, and book covers become clickable links into the Obsidian vault—turning the map into a future exploration index.

Why does the workflow switch from Mermaid mind maps to flowcharts?

Excalidraw (specifically its Mermaid processing) doesn’t handle mind maps well. After an initial Mermaid mind map attempt, the workflow requests a Mermaid flowchart instead, which imports cleanly. The flowchart is then rearranged into a map-like layout inside Excalidraw.
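As a concrete sketch of the kind of diagram that survives the import (the node labels here are invented for illustration, not taken from the video), a Mermaid flowchart expresses hierarchy through explicit edges, which Excalidraw's Mermaid importer can translate into editable shapes and arrows:

```mermaid
flowchart TD
  A((Visual Thinking)) --> B[Sketchnoting]
  A --> C[Concept Maps]
  A --> D[Excalidraw Workflows]
  C --> E[Linking Ideas in Obsidian]
```

A Mermaid `mindmap` block, by contrast, expresses hierarchy purely through indentation, which the importer handles poorly; asking ChatGPT for `flowchart TD` output sidesteps the problem, and the branches can then be dragged into a radial, map-like layout inside Excalidraw.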

Review Questions

  1. In the non-fiction reading workflow, what is the minimum unit of AI output that gets imported into Obsidian, and what does that choice protect?
  2. What sequence of tools transforms a topic library into a linked visual map in the NotebookLM → podcast → transcript → Mermaid → Excalidraw workflow?
  3. Which timing and chat-structure rules (e.g., separate chats per chapter, immediate summaries) are used to improve recall and prevent idea mixing?

Key Points

  1. Use ChatGPT as a short reflective partner during audiobook listening, then import only the chapter-level summary into Obsidian.
  2. Pause every 5–10 minutes (or when an idea triggers) to keep reflections grounded and summaries accurate.
  3. Generate and import the chapter summary immediately after finishing the chapter to preserve freshness and reduce memory loss.
  4. Keep each chapter in a separate ChatGPT chat with a prompt that limits conversation length to prevent cross-chapter contamination.
  5. Avoid multitasking during the reflection process; pace or walk to stay engaged while using voice reflection.
  6. For topic-level synthesis, feed a curated knowledge set into NotebookLM, generate a podcast overview, download and transcribe it, then convert the transcript into a Mermaid flowchart.
  7. Build the final visual map in Excalidraw with clickable links back to Obsidian so the output becomes a navigable index for future exploration.

Highlights

  • Non-fiction audio works poorly for recall when consumed passively; the workflow fixes this by pausing frequently and turning reflections into chapter summaries.
  • Privacy is handled by importing only AI-generated summaries into Obsidian—avoiding direct import of personal highlights or raw notes.
  • A separate ChatGPT chat per chapter prevents ideas from blending and keeps summaries manageable.
  • NotebookLM can act as a “knowledge-to-story” engine: podcast overview → transcript → Mermaid diagram → Excalidraw map.
  • Excalidraw’s Mermaid mind map limitations push the workflow toward Mermaid flowcharts for reliable diagram rendering.

Topics

  • Non-fiction Reading
  • Privacy-Conscious AI
  • Voice Reflection
  • Visual Knowledge Maps
  • NotebookLM Workflows