Note-taking for research (and how to "chat" with it)
Based on Reflect Notes' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Briefing
A practical research workflow hinges on two moves: capture sources in a structured way, then turn that captured material into searchable, AI-assisted outputs. The core idea is to build a “research brain” where every link, highlight, and transcription is saved with consistent tags—so later you can quickly retrieve relevant notes and even generate drafts (posts, outlines, summaries) using only the notes you’ve already collected.
The process starts before any writing. First, list every source category you'll draw from—articles, blog posts, research papers, videos, books, and even interviews or lectures. For each category, get specific about where the information will be captured (e.g., YouTube vs. Vimeo, where links are saved in Chrome, Kindle vs. physical books). Then decide the capture method for each format: a browser extension for article highlights, a Kindle sync flow for highlighted passages in books, and a voice transcriber for interviews, lectures, or spoken notes. The workflow emphasizes not just saving "a video" or "a book," but saving the exact text or segments you care about.
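The "list sources first" step can be sketched as a simple mapping from source type to capture location and method, decided up front. All names below are illustrative, not part of any real Reflect API.

```python
# Hypothetical capture plan: each source type gets a capture location
# and method before any research or writing begins.
capture_plan = {
    "articles":   {"where": "Chrome",  "method": "browser-extension highlights"},
    "videos":     {"where": "YouTube", "method": "transcript segments"},
    "books":      {"where": "Kindle",  "method": "highlight sync"},
    "interviews": {"where": "voice",   "method": "voice transcriber"},
}

def capture_method(source_type: str) -> str:
    """Look up how a given source type should be captured."""
    entry = capture_plan.get(source_type)
    return entry["method"] if entry else "undecided"

print(capture_method("books"))     # highlight sync
print(capture_method("podcasts"))  # undecided — not yet planned
```

Writing the plan down as data (rather than deciding ad hoc per source) is what keeps later captures consistent.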
For web articles, the Chrome extension workflow is straightforward: highlight text inside an article, save those highlights into Reflect, and keep the original link so the material can be revisited later. The transcript notes that saving everything can be useful when learning or researching deeply, because AI can distill the important parts later. Each saved item can include a description field that doubles as a tagging mechanism—adding a tag like “B2G research” so related notes cluster automatically. Instead of building a messy network of dedicated notes and backlinks, the workflow favors tags as the primary organizing layer.
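A saved highlight in this scheme is essentially a small record: the highlighted text, the original link, and one or more tags. The record shape below is a sketch of that idea, not Reflect's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Highlight:
    """Hypothetical shape of a saved web highlight."""
    text: str
    source_url: str            # kept so the material can be revisited later
    tags: set[str] = field(default_factory=set)  # tags as the organizing layer

notes = [
    Highlight("Local government buyers move slowly.",
              "https://example.com/a", {"B2G research"}),
    Highlight("Semantic search retrieves related concepts.",
              "https://example.com/b", {"search"}),
]

def by_tag(notes: list, tag: str) -> list:
    """Cluster related notes by shared tag instead of backlinks."""
    return [n for n in notes if tag in n.tags]

print(len(by_tag(notes, "B2G research")))  # 1
```

Because grouping happens at retrieval time via the tag, no dedicated hub notes or backlink maintenance is needed.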
Once material is saved, organization shifts from raw highlights to digestible summaries. When a note contains a large block of highlights, the workflow uses an AI assistant to extract “key takeaways,” producing a distilled section that’s easier to reuse across projects. This distillation step can be repeated across multiple saved sources, turning an overwhelming pile of text into a set of reusable insights.
Search and retrieval are where tags pay off. Advanced search can filter by tag first, then refine results using either exact keyword matching or semantic search—useful when the exact phrase isn’t remembered. For example, searching within “B2G research” for “local government” surfaces notes that mention that topic, while semantic search can find related concepts even if the exact wording (like “sales flow”) never appears.
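The tag-first retrieval described above can be sketched as a two-stage filter: narrow by tag, then match either exactly (substring) or "semantically." Here word overlap stands in for real embedding similarity; a production system would use vector embeddings instead.

```python
# Toy notes store: each note has text and a set of tags.
notes = [
    {"text": "Local government procurement cycles are long.", "tags": {"B2G research"}},
    {"text": "The sales pipeline for cities differs from B2B.", "tags": {"B2G research"}},
    {"text": "Kindle sync imports book highlights.", "tags": {"capture"}},
]

def search(notes, tag, query, semantic=False):
    """Filter by tag first, then by exact substring or fuzzy word overlap."""
    candidates = [n for n in notes if tag in n["tags"]]
    if not semantic:
        return [n for n in candidates if query.lower() in n["text"].lower()]
    # Crude stand-in for semantic search: rank by shared words with the query.
    q = set(query.lower().split())
    scored = [(len(q & set(n["text"].lower().rstrip(".").split())), n)
              for n in candidates]
    return [n for score, n in sorted(scored, key=lambda s: -s[0]) if score > 0]

print(len(search(notes, "B2G research", "local government")))                 # 1
print(len(search(notes, "B2G research", "sales for government", semantic=True)))  # 2
```

The key property is that the tag filter shrinks the search space before any matching runs, so even loose semantic matching stays scoped to the relevant project.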
Finally, the workflow supports “chatting with research.” A chat interface uses only the notes associated with the selected tag as context, enabling tasks like summarizing themes or drafting content. The transcript gives an example of generating a LinkedIn draft post about B2G sales, then iterating for concision and a more human tone. Beyond chat, the AI assistant can also format outputs directly—such as generating tweet suggestions, article outlines, or other structured drafts—making the saved research immediately actionable for writing, publishing, or paper work.
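The "chat with research" step amounts to assembling a prompt whose context is only the notes carrying the chosen tag, then handing it to a language model. The sketch below shows the prompt-assembly half; the model call itself is omitted, and nothing here reflects Reflect's actual internals.

```python
notes = [
    {"text": "B2G deals need multiple stakeholder sign-offs.", "tags": {"B2G research"}},
    {"text": "Tweet drafts should stay under 280 characters.", "tags": {"writing"}},
]

def build_prompt(notes, tag, task):
    """Build an LLM prompt scoped to notes with the selected tag only."""
    context = "\n".join(f"- {n['text']}" for n in notes if tag in n["tags"])
    return f"Using ONLY these notes as context:\n{context}\n\nTask: {task}"

prompt = build_prompt(notes, "B2G research",
                      "Draft a concise, human-sounding LinkedIn post about B2G sales.")
print(prompt)
```

Scoping the context to the tag is what distinguishes this from generic AI chat: the draft is grounded in material you collected, and unrelated notes (like the "writing" note above) never enter the prompt.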
Cornell Notes
The workflow centers on capturing research sources into Reflect with consistent tags, then using AI to distill and reuse that material. Sources are first listed by type (articles, videos, books, interviews), and each type gets a specific capture method: a Chrome extension for web highlights, Kindle sync for highlighted passages, and a voice transcriber for spoken or video notes. Saved items include tags (e.g., “B2G research”) so they can be retrieved later through advanced search, using exact or semantic matching. Large highlight blocks become usable by running AI prompts like “key takeaways.” Finally, a chat interface can generate drafts (LinkedIn posts, tweets, outlines) using only the tagged notes as context.
- Why does the workflow emphasize tags over backlinks for organizing research notes?
- What's the recommended approach for capturing information from web articles?
- How does the workflow turn a pile of highlights into something usable?
- How does advanced search work when the exact wording isn't remembered?
- What does "chatting with research" do differently from generic AI chat?
- What capture options are mentioned beyond web links?
Review Questions
- If you’re researching a new topic and don’t know what details you’ll need later, which capture strategy in this workflow helps most, and why?
- How would you design a tag-based system so you can later find notes even when you don’t remember the exact phrase used in the source?
- What are two ways the workflow turns raw research into publishable material besides plain summarization?
Key Points
1. Start by listing every source type and specifying exactly where each will be captured (e.g., YouTube vs. Vimeo, Chrome link storage, Kindle vs. physical books).
2. Use consistent tagging in Reflect (such as "B2G research") so related notes can be retrieved as a group later.
3. For web articles, highlight and save text via the Reflect Chrome extension, keeping the original link for easy revisiting.
4. Distill large highlight sets using AI prompts like "key takeaways" to convert raw notes into reusable summaries.
5. Use advanced search filtered by tag, then choose exact matching or semantic search depending on whether you remember the exact wording.
6. Generate drafts through "chat with research," where the AI draws context only from notes tied to the selected tag.
7. Save custom AI prompts for recurring output needs (e.g., LinkedIn posts, tweet suggestions, article outlines) to speed up future writing.