
Ditch The Old Google Scholar | This AI Method Finds Papers 10x Faster

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use AI semantic search tools to retrieve papers from plain-language research questions, then export results into a reference manager.

Briefing

Starting a research project doesn’t have to mean hours of manual searching through Google Scholar. A faster workflow is to use AI-powered semantic search to build a “dump” library of potentially relevant papers first, then use AI tools to impose structure on what to read and in what order—before drilling down with seed-paper citation maps.

The process begins with research questions. Instead of relying only on keywords, semantic search tools let researchers type a plain-language question (even with imperfect spelling) and receive ranked results plus summaries. The transcript highlights several options: Elicit returns top results with brief summaries and a list of papers that can be exported into a reference manager; Consensus offers similar question-to-results retrieval and includes visual indicators of paper quality, with export options to EndNote, Zotero, and Mendeley; S2 (described as “size”) generates a set of relevant science papers from a query; and Undermind stands out by estimating how many relevant papers were likely found (e.g., “about 62 papers” or roughly “68% of all of the papers”) and offering a way to push further. The practical takeaway is to capture broadly at the beginning—potentially including weaker matches—so the researcher can filter later.
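
The question-to-results step can also be sketched programmatically. Semantic Scholar’s public Graph API accepts a plain-language query at its paper-search endpoint; the helper names below (`build_search_url`, `summarize_results`) and the sample response are illustrative, not part of any tool discussed in the transcript.

```python
# Sketch: querying a semantic search API with a plain-language question.
# Semantic Scholar's public Graph API is used as the example endpoint;
# the helper functions and the mocked response are illustrative.
from urllib.parse import urlencode

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(question: str, limit: int = 20) -> str:
    """Turn a plain-language research question into a search request URL."""
    params = {"query": question, "limit": limit, "fields": "title,year,abstract"}
    return f"{S2_SEARCH}?{urlencode(params)}"

def summarize_results(response: dict) -> list[str]:
    """Flatten an API response into 'Title (year)' strings for a dump folder."""
    return [f"{p['title']} ({p.get('year', 'n.d.')})" for p in response.get("data", [])]

# Offline demonstration with a mocked response shaped like the API's
# {"total": ..., "data": [...]} payload.
sample = {
    "total": 2,
    "data": [
        {"title": "Attention Is All You Need", "year": 2017},
        {"title": "BERT: Pre-training of Deep Bidirectional Transformers", "year": 2018},
    ],
}
print(build_search_url("how do transformers handle long documents?", limit=5))
print(summarize_results(sample))
```

The point of the sketch is the shape of the workflow: a ranked list comes back for a question, not a keyword string, and each hit is immediately ready to drop into a dump library.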

All retrieved papers are funneled into a Zotero “dump folder” (or equivalent reference manager). The goal is not immediate precision; it’s coverage. Once that initial corpus exists, the workflow shifts from collecting to understanding the field. Keywords still matter, but the transcript recommends using them more strategically: extract recurring terms while reading, or ask ChatGPT for Google Scholar search keywords tailored to the research area. For more targeted retrieval, Google Scholar Labs is positioned as the upgrade path—an AI-powered interface where detailed research questions can produce relevant papers alongside “why it matters” bullet points and easy import into reference tools.
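
The “dump folder” hand-off can be automated with a plain-text interchange format. A minimal sketch, assuming your search tool can export paper metadata as dictionaries: RIS is a standard tagged format that Zotero, EndNote, and Mendeley all import, so serializing hits to RIS gives a one-step route into the dump collection. The `to_ris` helper and the sample records are illustrative.

```python
# Sketch: serializing collected search hits to RIS, a tagged plain-text
# format that Zotero (and EndNote/Mendeley) can import into a dump folder.
# The paper dicts are illustrative; adapt the keys to your tool's export.

def to_ris(papers: list[dict]) -> str:
    """Render each paper as a minimal RIS record (journal-article type)."""
    records = []
    for p in papers:
        lines = ["TY  - JOUR", f"TI  - {p['title']}"]
        for author in p.get("authors", []):
            lines.append(f"AU  - {author}")
        if "year" in p:
            lines.append(f"PY  - {p['year']}")
        lines.append("ER  - ")  # every RIS record ends with an ER tag
        records.append("\n".join(lines))
    return "\n".join(records)

dump = [
    {"title": "A Survey of Something", "authors": ["Doe, J."], "year": 2021},
    {"title": "Another Relevant Paper", "year": 2019},
]
ris_text = to_ris(dump)
print(ris_text)
```

Saving the output as a `.ris` file and using Zotero’s import dialog lands everything in one collection, which matches the “coverage first, precision later” goal above.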

After gathering a broad set of references, the next bottleneck is deciding what to read first. Two tools are emphasized for reading order and comprehension scaffolding. Undermind can propose a structured reading plan, including a “minimal orientation pack” that starts with foundational material before moving into deeper sections. NotebookLM can ingest multiple files and generate a navigable map of concepts; clicking through topics produces summaries of what a reader should know, helping prevent getting overwhelmed.

As reading continues, the workflow becomes deliberately curated. Papers that stand out—whether because they’re foundational, methodological, or uniquely relevant—are added to a curated reading list organized into “umbrellas” (broad topics) and subfolders (niche subtopics). When it’s time to expand a specific area, the transcript recommends seed-paper citation mapping tools: Research Rabbit (free) generates interactive maps where users can follow “similar,” references, and “cited by” paths; Connected Papers (also free) provides a graph with “prior works” and “derivative works,” which helps identify what came before and what followed a key seed paper. A paid option, Litmaps Pro, is mentioned as more user-friendly but not necessary if free alternatives are available.
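
Conceptually, what Research Rabbit and Connected Papers do from a seed paper is a breadth-first walk over citation edges. The sketch below shows that idea on a hand-built mock graph; the edge data and the `expand` helper are illustrative, not an interface to either tool.

```python
# Sketch: seed-paper expansion as breadth-first search over citation edges.
# The graph is mock data; tools like Research Rabbit or Connected Papers
# perform the same traversal over real citation databases.
from collections import deque

# paper -> its "references" (prior works) and "cited_by" (derivative works)
edges = {
    "seed": {"references": ["prior-A", "prior-B"], "cited_by": ["deriv-C"]},
    "prior-A": {"references": [], "cited_by": ["seed"]},
    "prior-B": {"references": ["prior-A"], "cited_by": ["seed"]},
    "deriv-C": {"references": ["seed"], "cited_by": []},
}

def expand(seed: str, max_hops: int = 2) -> set[str]:
    """Collect every paper within max_hops citation links of the seed."""
    seen, queue = {seed}, deque([(seed, 0)])
    while queue:
        paper, hops = queue.popleft()
        if hops == max_hops:
            continue
        links = edges.get(paper, {})
        for neighbor in links.get("references", []) + links.get("cited_by", []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return seen

print(sorted(expand("seed", max_hops=1)))
# -> ['deriv-C', 'prior-A', 'prior-B', 'seed']
```

Limiting `max_hops` is the programmatic analogue of staying close to the seed: one hop gives the immediate prior and derivative works, while more hops grow the graph quickly.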

Overall, the method treats literature search as stages: broad capture into a dump library, structured reading via AI-generated plans and concept maps, and targeted expansion using seed-paper networks. The payoff is speed—saving “hours and hours of work”—while still building a defensible, organized understanding of the research landscape.

Cornell Notes

The workflow prioritizes speed and coverage early: use AI semantic search tools to pull a wide set of relevant papers into a reference-manager “dump” folder, then filter and organize later. Tools like Elicit, Consensus, S2, and especially Undermind help generate ranked results and even estimate how many relevant papers were missed. After collection, Undermind and NotebookLM provide suggested reading orders and concept maps to prevent getting lost in the deep end. Once key “seed papers” are identified, Research Rabbit and Connected Papers expand the literature by following similar work, references, and citation relationships. This staged approach—dump first, curate second, map third—reduces manual searching while keeping the research process structured.

Why start with a “dump file” instead of immediately selecting only the best papers?

The transcript frames early-stage searching as a coverage problem. AI semantic search can miss relevant work or surface weaker matches; capturing everything into a dump folder (e.g., a Zotero dump) ensures the researcher won’t overlook important threads. The workflow then filters during the reading phase, using curated reading lists and reading-order suggestions to decide what truly matters.

What does Undermind add beyond returning a list of papers?

Undermind provides an estimate of retrieval completeness. In the example, it suggests it found “about 62 papers,” described as roughly “68% of all of the papers,” and offers a way to push further to find additional relevant items. That completeness estimate helps researchers understand whether their search is likely missing major parts of the literature.

How do Google Scholar Labs and keyword-based searching differ in practice?

Keyword searching remains useful for casting a wide net as you learn a field’s vocabulary, but it is inherently broad and poorly suited to answering specific questions. Google Scholar Labs is positioned as more effective for targeted queries: it accepts detailed research questions, returns relevant papers, and includes bullet points explaining why each paper matters, with options to import into EndNote or Zotero.

How do Undermind and NotebookLM help with the “what should I read first?” problem?

Undermind can generate a suggested reading order, including a “minimal orientation pack” that starts with basics before deeper material—useful when the literature is overwhelming. NotebookLM can ingest multiple files and produce a concept map; clicking topics generates summaries of what to know, guiding the reader through sections like introductions and motivations in a structured way.

What’s the role of seed papers and citation maps in the workflow?

Seed papers act as high-relevance anchors. From a seed paper, tools like Research Rabbit and Connected Papers expand outward by following relationships such as “similar,” references, and “cited by.” Connected Papers specifically highlights “prior works” and “derivative works,” making it easier to identify what came before and what built on the seed paper.

Review Questions

  1. If you already know your research question, what’s the rationale for still using semantic search to build a broad dump folder first?
  2. How would you decide when to switch from broad umbrella searching to seed-paper expansion for a specific subtopic?
  3. What signals in Undermind’s output would tell you your search might be missing relevant papers?

Key Points

  1. Use AI semantic search tools to retrieve papers from plain-language research questions, then export results into a reference manager.

  2. Create a Zotero “dump” folder early to maximize coverage; filter and curate during reading rather than at retrieval time.

  3. Undermind’s retrieval completeness estimate (e.g., percentage of likely relevant papers found) helps decide whether to run additional searches.

  4. Use Google Scholar Labs for targeted question answering with paper explanations, and use keywords to refine searches as you learn the field.

  5. Generate a structured reading order with Undermind and concept navigation with NotebookLM to avoid getting overwhelmed.

  6. Maintain a curated reading list organized by broad umbrellas and subtopics, adding standout papers as they emerge.

  7. Expand specific areas using seed-paper networks via Research Rabbit or Connected Papers, especially leveraging “prior works” and “derivative works.”

Highlights

Undermind estimates how many relevant papers were likely found (e.g., “about 62 papers” or “68%”), turning literature search into a measurable coverage task.
A two-stage workflow—dump everything first, then curate and read in order—reduces the paralysis that comes from trying to be perfect at retrieval time.
Seed-paper mapping tools like Research Rabbit and Connected Papers make it easy to follow “similar,” “references,” and “cited by” paths to grow the literature graph.
NotebookLM’s concept map approach helps readers build knowledge step-by-step instead of jumping straight into dense sections.
