Ditch The Old Google Scholar | This AI Method Finds Papers 10x Faster
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Use AI semantic search tools to retrieve papers from plain-language research questions, then export results into a reference manager.
Briefing
Starting a research project doesn’t have to mean hours of manual searching through Google Scholar. A faster workflow is to use AI-powered semantic search to build a “dump” library of potentially relevant papers first, then use AI tools to impose structure on what to read and in what order—before drilling down with seed-paper citation maps.
The process begins with research questions. Instead of relying only on keywords, semantic search tools let researchers type a plain-language question (even with imperfect spelling) and receive ranked results plus summaries. The transcript highlights several options: Elicit returns top results with brief summaries and a list of papers that can be exported into a reference manager; Consensus offers similar question-to-results retrieval and includes visual indicators of paper quality, with export options for EndNote, Zotero, and Mendeley; S2 (rendered as "size" in the transcript) generates a set of relevant science papers from a query; and Undermind stands out by estimating how many relevant papers were likely found (e.g., "about 62 papers" or roughly "68% of all of the papers") and offering a way to push the search further. The practical takeaway is to capture broadly at the beginning, even including weaker matches, so the researcher can filter later.
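Undermind does not disclose how it computes its completeness estimate. One classic way to estimate "fraction of relevant papers found" from search data alone is a capture-recapture (Lincoln-Petersen) calculation over two independent search passes; the sketch below uses that standard estimator with made-up paper IDs, purely as an illustration of the idea:

```python
def lincoln_petersen(found_a, found_b):
    """Estimate the total pool of relevant papers from two independent searches.

    found_a, found_b: sets of paper IDs returned by each search pass.
    Returns (estimated_total, estimated_coverage_of_combined_results).
    """
    overlap = len(found_a & found_b)
    if overlap == 0:
        raise ValueError("no overlap between passes; estimate undefined")
    # Lincoln-Petersen estimator: N ~ |A| * |B| / |A intersect B|
    total_est = len(found_a) * len(found_b) / overlap
    coverage = len(found_a | found_b) / total_est
    return total_est, coverage

# Two hypothetical search passes that share three results
pass_a = {"p1", "p2", "p3", "p4", "p5", "p6"}
pass_b = {"p4", "p5", "p6", "p7", "p8", "p9"}
est, cov = lincoln_petersen(pass_a, pass_b)
# est -> 12.0 estimated relevant papers; cov -> 0.75 (9 of ~12 found)
```

A low coverage figure is the signal, mirroring Undermind's behavior, that another search pass is worth running.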
All retrieved papers are funneled into a Zotero “dump folder” (or equivalent reference manager). The goal is not immediate precision; it’s coverage. Once that initial corpus exists, the workflow shifts from collecting to understanding the field. Keywords still matter, but the transcript recommends using them more strategically: extract recurring terms while reading, or ask ChatGPT for Google Scholar search keywords tailored to the research area. For more targeted retrieval, Google Scholar Labs is positioned as the upgrade path—an AI-powered interface where detailed research questions can produce relevant papers alongside “why it matters” bullet points and easy import into reference tools.
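The "extract recurring terms while reading" step can be roughed out mechanically from the dump folder's metadata. This sketch (the titles and stop-word list are illustrative, not from the transcript) tallies word frequency across exported titles to surface candidate keywords:

```python
import re
from collections import Counter

# Hypothetical titles exported from a Zotero "dump" folder
titles = [
    "Semantic search for systematic literature reviews",
    "A survey of semantic search in scholarly databases",
    "Citation networks and literature mapping",
]

STOP_WORDS = {"a", "of", "for", "in", "and", "the"}

def recurring_terms(titles, top_n=5):
    """Return the most frequent non-stop-words across a list of titles."""
    words = []
    for title in titles:
        words += [w for w in re.findall(r"[a-z]+", title.lower())
                  if w not in STOP_WORDS]
    return Counter(words).most_common(top_n)

# "semantic", "search", and "literature" each recur across titles,
# making them candidate Google Scholar keywords
```

In practice the same tally would run over abstracts as well, and the resulting terms feed the more targeted keyword searches the transcript describes.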
After gathering a broad set of references, the next bottleneck is deciding what to read first. Two tools are emphasized for reading order and comprehension scaffolding. Undermind can propose a structured reading plan, including a “minimal orientation pack” that starts with foundational material before moving into deeper sections. NotebookLM can ingest multiple files and generate a navigable map of concepts; clicking through topics produces summaries of what a reader should know, helping prevent getting overwhelmed.
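Undermind's reading-plan logic is not disclosed; a crude approximation of "foundational material first" is to sort the dump by citation count and age, so field-defining work surfaces before recent niche papers. The records and weighting below are illustrative assumptions, not the tool's actual method:

```python
# Hypothetical records; "year" and "citations" would come from the
# reference manager's metadata export
papers = [
    {"title": "Recent niche method", "year": 2023, "citations": 12},
    {"title": "Field-defining survey", "year": 2005, "citations": 4200},
    {"title": "Key follow-up study", "year": 2012, "citations": 800},
]

def orientation_order(papers):
    """Foundational-first heuristic: highly cited, older work rises to the top."""
    return sorted(papers, key=lambda p: (-p["citations"], p["year"]))

for p in orientation_order(papers):
    print(p["title"])
```

The point is only that a reading order can be derived from metadata already sitting in the dump folder; tools like Undermind layer semantic judgment on top of signals like these.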
As reading continues, the workflow becomes deliberately curated. Papers that stand out, whether foundational, methodological, or uniquely relevant, are added to a curated reading list organized into "umbrellas" (broad topics) and subfolders (niche subtopics). When it's time to expand a specific area, the transcript recommends seed-paper citation-mapping tools: Research Rabbit (free) generates interactive maps where users can follow "similar work," "references," and "cited by" paths; Connected Papers (also free) provides a graph with "prior works" and "derivative works," which helps identify what came before and what followed a key seed paper. A paid option, Litmaps Pro, is mentioned as more user-friendly but not necessary given the free alternatives.
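The "prior works" and "derivative works" idea amounts to walking a citation graph outward from a seed paper in both directions. This toy sketch (the graph itself is invented) collects everything reachable within a given depth, which is roughly what the mapping tools visualize:

```python
from collections import deque

# Toy citation graph: paper -> papers it cites ("prior works")
cites = {
    "seed": ["A", "B"],
    "C": ["seed"],      # C cites seed, so C is a "derivative work"
    "D": ["C"],
    "A": [],
    "B": [],
}

def expand(seed, depth=2):
    """Breadth-first walk over references and citations from a seed paper."""
    # Invert the graph to get "derivative works" (who cites whom)
    cited_by = {}
    for paper, refs in cites.items():
        for ref in refs:
            cited_by.setdefault(ref, []).append(paper)
    found, queue = {seed}, deque([(seed, 0)])
    while queue:
        paper, d = queue.popleft()
        if d == depth:
            continue  # stop expanding beyond the requested depth
        for nxt in cites.get(paper, []) + cited_by.get(paper, []):
            if nxt not in found:
                found.add(nxt)
                queue.append((nxt, d + 1))
    return found
```

A depth of one yields the seed's immediate references and citers; each extra level pulls in the "what came before and what followed" neighborhood that Connected Papers renders as a graph.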
Overall, the method treats literature search as stages: broad capture into a dump library, structured reading via AI-generated plans and concept maps, and targeted expansion using seed-paper networks. The payoff is speed—saving “hours and hours of work”—while still building a defensible, organized understanding of the research landscape.
Cornell Notes
The workflow prioritizes speed and coverage early: use AI semantic search tools to pull a wide set of relevant papers into a reference-manager “dump” folder, then filter and organize later. Tools like Elicit, Consensus, S2, and especially Undermind help generate ranked results and even estimate how many relevant papers were missed. After collection, Undermind and NotebookLM provide suggested reading orders and concept maps to prevent getting lost in the deep end. Once key “seed papers” are identified, Research Rabbit and Connected Papers expand the literature by following similar work, references, and citation relationships. This staged approach—dump first, curate second, map third—reduces manual searching while keeping the research process structured.
Why start with a “dump file” instead of immediately selecting only the best papers?
What does Undermind add beyond returning a list of papers?
How do Google Scholar Labs and keyword-based searching differ in practice?
How do Undermind and NotebookLM help with the “what should I read first?” problem?
What’s the role of seed papers and citation maps in the workflow?
Review Questions
- If you already know your research question, what’s the rationale for still using semantic search to build a broad dump folder first?
- How would you decide when to switch from broad umbrella searching to seed-paper expansion for a specific subtopic?
- What signals in Undermind’s output would tell you your search might be missing relevant papers?
Key Points
1. Use AI semantic search tools to retrieve papers from plain-language research questions, then export results into a reference manager.
2. Create a Zotero “dump” folder early to maximize coverage; filter and curate during reading rather than at retrieval time.
3. Undermind’s retrieval completeness estimate (e.g., percentage of likely relevant papers found) helps decide whether to run additional searches.
4. Use Google Scholar Labs for targeted question answering with paper explanations, and use keywords to refine searches as you learn the field.
5. Generate a structured reading order with Undermind and concept navigation with NotebookLM to avoid getting overwhelmed.
6. Maintain a curated reading list organized by broad umbrellas and subtopics, adding standout papers as they emerge.
7. Expand specific areas using seed-paper networks via Research Rabbit or Connected Papers, especially leveraging “prior works” and “derivative works.”