The New Free AI Toolkit Every PhD Student Should Use for Literature Reviews
Based on Andy Stapleton's YouTube video. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
AI is increasingly useful for literature reviews—but the biggest practical win is pairing fast discovery and summarization with citation verification so researchers don’t accidentally build their review on hallucinated references. The transcript spotlights a workflow built around Research Rabbit’s upgraded interface for mapping relevant papers, then Truess.com for checking whether generated citations actually exist via a “verify/confirm” process that can flag likely fakes in bulk.
Research Rabbit is presented as the first tool in the workflow, streamlining the search phase. Users can enter a paper title, DOI, or keywords (the example uses “OPV devices” plus a seed paper), and the system generates a set of papers along with connections that help visualize how studies relate. It also offers “basic search settings” to find articles or authors that are similar, reference the same work, or are linked through shared sources, turning a scattered literature hunt into a more navigable map.
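Research Rabbit itself is a web app without a public API, but the seed-to-neighbors idea behind its map is easy to illustrate. The sketch below is a substitute technique, not Research Rabbit’s method: it uses Semantic Scholar’s free Graph API (covered later in the workflow) to pull the references and citations of a seed DOI, which is the raw material for this kind of connection map. The seed DOI shown is a placeholder.

```python
# Illustrative only: Research Rabbit's backend is not public. This sketch shows
# the same seed-to-neighbors idea using Semantic Scholar's free Graph API,
# which exposes a paper's reference and citation lists.
import requests

API = "https://api.semanticscholar.org/graph/v1/paper"

def neighbors(seed_doi: str, limit: int = 10) -> dict:
    """Collect titles of papers connected to a seed DOI via references/citations."""
    edges = {}
    for rel, key in (("references", "citedPaper"), ("citations", "citingPaper")):
        resp = requests.get(
            f"{API}/DOI:{seed_doi}/{rel}",
            params={"fields": "title,year", "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        # Each result wraps the related paper under 'citedPaper' or 'citingPaper'.
        edges[rel] = [
            item[key].get("title")
            for item in resp.json().get("data", [])
            if item.get(key)
        ]
    return edges

# Placeholder DOI -- substitute a real seed paper from your own search.
print(neighbors("10.1000/example-opv-paper"))
```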
The transcript then pivots to a problem that becomes acute once AI starts producing citations: distinguishing real papers from fabricated ones. Truess.com is offered as a citation verifier. The process is simple: paste citations into a box, then use “verify” and “confirm” to check each reference. A bulk “scan/copy all” approach is also described, in which multiple citations are checked at once, followed by a wait period (the example notes “5 hours later”). Results can include items flagged as fake, but the transcript stresses that verification isn’t perfect: some flagged citations may exist yet not appear on the specific platforms the tool searches. The takeaway is to double-check flagged items while still using the tool to eliminate obvious errors quickly.
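Truess.com’s internals aren’t documented in the transcript, but the underlying check can be sketched generically: look the citation string up in an open bibliographic index and see whether a close match exists. The example below uses Crossref’s public REST API; the score threshold is an assumption that would need tuning, and, as the transcript warns, a miss only means the work isn’t indexed there, not that it’s fake.

```python
# Not Truess.com's actual method; a generic sketch of the same idea using
# Crossref's open REST API to check whether a citation matches an indexed work.
import requests

def looks_indexed(citation: str) -> bool:
    """Return True if Crossref returns a plausible bibliographic match."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # The relevance-score threshold is a guess and needs tuning. A miss may
    # just mean the work isn't indexed here, mirroring the transcript's caveat.
    return bool(items) and items[0].get("score", 0) > 60

print(looks_indexed("A survey of organic photovoltaic device stability, 2021"))
```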
For finding and browsing literature directly, Semantic Scholar is positioned as the “OG” semantic search engine. Typing a topic like “Alzheimer’s disease” returns results spanning years of work, including highly cited papers (the example cites a clinical diagnosis paper with over 20,000 citations). The emphasis is on asking questions of the literature and using citation counts and relevance signals to identify influential work, all without needing to log in.
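Semantic Scholar also exposes a free Graph API, so the same topic search can be scripted. A minimal sketch, assuming Python with the requests library and light unauthenticated use (heavier use requires an API key):

```python
# Query Semantic Scholar's paper-search endpoint and print citation counts,
# the same signals the transcript uses to spot influential work.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "Alzheimer's disease",
        "fields": "title,year,citationCount",
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for paper in resp.json().get("data", []):
    print(paper["citationCount"], paper["year"], paper["title"])
```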
Once sources are collected, Notebook LM is recommended for understanding them at speed. Users upload materials and then “chat with those sources,” with options including video/audio overviews, mind maps, and flashcards. The transcript highlights generating a mind map as a way to quickly grasp major sub-areas, then drilling into specific concepts (the example mentions exploring “materials” and then follow-on terms).
Aster, from the Allen Institute, is described as a scholarly research assistant that can search and summarize papers for free, with “analyzing data” noted as coming soon. It’s framed as a way to answer research questions and return a set of papers and citations that can then be followed up.
The transcript also lists several additional “literature review” staples:
- SciSpace’s new AI agent for searching, reviewing, and writing reports, with a relevance score and full-text search framed as important beyond abstracts.
- Consensus for gauging how strongly a question is supported, including evidence from randomized controlled trials and a “consensus meter”.
- Elicit for generating research reports or finding papers, with options like PubMed-only or clinical-trials filters.
Across the list, the consistent message is that AI can reduce the manual grunt work of searching and summarizing, while verification and reading remain essential to produce a credible literature review.
Cornell Notes
The transcript lays out a practical AI-assisted literature review stack that speeds up discovery and synthesis while reducing the risk of citing hallucinated papers. It starts with Research Rabbit to generate a visual map of relevant studies from a title/DOI/keywords seed, then uses Truess.com to verify whether AI-generated citations actually exist (including bulk verification and the need to double-check flagged items). For searching, Semantic Scholar is presented as a free semantic engine that surfaces influential papers and citation trails. For synthesis, Notebook LM helps users upload sources and generate mind maps, summaries, and study aids. Additional tools—Aster, SciSpace, Consensus, and Elicit—support paper search, summarization, consensus checking, and research-report generation.
How does Research Rabbit turn a literature search into something easier to manage?
Why is citation verification necessary when AI helps generate references?
What does Truess.com do, and what are its limitations?
How do Semantic Scholar and Notebook LM fit into the workflow?
What roles do Consensus and Elicit play beyond summarizing papers?
Review Questions
- Which step in the workflow is specifically designed to catch hallucinated citations, and why can’t it be the only check?
- How do Research Rabbit’s “connections” and Semantic Scholar’s citation trails complement each other during the search phase?
- What study outputs does Notebook LM generate (e.g., mind maps, flashcards), and how might those outputs change how someone reads papers?
Key Points
1. Use Research Rabbit to start from a seed (title/DOI/keywords) and generate a connected, visual map of related papers for faster literature discovery.
2. Treat citation verification as mandatory when AI generates references; Truess.com provides bulk “verify/confirm” checks to reduce manual DOI-by-DOI checking.
3. Expect verification tools to be imperfect: some citations flagged as fake may exist but not appear on the platforms the checker searches, so double-check flagged items.
4. Rely on Semantic Scholar for free semantic searching and for quickly identifying influential papers using citation counts and relevance signals.
5. After collecting sources, use Notebook LM to interrogate uploaded literature and generate study aids like mind maps, overviews, and flashcards.
6. Consider Aster, SciSpace, Consensus, and Elicit as complementary tools for paper search, full-text review, consensus gauging, and research-report generation, then follow up with primary sources.