The New Free AI Toolkit Every PhD Student Should Use for Literature Reviews

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use Research Rabbit to start from a seed (title/DOI/keywords) and generate a connected, visual map of related papers for faster literature discovery.

Briefing

AI is increasingly useful for literature reviews—but the biggest practical win is pairing fast discovery and summarization with citation verification so researchers don’t accidentally build their review on hallucinated references. The transcript spotlights a workflow built around Research Rabbit’s upgraded interface for mapping relevant papers, then Truess.com for checking whether generated citations actually exist via a “verify/confirm” process that can flag likely fakes in bulk.

Research Rabbit is presented as the first tool to streamline the search phase. Users can enter a paper title, DOI, or keywords (the example uses “OPV devices” plus a seed paper), and the system generates a set of papers along with connections that help visualize how studies relate. It also offers “basic search settings” to find articles or authors that are similar, reference the same work, or are linked through shared sources—turning a scattered literature hunt into a more navigable map.

The transcript then pivots to a problem that becomes acute once AI starts producing citations: distinguishing real papers from fabricated ones. Truess.com is offered as a citation verifier. The process is simple: paste citations into a box, then use “verify” and “confirm” to check each reference. A bulk “scan/copy all” approach is described, where multiple citations are checked at once—followed by a wait period (the example notes “5 hours later”). Results can include items flagged as fake, but the transcript stresses that verification isn’t perfect: some flagged citations may exist yet not appear on the specific platforms the tool searches. The takeaway is double-checking flagged items, while still using the tool to eliminate obvious errors quickly.

For finding and browsing literature directly, Semantic Scholar is positioned as the “OG” semantic search engine. Typing a topic like “Alzheimer’s disease” returns a stream of results over time, including highly cited papers (the example cites a clinical diagnosis paper with over 20,000 citations). The emphasis is on asking questions of the literature and using citation counts and relevance signals to identify influential work without needing to log in.
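As an aside, Semantic Scholar also exposes a free public Graph API, so the same kind of topic search can be scripted rather than done in the browser. The sketch below uses Semantic Scholar's documented `paper/search` endpoint and field names; the example query and result handling are illustrative, not part of the transcript's workflow:

```python
# Minimal sketch: search Semantic Scholar's public Graph API for a topic and
# rank the results by citation count. Endpoint and field names follow the
# documented API; no API key is required for light use.
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query: str, limit: int = 10) -> str:
    """Build a Graph API search URL requesting title, year, and citation count."""
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,citationCount",
    })
    return f"{API_BASE}?{params}"

def top_cited(papers: list[dict], n: int = 5) -> list[dict]:
    """Return the n most-cited records from a list of search results."""
    return sorted(papers, key=lambda p: p.get("citationCount") or 0, reverse=True)[:n]

def fetch_top_cited(query: str, n: int = 5) -> list[dict]:
    """Fetch search results and return the n most-cited papers (network call)."""
    with urllib.request.urlopen(build_search_url(query, limit=20)) as resp:
        return top_cited(json.load(resp).get("data", []), n)

# Example usage (requires network access):
#   for p in fetch_top_cited("Alzheimer's disease clinical diagnosis"):
#       print(p.get("citationCount"), p.get("year"), p.get("title"))
```

Sorting by `citationCount` mirrors the transcript's advice to use citation counts as a signal for influential work, though counts alone shouldn't decide what you read.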

Once sources are collected, Notebook LM is recommended for understanding them at speed. Users upload materials and then “chat with those sources,” with options including video/audio overviews, mind maps, and flashcards. The transcript highlights generating a mind map as a way to quickly grasp major sub-areas, then drilling into specific concepts (the example mentions exploring “materials” and then follow-on terms).

Aster, from the Allen Institute, is described as a scholarly research assistant that can search and summarize papers for free, with “analyzing data” noted as coming soon. It’s framed as a way to answer research questions and return a set of papers and citations that can then be followed up.

The transcript also lists several additional “literature review” staples: SciSpace’s new AI agent for searching, reviewing, and writing reports (with a relevance score and full-text search framed as important beyond abstracts), Consensus for gauging how strongly a question is supported (including evidence from randomized controlled trials and a “consensus meter”), and Elicit for generating research reports or finding papers with options like PubMed-only or clinical trials. Across the list, the consistent message is that AI can reduce the manual grunt work of searching and summarizing—while verification and reading remain essential to produce a credible literature review.

Cornell Notes

The transcript lays out a practical AI-assisted literature review stack that speeds up discovery and synthesis while reducing the risk of citing hallucinated papers. It starts with Research Rabbit to generate a visual map of relevant studies from a title/DOI/keywords seed, then uses Truess.com to verify whether AI-generated citations actually exist (including bulk verification and the need to double-check flagged items). For searching, Semantic Scholar is presented as a free semantic engine that surfaces influential papers and citation trails. For synthesis, Notebook LM helps users upload sources and generate mind maps, summaries, and study aids. Additional tools—Aster, SciSpace, Consensus, and Elicit—support paper search, summarization, consensus checking, and research-report generation.

How does Research Rabbit turn a literature search into something easier to manage?

Users enter a paper title, DOI, or keywords (the example uses “OPV devices” plus a seed paper). The tool generates a set of relevant papers and shows the connections between them, helping users see how studies relate. It also includes “basic search settings” to find similar articles and authors and to follow reference and citation relationships, turning the literature into a navigable visual map rather than a flat list.

Why is citation verification necessary when AI helps generate references?

AI can produce citations that look plausible but are fabricated. The transcript emphasizes that checking whether each reference is real is a major pain when done manually, so verification tools become important to prevent a literature review from relying on hallucinated sources.

What does Truess.com do, and what are its limitations?

Truess.com accepts pasted citations and uses “verify” and “confirm” to check each one, with an option to scan/copy many references at once. It can flag items as fake, but the transcript notes it’s not perfect: some flagged citations may exist but aren’t found on the specific platforms it searches. That means flagged items still require double-checking.
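The transcript doesn't describe how Truess.com performs its checks, but the double-checking it recommends can be done by hand against an open registry. As one hedged example, a flagged citation that carries a DOI can be looked up via Crossref's public REST API; a 404 response means Crossref has no record of that DOI, which is strong (though not absolute) evidence the citation is fabricated:

```python
# Minimal sketch: double-check a flagged citation by resolving its DOI against
# Crossref's public REST API (api.crossref.org). Crossref doesn't cover every
# registrar, so a miss here warrants a second look, not an automatic rejection.
import json
import urllib.error
import urllib.parse
import urllib.request

CROSSREF_BASE = "https://api.crossref.org/works/"

def crossref_url(doi: str) -> str:
    """Build the Crossref works URL for a DOI, stripping common prefixes."""
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
    return CROSSREF_BASE + urllib.parse.quote(doi)

def doi_exists(doi: str) -> bool:
    """Return True if Crossref resolves the DOI (network call)."""
    try:
        with urllib.request.urlopen(crossref_url(doi)) as resp:
            return json.load(resp).get("status") == "ok"
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Example usage (requires network access):
#   doi_exists("10.1038/nature12373")  # registered DOI -> True
```

This mirrors the limitation the transcript notes for Truess.com: any checker is only as complete as the databases it searches, so a "not found" result from one source should be confirmed against another before discarding the citation.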

How do Semantic Scholar and Notebook LM fit into the workflow?

Semantic Scholar supports the search phase by returning results for a topic (e.g., “Alzheimer’s disease”) and surfacing influential papers via citation counts (the example cites a paper with over 20,000 citations). Notebook LM supports the synthesis phase: upload sources, then use features like mind maps, audio/video overviews, and flashcards to interrogate the collected literature and quickly understand key areas.

What roles do Consensus and Elicit play beyond summarizing papers?

Consensus focuses on gauging how strongly a research question is supported, including a “consensus meter” and references to evidence types such as randomized controlled trials. Elicit helps generate research reports or find papers, with options like searching PubMed only or clinical trials, and it can list key papers with summaries and limitations.

Review Questions

  1. Which step in the workflow is specifically designed to catch hallucinated citations, and why can’t it be the only check?
  2. How do Research Rabbit’s “connections” and Semantic Scholar’s citation trails complement each other during the search phase?
  3. What study outputs does Notebook LM generate (e.g., mind maps, flashcards), and how might those outputs change how someone reads papers?

Key Points

  1. Use Research Rabbit to start from a seed (title/DOI/keywords) and generate a connected, visual map of related papers for faster literature discovery.
  2. Treat citation verification as mandatory when AI generates references; Truess.com provides bulk “verify/confirm” checks to reduce manual DOI-by-DOI checking.
  3. Expect verification tools to be imperfect: some citations flagged as fake may exist but not appear on the platforms the checker searches, so double-check flagged items.
  4. Rely on Semantic Scholar for free semantic searching and for quickly identifying influential papers using citation counts and relevance signals.
  5. After collecting sources, use Notebook LM to interrogate uploaded literature and generate study aids like mind maps, overviews, and flashcards.
  6. Consider Aster, SciSpace, Consensus, and Elicit as complementary tools for paper search, full-text review, consensus gauging, and research-report generation, then follow up with primary sources.

Highlights

Truess.com can verify many AI-generated citations at once, but flagged results still require double-checking because some real papers may not be found on the checker’s search platforms.
Research Rabbit’s upgraded workflow turns a seed query into a connected visual network of papers, making it easier to navigate literature relationships.
Notebook LM’s mind map feature is positioned as a fast way to understand the structure of a research field before deep reading.
Semantic Scholar is framed as a free, login-free semantic search baseline that surfaces influential papers through citation counts (including examples with tens of thousands of citations).