
The fastest way to do your literature review with AI

5 min read

Based on Academic English Now's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Check university and journal guidelines before using AI in research, since some rules restrict or ban AI use.

Briefing

AI can compress a literature review from months of reading and drafting into a structured workflow—if it’s used to (1) select the right papers, (2) read with purpose, (3) organize notes into a logical “pyramid,” and (4) write without padding or plagiarism risk. The payoff is speed plus a clearer academic argument, which is what supervisors and journal reviewers tend to reward.

The first priority is avoiding plagiarism and accuracy traps. Universities and journals may restrict AI use, so researchers are urged to check institutional and journal guidelines before relying on any tool. Another risk is hallucination—AI can invent details, including fake references—so outputs must be verified against original sources. Finally, the safest practice is to avoid generating entire passages with AI; instead, AI should help brainstorm structure, tighten wording, and support critical thinking.

With those guardrails in place, the workflow starts by narrowing the paper pool. Rather than searching endlessly, the method recommends focusing on recent work (typically the last five years) and applying exclusion criteria such as publication type (journal articles vs. book chapters), study design (e.g., randomized controlled trials), methodology (qualitative vs. quantitative), and context (who/what was studied and where). After scanning titles and abstracts against those criteria, the target is about 50 papers—enough to build a strong literature review without drowning in hundreds.
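The screening step above amounts to a filter over paper metadata. As a minimal sketch (the record fields and specific criteria here are hypothetical placeholders, not part of the transcript's method):

```python
from dataclasses import dataclass

# Hypothetical paper record; fields are illustrative, not from the transcript.
@dataclass
class Paper:
    title: str
    year: int
    pub_type: str       # e.g. "journal article" or "book chapter"
    methodology: str    # e.g. "qualitative" or "quantitative"

def passes_screen(paper: Paper, cutoff_year: int = 2020) -> bool:
    """Keep only recent work that matches the example exclusion criteria."""
    return (
        paper.year >= cutoff_year
        and paper.pub_type == "journal article"
        and paper.methodology == "quantitative"
    )

papers = [
    Paper("Study A", 2022, "journal article", "quantitative"),
    Paper("Study B", 2015, "journal article", "quantitative"),  # excluded: too old
    Paper("Study C", 2023, "book chapter", "qualitative"),      # excluded: wrong type
]
shortlist = [p for p in papers if passes_screen(p)]
print([p.title for p in shortlist])  # → ['Study A']
```

In practice the same logic is usually applied by eye while scanning titles and abstracts; the point is that explicit, written-down criteria make the cut reproducible.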

Reading then becomes more efficient by changing the purpose of each paper. A common failure mode is reading full texts because they appeared in search results. Instead, readers should ask why they’re reading each paper—such as identifying suggestions for future research—and then jump to the most relevant sections (often discussion or conclusion). AI can accelerate this “purpose-driven” reading using tools like Scispace and Avidnote. Scispace supports uploading PDFs or searching for papers, then summarizing them into adjustable columns and enabling “chat with document” to extract specific elements (for example, dependent variables or contributions). Avidnote similarly reads uploaded PDFs and generates targeted questions, including document-specific ones, but still requires verification.

Once the notes exist, structure is built using a pyramid framework. The “apex” is the key takeaway message—often the aim of the study. From there, the “steps” move from general to specific: main topics derived from the study aim, then subtopics ordered from broad to narrow, and finally paragraph-level claims supported by examples and facts. Two AI tools can generate outlines for this structure: Scispace’s Outline Builder and Jenny, which can produce more detailed, section-specific headings, time allocations, and references.
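The pyramid described above is effectively a nested outline, ordered from general to specific. A minimal sketch of that structure (the topic names are invented examples, not taken from the transcript):

```python
# Hypothetical pyramid outline: apex -> main topics -> subtopics -> paragraph claims.
# All topic names below are invented examples for illustration.
pyramid = {
    "apex": "Aim: examine how feedback timing affects L2 writing accuracy",
    "steps": [
        {
            "main_topic": "Feedback timing",
            "subtopics": [
                {"subtopic": "Immediate vs. delayed feedback",
                 "claims": ["Immediate feedback aids short-term accuracy"]},
            ],
        },
        {
            "main_topic": "Writing accuracy measures",
            "subtopics": [
                {"subtopic": "Error-free T-units",
                 "claims": ["A commonly used accuracy metric in L2 studies"]},
            ],
        },
    ],
}

def to_headings(tree: dict) -> list:
    """Flatten the pyramid into an ordered, general-to-specific heading list."""
    out = [tree["apex"]]
    for step in tree["steps"]:
        out.append("  " + step["main_topic"])
        for sub in step["subtopics"]:
            out.append("    " + sub["subtopic"])
    return out

for line in to_headings(pyramid):
    print(line)
```

Writing the outline down in this form makes the general-to-specific ordering easy to check before any paragraphs are drafted.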

Writing is treated as an argument-building exercise, not a word-count exercise. The transcript warns against “waffling”—long, low-information prose—and recommends repeatedly asking “so what?” for each section: what main point should the reader leave with? AI can help summarize and simplify drafts, and can generate opposing arguments to strengthen critical coherence, but those claims must be checked and cited properly. The result is a literature review that is faster to produce and easier to evaluate because it stays focused, structured, and evidence-backed.

Cornell Notes

The transcript lays out a four-step AI workflow to speed up literature reviews while reducing plagiarism and accuracy risks. It starts by narrowing the search to recent work (often within five years) and applying exclusion criteria so scanning titles/abstracts yields about 50 relevant papers. Reading becomes faster when each paper is approached with a clear purpose—using tools like Scispace and Avidnote to summarize PDFs, extract targeted details, and answer document-specific questions. Finally, the review is structured with a “pyramid” (apex = key takeaway/aim; steps = main topics; then subtopics and paragraph claims), and writing is tightened to avoid “waffling” through AI summarization and opposing-argument generation—always verified and properly cited.

How can researchers avoid plagiarism and accuracy problems when using AI for literature reviews?

Three safeguards are emphasized: (1) check university and journal rules, since some institutions ban or restrict AI in research; (2) treat AI outputs as potentially unreliable because hallucinations can produce fake information or references (ChatGPT is cited as a prime example), so verification against original sources is required; and (3) never generate entire text with AI—use AI for brainstorming, structuring, and improving drafts rather than producing full passages.

What is the “magic number” of papers, and how is it reached?

After limiting the scope to recent literature (recommended: last five years) and applying exclusion criteria—publication type, study type/methodology, and context such as region and participants—the workflow suggests scanning titles and abstracts to quickly eliminate irrelevant work. That process should leave roughly 50 papers for a strong literature review without overwhelming the reader.

What’s the biggest reading mistake, and how does purpose-driven reading fix it?

The common mistake is reading full papers simply because they appeared in search results. The fix is to ask why each paper is being read—e.g., to find suggestions for future research—then go directly to the most likely sections (often the discussion or conclusion) instead of reading everything end-to-end. AI then supports this targeted approach by extracting specific information quickly.

How do Scispace and Avidnote speed up reading in practice?

Scispace can summarize papers after either uploading PDFs or searching for papers, presenting information in adjustable columns (for example, adding a column for dependent variables and deleting others). It also supports “chat with document” to answer questions and extract contributions in bullet points. Avidnote works by reading uploaded PDFs so the information comes directly from the document, then suggesting questions (including specific ones like how many language teachers were included in a 2012 study). Both require verification and citation.

How does the “pyramid” method structure a literature review?

The pyramid’s apex is the key takeaway message—typically the aim of the study. Main sections (“steps”) are derived from key terms in that aim and ordered from general to specific. Each main topic is broken into subtopics, again ordered from general to specific, and each subtopic is supported by paragraph-level claims backed by examples and facts from the literature.

What does “waffling” mean in this context, and how can AI help reduce it?

Waffling is writing at great length without adding useful information—described as high in “calories” (words) but low in “nutrients” (content). The remedy is to build a coherent critical argument by repeatedly asking “so what?” for each section: what main point should the reader take away? AI can summarize or simplify selected text (e.g., using “summarize” commands) and can generate opposing arguments to strengthen critical coherence, but outputs must be verified and references must be added by the researcher.

Review Questions

  1. What exclusion criteria would you apply to narrow a literature search to about 50 papers, and why?
  2. How would you use the pyramid apex/steps approach to turn your study aim into main sections, subtopics, and paragraph claims?
  3. Where in a paper should you look when your purpose is to find suggestions for future research, and how does that change your reading strategy?

Key Points

  1. Check university and journal guidelines before using AI in research, since some rules restrict or ban AI use.

  2. Verify AI-generated information to prevent hallucinations, including fake references and inaccurate claims.

  3. Use AI to brainstorm structure and extract targeted information, not to generate entire passages that could trigger plagiarism concerns.

  4. Narrow the paper set by focusing on recent work (often the last five years) and applying exclusion criteria, then scan titles/abstracts to reach about 50 papers.

  5. Read with a clear purpose for each paper and jump to the most relevant sections (often discussion/conclusion for future research suggestions).

  6. Use a pyramid structure for the literature review: apex = key takeaway/aim; steps = main topics; then subtopics and evidence-backed paragraphs.

  7. Avoid “waffling” by tightening drafts around the “so what?” takeaway per section, using AI summarization and opposing-argument generation—while verifying and citing everything.

Highlights

A literature review becomes faster when each paper is read for a specific purpose, not because it appeared in search results.
The workflow targets about 50 papers by combining recent-time filtering with exclusion criteria and title/abstract scanning.
The “pyramid” method anchors structure in the study aim (apex) and builds outward from general to specific topics and subtopics.
Scispace and Avidnote speed extraction by summarizing PDFs and enabling document-specific Q&A, but both require verification.
“Waffling” is treated as the enemy: long, low-information writing is replaced with critical, coherent argument supported by evidence.
