
How to do Literature Review using AI Tool | Step-by-step Demo

5 min read

Based on WiseUp Communications' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Start with a tightly defined research question that specifies population, outcomes, and time horizon so AI search returns relevant evidence.

Briefing

Modern literature reviews no longer start with downloading dozens of PDFs and reading them one by one. The core shift is toward a structured workflow: clarify the topic precisely, search strategically, extract and compare themes across studies, and then pinpoint what’s missing so a research gap—and a defensible research direction—emerges faster and with less overwhelm.

The process begins with tightening the research question. Keyword-only searching is treated as outdated because AI search engines respond to context. Instead of a broad prompt like “social media and mental health,” the workflow pushes for specificity: which population is being studied (adolescents versus working professionals), what outcome matters (anxiety, depression, academic performance), and whether the focus is on short-term effects or long-term impacts. That added clarity is presented as the difference between getting relevant results and wasting time on irrelevant papers.
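The refinement step above can be sketched as a tiny helper. This is a hypothetical illustration of the principle, not part of any Consensus API: the function name and fields (population, outcome, time horizon) are invented for this example.

```python
# Hypothetical sketch: turning a vague topic into a context-rich research
# question, as the workflow recommends. Names and fields are illustrative.

def refine_question(topic: str, population: str, outcome: str, horizon: str) -> str:
    """Combine a broad topic with population, outcome, and time horizon."""
    return (f"Does {topic} affect {outcome} among {population} "
            f"over the {horizon}?")

vague = "social media and mental health"
refined = refine_question(
    topic="social media use",
    population="adolescents",
    outcome="anxiety and depression",
    horizon="long term",
)
print(refined)
```

Compared with the vague prompt, the refined question pins down who is studied, which outcome matters, and the time frame of interest, which is exactly the specificity the workflow asks for.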

With a refined question in hand, the next step is literature discovery using an AI search tool—Consensus—rather than relying solely on Google Scholar. On Consensus, the question is run through large collections of papers to produce an evidence-based answer. The demo uses the query “Does social media impact mental health of adolescents?” and selects “pro search,” then applies filters such as “past 10 years” for recency and journal rank categories (Q1–Q3) for quality. The output includes an immediate synthesized conclusion: effects are described as generally small, mixed, and dependent on how social media is used. It also breaks down the evidence count (e.g., studies finding positive links, mixed results, and null effects), plus a “consensus meter” that signals whether the field is converging, diverging, or split.
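Conceptually, the filters and evidence counts behave like a simple filter-then-tally pass over paper records. The sketch below is a hypothetical illustration of that logic; the paper records and "direction" labels are invented, and Consensus performs all of this internally.

```python
from collections import Counter

# Invented paper records for illustration only.
papers = [
    {"title": "A", "year": 2023, "quartile": "Q1", "direction": "positive"},
    {"title": "B", "year": 2019, "quartile": "Q2", "direction": "mixed"},
    {"title": "C", "year": 2008, "quartile": "Q1", "direction": "null"},
    {"title": "D", "year": 2021, "quartile": "Q4", "direction": "positive"},
]

def apply_filters(records, min_year, allowed_quartiles):
    """Keep only recent papers from the allowed journal-rank categories."""
    return [p for p in records
            if p["year"] >= min_year and p["quartile"] in allowed_quartiles]

# "Past 10 years" and Q1-Q3, mirroring the demo's filter choices.
recent_quality = apply_filters(papers, min_year=2015,
                               allowed_quartiles={"Q1", "Q2", "Q3"})

# Tally findings by direction, akin to the evidence counts behind the meter.
meter = Counter(p["direction"] for p in recent_quality)
print(meter)
```

The tally makes the field's balance visible at a glance: in this toy data, the old paper and the Q4 paper are filtered out, and the remaining evidence splits between positive and mixed findings.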

Beyond the headline summary, Consensus provides citations and indicates that full text was referenced, which is framed as building confidence that the synthesis is grounded in the literature. It also lists the underlying papers, allowing the user to move from a field-level overview to targeted reading.

After shortlisting, the workflow emphasizes deeper engagement with individual studies. Consensus tags help prioritize high-impact work such as meta-analyses and highly cited papers, and an “open access” option flags free full texts. A distinctive feature is the ability to attach a PDF and ask questions directly to that document, turning paper-by-paper note-taking into a more interactive Q&A process.

The most time-consuming phase—organizing studies and identifying patterns—is then streamlined through two Consensus capabilities: a “consensus library” for uploading downloaded papers into folders, and Zotero integration for importing from a reference manager. With studies stored in a library, comparisons become possible by querying across multiple sources at once. In the demo, a request for a table of social media channels used by adolescents yields a side-by-side list including TikTok, YouTube, Instagram, Snapchat, Pinterest, Twitter, Facebook, WhatsApp, and others—an approach meant to surface themes, trends, and gaps.
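The cross-library comparison step can be pictured as building a side-by-side coverage table from per-paper notes. The paper names and channel lists below are invented for illustration; Consensus generates this kind of table from the library automatically.

```python
# Hypothetical notes on which channels each shortlisted paper covers.
library = {
    "Paper A": {"TikTok", "Instagram", "YouTube"},
    "Paper B": {"Instagram", "Snapchat", "WhatsApp"},
    "Paper C": {"TikTok", "Facebook", "Instagram"},
}

# Union of all channels mentioned anywhere, in a stable order.
channels = sorted(set().union(*library.values()))

# Header row, then one row per paper marking each covered channel.
rows = [["Paper"] + channels]
for paper, covered in library.items():
    rows.append([paper] + ["x" if c in covered else "" for c in channels])

for row in rows:
    print(" | ".join(f"{cell:<9}" for cell in row))
```

Laying papers out this way surfaces themes and gaps quickly: a column with checks in every row is a saturated topic, while an empty column flags a channel the literature has barely examined.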

Finally, the workflow turns analysis into research positioning. Once the literature’s findings, methods, and omissions are clear, the next step is to define a hypothesis and research direction that addresses the gap—such as expanding the population, switching from surveys to more rigorous experimental designs, or improving methodology and controls. The guidance is to use AI for exploration and organization, while reserving judgment and critical thinking for the researcher. The end goal is a literature review that does more than summarize prior work: it builds a clear, logical case for original contribution.

Cornell Notes

A modern literature review starts by sharpening the research question with context, not just keywords. Using Consensus, the workflow turns a focused question into an evidence-based synthesis, complete with a consensus meter, citations, and a list of supporting papers. After shortlisting key studies (e.g., meta-analyses, highly cited work, open-access papers), the researcher can query individual PDFs or ask questions across a stored library. Consensus library uploads and Zotero integration help organize papers so comparisons can be generated as tables and theme summaries. The final step is to identify what remains unclear and translate that gap into a hypothesis and study design—while keeping critical judgment firmly in the researcher’s hands.

Why does the workflow insist on clarifying the topic before searching?

AI search results are treated as context-sensitive, so vague prompts produce noisy outputs. The demo contrasts a broad query (“social media and mental health”) with a structured version that specifies population (adolescents vs working professionals), outcomes (anxiety, depression, academic performance), and timing (short-term vs long-term effects). That specificity is presented as the way to define the problem more precisely, reduce irrelevant papers, and speed up the review.

How does Consensus turn a research question into a usable literature overview?

Consensus runs the question against large sets of papers and returns an immediate synthesized answer plus supporting evidence counts. In the example (“Does social media impact mental health of adolescents?”), the synthesis describes effects as generally small, mixed, and dependent on how social media is used. It also breaks down how many studies support, contradict, or complicate the claim, and provides citations and indications that full text was referenced, along with a list of source papers for deeper reading.

What do the filters and “consensus meter” contribute to the search process?

Filters narrow the evidence base to what matters for the review. The demo applies “past 10 years” for recency and journal rank categories (Q1–Q3) to prioritize higher-quality publication venues. The consensus meter then communicates the field’s balance—whether most studies agree, whether results are mixed, or whether evidence leans toward no effect—so the researcher can decide what to read next and where uncertainty remains.

How does the workflow move from reading individual papers to comparing studies?

After shortlisting, the researcher uses Consensus tags to identify high-priority work (e.g., meta-analysis, rigorous journal, highly cited) and can access open-access PDFs. For deeper understanding, the workflow supports attaching a PDF and asking targeted questions about that specific study. Then, instead of manual spreadsheets, papers are organized into a Consensus library or imported via Zotero integration, enabling cross-paper queries that generate comparative outputs like tables.

How does Zotero integration change the organization step?

If papers are already saved in Zotero, the workflow avoids re-uploading them manually. The demo describes creating a Zotero key, pasting it into Consensus, and importing papers into the platform. Once imported, the library becomes the source for cross-study questions—such as generating a table of social media channels used by adolescents (e.g., TikTok, YouTube, Instagram, Snapchat, Pinterest, Twitter, Facebook, WhatsApp).

What turns “gap spotting” into a research-ready hypothesis?

The workflow treats the gap as the missing piece revealed by targeted questions across the library: what remains unclear, which channels affect mental health, which do not, and what populations or methods are underrepresented. Once those omissions are articulated, the researcher defines how the new study will address them—such as expanding the population, moving from surveys to experimental designs, or using improved controls—so the literature review becomes a defensible argument for novelty.

Review Questions

  1. How would you rewrite a vague topic into a context-rich research question that an AI search tool can handle effectively?
  2. What evidence elements from Consensus output (e.g., consensus meter, citations, full-text reference indicators) would you use to justify the synthesis in your own literature review?
  3. Describe two ways the workflow reduces manual effort when organizing and comparing studies (e.g., library uploads, Zotero integration, cross-paper querying).

Key Points

  1. Start with a tightly defined research question that specifies population, outcomes, and time horizon so AI search returns relevant evidence.
  2. Use Consensus with targeted filters (such as past 10 years and Q1–Q3 journal rank) to narrow results to recent, higher-quality studies.
  3. Rely on the consensus meter and evidence counts to understand whether findings converge, conflict, or remain mixed across studies.
  4. Shortlist high-impact papers using tags like meta-analysis, rigorous journal, and highly cited, and prioritize open-access when possible.
  5. Ask questions directly to individual PDFs after attaching them, then shift to cross-paper comparisons using a library.
  6. Organize studies through Consensus library uploads or Zotero integration to enable table-based comparisons and theme detection.
  7. Translate identified omissions into a clear hypothesis and study design, while keeping critical judgment and reasoning with the researcher.

Highlights

Consensus provides an immediate synthesized answer, evidence counts showing how many studies support, contradict, or complicate a claim, and a consensus meter signaling whether the field is converging or split.
Filters like “past 10 years” and Q1–Q3 journal rank help prevent the review from being driven by outdated or lower-quality evidence.
Cross-paper querying from a Consensus library can generate comparative tables (e.g., listing adolescent-used social media channels) that reveal patterns faster than manual spreadsheets.
The workflow’s final step links gap identification directly to hypothesis design—turning the literature review into a defensible case for novelty.