
New mind-blowing AI research tool makes systematic literature review easy

Academic English Now · 5 min read

Based on Academic English Now's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

SciSpace’s deep review targets PRISMA-aligned systematic literature reviews and can generate a draft review in minutes rather than weeks or months.

Briefing

A new tool from SciSpace, called “deep review,” aims to compress a PRISMA-guided systematic literature review from weeks or months into a matter of minutes. The workflow centers on running a “systematic literature review” task that opens a new window with a ready-to-edit prompt, then generating PRISMA-aligned outputs (research question framing, protocol criteria, screening steps, and structured reporting) at a speed that would be hard to match manually.

The process starts with tailoring the prompt. Users replace the default topic with a specific research focus (the example shifts from microplastic contamination to professional discrimination against non-native English-speaking teachers). They also set the time window, which defaults to the last 10 years but can be extended to 20. The transcript adds practical guidance for choosing dates: check whether similar systematic reviews already exist and whether recent work has appeared; if the topic is fast-moving (like AI), narrower windows such as the last 3–5 years may be more appropriate.

Next comes database selection, which directly shapes the PRISMA flow. The example keeps Web of Science, removes PubMed as irrelevant to a non-medical topic, retains Scopus, and adds ERIC to target education research. Users can also include regional databases. The tool is configured to produce key PRISMA artifacts, such as a flow diagram and a risk-of-bias table, while allowing further scoping via the prompt, such as limiting the review to T-OLE (teaching English as a second language) programs.

A major selling point is transparency and replicability. The system shows step-by-step actions aligned with PRISMA: interpreting the research question, finalizing inclusion criteria, searching multiple academic databases, screening titles and abstracts, and applying exclusion criteria. It also surfaces the search strategy (keywords and search strings) so researchers can rerun the same queries themselves to verify results, check for missed papers, and reproduce the search across databases.
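To see why exposed search strings make replication possible, here is a minimal sketch of how a Boolean query can be assembled from keyword groups and rerun by hand in any database. The keyword groups below are illustrative placeholders, not the tool’s actual strategy:

```python
# Sketch: rebuilding a Boolean search string from keyword groups so an
# automated search can be rerun manually. Synonyms within a group are
# OR-ed; groups are AND-ed, the usual systematic-review convention.

def build_query(groups):
    """Return a Boolean search string from a list of synonym groups."""
    clauses = []
    for synonyms in groups:
        # Quote multi-word phrases so databases treat them as exact phrases.
        quoted = [f'"{term}"' if " " in term else term for term in synonyms]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

# Hypothetical keyword groups for the transcript's example topic.
groups = [
    ["non-native English-speaking teacher", "NNEST"],
    ["discrimination", "bias", "hiring inequity"],
]
print(build_query(groups))
# → ("non-native English-speaking teacher" OR NNEST) AND (discrimination OR bias OR "hiring inequity")
```

Pasting such a string into each database’s advanced-search box lets a researcher confirm the automated run retrieved the same records.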

As the search runs, results appear incrementally, with downloadable CSV files for each database plus additional files drawn from the tool’s own review database. Users can compare papers by adjusting columns and download pre-formatted documents such as the criteria review. The tool also handles the labor-intensive tasks that consume the most time in traditional reviews: downloading full texts, deduplicating overlapping records across databases, and extracting relevant data.
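The deduplication step the tool automates can be approximated from those per-database CSV exports. This sketch merges several files and drops duplicates on a normalized DOI, falling back to the title; the column names (`doi`, `title`) are assumptions about the export layout, not SciSpace’s documented schema:

```python
# Sketch: deduplicating records exported from several databases, keyed on a
# normalized DOI with a title fallback. Assumes each CSV has "doi" and
# "title" columns (an assumption, not a documented schema).
import csv

def dedupe(csv_paths):
    seen, unique = set(), []
    for path in csv_paths:
        with open(path, newline="", encoding="utf-8") as fh:
            for row in csv.DictReader(fh):
                # Normalize so "10.1/X" and "10.1/x" count as one record.
                key = (row.get("doi") or row.get("title") or "").strip().lower()
                if key and key not in seen:
                    seen.add(key)
                    unique.append(row)
    return unique
```

A manual pass like this is also a cheap way to audit the tool’s own deduplication counts against the PRISMA flow it reports.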

Finally, SciSpace generates a structured report as a LaTeX document and compiles it into a single PDF. The output includes the standard systematic-review sections: background and rationale, significance, prior research, research questions, methodology (protocol, inclusion/exclusion criteria, search strategy, study selection process, and study characteristics), results organized by themes (e.g., cultural/ideological factors and organizational policies), and a conclusion with implications and recommendations for future research. The transcript emphasizes a key limitation: AI-generated reviews still need a novel contribution to be publishable, and it points to a follow-up video on finding high-impact research topics for publication.
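The section structure described above can be sketched as a LaTeX outline; this is an illustrative skeleton, not the tool’s actual template:

```latex
\documentclass{article}
\begin{document}
\section{Background and Rationale}
\section{Significance and Prior Research}
\section{Research Questions}
\section{Methodology}
\subsection{Protocol and Inclusion/Exclusion Criteria}
\subsection{Search Strategy}
\subsection{Study Selection and Study Characteristics}
\section{Results}
\subsection{Cultural and Ideological Factors}
\subsection{Organizational Policies}
\section{Conclusion, Implications, and Future Research}
\end{document}
```

A skeleton like this compiles with a standard TeX distribution (e.g., `pdflatex report.tex`), which mirrors the tool’s LaTeX-to-PDF step.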

Cornell Notes

SciSpace’s “deep review” is designed to generate PRISMA-guided systematic literature reviews in minutes by automating the full pipeline: research-question framing, protocol criteria, multi-database searching, screening, deduplication, data extraction, and report writing. Users customize the topic, time frame, and databases (e.g., Web of Science, Scopus, and ERIC for education) and can narrow scope with details like educational context. A standout feature is transparency: the tool displays the steps it follows and provides search keywords and search strings, enabling manual verification and replication. Outputs include PRISMA artifacts such as a flow diagram and a risk-of-bias table, plus downloadable CSV results and a compiled PDF report. The transcript stresses that speed doesn’t replace the need for a novel contribution to publish.

How does SciSpace’s deep review help researchers meet PRISMA requirements faster than manual workflows?

It automates the PRISMA-aligned pipeline: it formulates a research question, finalizes protocol criteria (including inclusion and exclusion criteria), runs searches across multiple academic databases, performs title/abstract screening, and then proceeds through study selection and data extraction. It also generates PRISMA-specific deliverables such as a flow diagram and a risk-of-bias table, and it produces a structured report that can be compiled into a PDF.
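The numbers a PRISMA flow diagram reports at each stage follow a simple subtraction chain, which this sketch makes explicit. The counts are made-up placeholders; a real diagram uses the run’s actual data:

```python
# Sketch: the arithmetic behind a PRISMA flow diagram. Each stage's count
# is the previous stage minus the records removed at that stage.

def prisma_flow(identified, duplicates, title_abstract_excluded, fulltext_excluded):
    screened = identified - duplicates              # after deduplication
    full_text = screened - title_abstract_excluded  # after title/abstract screening
    included = full_text - fulltext_excluded        # after full-text eligibility check
    return {
        "identified": identified,
        "screened": screened,
        "full_text_assessed": full_text,
        "included": included,
    }

# Placeholder counts, purely for illustration.
print(prisma_flow(identified=480, duplicates=120,
                  title_abstract_excluded=290, fulltext_excluded=45))
# → {'identified': 480, 'screened': 360, 'full_text_assessed': 70, 'included': 25}
```

Checking that these subtractions reconcile is one of the quickest manual audits of an automated review run.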

What customization steps matter most when setting up a systematic literature review run?

The transcript highlights three main prompt edits: (1) replace the default topic with a more specific research focus, (2) set the review time frame (defaulting to the last 10 years, with options to extend to 20 years), and (3) choose databases relevant to the field. It also notes that narrowing educational context (for example, limiting to T-OLE programs) can sharpen the scope.

How should researchers decide the time window for a systematic review?

The transcript suggests checking whether similar systematic literature reviews already exist and when they were published. If prior reviews are older and little new work appeared since then, a narrower window may be appropriate; if no systematic review exists, researchers might extend further back. For fast-moving areas like AI, the guidance is to limit to more recent years (e.g., last 3–5 years) because earlier work may be sparse or outdated.

Why does database selection change the quality and completeness of the review?

Database choice determines what literature is discoverable. The example keeps Web of Science and Scopus, removes PubMed for a non-medical topic, and adds ERIC to capture education-specific research. Including the right databases helps ensure the search strategy reflects the domain, while PRISMA outputs and screening steps remain consistent with the chosen sources.

What makes the tool’s results easier to verify and replicate manually?

Transparency. The system shows the steps it follows—understanding the research question, finalizing criteria, searching databases, and screening—and it provides the search strategy details: keywords and search strings. With those strings, researchers can rerun the same searches in the same or different databases to compare results and potentially uncover papers the automated run missed.

What kinds of files and outputs does deep review generate during and after the run?

During the run, it produces results that can be viewed and downloaded, including CSV files for each database and additional files related to its review database. It also provides documents like the criteria review. After screening and extraction, it generates the report as a LaTeX document and compiles it into a downloadable PDF, with sections such as background, methodology, results organized by themes, conclusions, implications, and recommendations.

Review Questions

  1. When choosing a time frame for a systematic review, what checks should be done first according to the transcript?
  2. Which database changes were made in the example, and what field-specific rationale was given for adding or removing them?
  3. What transparency elements (steps and search details) enable manual verification of the automated review process?

Key Points

  1. SciSpace’s deep review targets PRISMA-aligned systematic literature reviews and can generate a draft review in minutes rather than weeks or months.

  2. Users must customize the research topic, time window, and database list; the transcript’s example uses Web of Science, Scopus, and ERIC while dropping PubMed for a non-medical topic.

  3. Time-frame selection should reflect prior systematic reviews and how quickly the field evolves; AI research may require a much narrower window than older domains.

  4. The tool emphasizes transparency by showing step-by-step actions and providing search keywords and search strings, enabling replication and manual verification.

  5. Downloadable outputs such as per-database CSV files support checking and comparing retrieved studies, while PRISMA artifacts such as a flow diagram and risk-of-bias table are generated automatically.

  6. Automation reduces common bottlenecks (full-text downloading, deduplication, and data extraction), but publication still requires a novel contribution to the field.

Highlights

Deep review is positioned as a PRISMA-compliant systematic review generator that compresses the process into about five minutes by automating screening, extraction, and reporting.
Search-strategy transparency (keywords and search strings) is treated as a verification feature, letting researchers rerun queries to confirm or extend results.
The workflow produces PRISMA deliverables, such as a flow diagram and risk-of-bias table, alongside downloadable CSV results and a compiled PDF report.
