
Quick Guide to Using SciSpace for Literature Review (Agentic & Non-Agentic Methods)

SciSpace · 5 min read

Based on SciSpace's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

SciSpace's literature review tool offers three non-agentic modes (Standard, High Quality, and Deep Review) plus an agentic workflow. Deep Review expands coverage through query transformation, parallel searches, and citation traversal, while the agent goes further: it can target specific sources, follow detailed inclusion criteria, and produce customized reports with iterative follow-ups.

Briefing

The video breaks SciSpace's literature review tool into three modes (Standard, High Quality, and Deep Review), then contrasts that "non-agentic" setup with an AI agent that can actively reshape searches and generate fully customized reports. The biggest practical difference is that Deep Review expands coverage by transforming the original query into multiple search directions and then following citations and references, producing far more papers and a more comprehensive, report-style output.

In Standard mode, a user query returns a short insight built from the top five most relevant papers (with an option to expand to top 10). Results appear as a relevance-sorted list, and users can inspect cited papers and open each paper’s page. A key workflow feature is the ability to add columns to the results table—either predefined extraction fields or custom columns with instructions—so the system extracts additional information per paper and updates the table accordingly. Export options are available for the table and insights, but the overall structure stays fixed around relevance-sorted paper comparison.
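The add-column workflow described above can be sketched as follows. This is a hypothetical illustration, not SciSpace's implementation: in the real product a model performs the per-paper extraction, so the stub extractor, the field names, and the sample rows here are all invented.

```python
# Hypothetical sketch of the add-column workflow: each paper row in the
# relevance-sorted table gains a new field extracted per an instruction.
# In SciSpace a model does the extraction; a stub lambda stands in for it.

def add_column(table, column_name, extract):
    """Attach an extracted field to every paper row (extract stands in for the model call)."""
    for row in table:
        row[column_name] = extract(row)
    return table

papers = [
    {"title": "Paper A", "relevance": 0.92, "abstract": "randomized trial of 120 patients"},
    {"title": "Paper B", "relevance": 0.87, "abstract": "retrospective cohort study"},
]

# Custom column with an instruction like "What study design was used?"
table = add_column(
    papers, "study_design",
    lambda row: "RCT" if "randomized" in row["abstract"] else "observational",
)
for row in sorted(table, key=lambda r: -r["relevance"]):
    print(row["title"], "-", row["study_design"])
```

The point of the structure is that extraction is per-paper and additive: the table stays relevance-sorted while new comparison criteria accumulate as columns.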

High Quality mode keeps the same core layout and extraction features, but generates a longer, better-written insight by using a higher-capability model. The output becomes more detailed and easier to interpret when comparing criteria across papers, while still relying on the same basic query-to-results approach.

Deep Review is where the coverage jump becomes visible. Instead of using the query as-is, it asks follow-up questions when the request is underspecified, then constructs multiple transformed queries from the base question and the added parameters. Those queries run in parallel, yielding roughly 3x–4x more papers, and the example shown reaches about 1,750 papers versus far fewer in earlier modes. Deep Review then traverses citations and references from the newly found papers to discover additional relevant work, filters and reranks everything by relevance, and generates a large report with fixed sections such as abstract, introduction, scope, methodology, screening papers, results, and critical analysis. The section set is not meant to change per query, but the report is substantially more comprehensive than Standard or High Quality.
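The three expansion steps (query transformation, parallel searches, citation traversal) can be sketched over a toy corpus. Everything here is an assumption for illustration: `SEARCH_INDEX`, `CITATIONS`, and `transform_query` are invented stand-ins for SciSpace's search backend and model-generated query variants.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for a search backend and a citation graph (both invented).
SEARCH_INDEX = {
    "ml sepsis prognosis": ["p1", "p2"],
    "machine learning sepsis outcome prediction": ["p2", "p3"],
    "deep learning icu mortality": ["p4"],
}
CITATIONS = {"p1": ["p5"], "p2": [], "p3": ["p6", "p2"], "p4": [], "p5": [], "p6": []}

def transform_query(base):
    # Step 1: turn one question into several search directions
    # (in the real tool, model-generated variants of the base query).
    return list(SEARCH_INDEX)

def deep_review(base_query):
    queries = transform_query(base_query)
    # Step 2: run the transformed queries in parallel.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda q: SEARCH_INDEX.get(q, []), queries)
    found = {p for batch in batches for p in batch}
    # Step 3: traverse citations/references of found papers one hop out.
    found |= {c for p in list(found) for c in CITATIONS.get(p, [])}
    return sorted(found)  # filtering and reranking by relevance would happen here

print(deep_review("ml sepsis prognosis"))  # → ['p1', 'p2', 'p3', 'p4', 'p5', 'p6']
```

Note how the citation hop (step 3) surfaces `p5` and `p6`, which no query variant returned directly; that is the mechanism behind the roughly 3x-4x coverage growth described above.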

The non-agentic limitations become clearer when comparing report control and workflow flexibility. Standard/High Quality/Deep Review outputs are templated: users can’t request arbitrary file formats through instructions (beyond built-in export), can’t add or remove report sections, can’t target specific databases or journal sites via the report prompt, and can’t easily run follow-up tasks on top of prior results as part of the same instruction chain.

SciSpace’s agentic approach addresses those constraints by letting the system interpret detailed criteria and execute multi-step tasks. In the example, the agent searches PubMed for peer-reviewed research published after 2020, transforms the query into PubMed-compatible forms, runs multiple PubMed queries, combines and reranks results, and then produces a Word-formatted report with a target length (1500 words) and specific content requirements—strengths and limitations, diagnostic analysis, current consensus, open challenges, future research directions, and more. The agent also supports iterative refinement via follow-up questions, and it can export in common formats (including Word and PDF) and save papers into a SciSpace library with folder organization for later retrieval.
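As a rough sketch of the "PubMed-compatible query" step: NCBI's E-utilities `esearch` endpoint is the standard programmatic way to query PubMed with a date filter. The endpoint and its parameters are real, but the specific query terms and the assumption that an agent would use this exact API are illustrative.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint (real API; no request is made here).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query_url(term, min_year=2020, retmax=100):
    """Build an esearch URL enforcing 'published after min_year'."""
    params = {
        "db": "pubmed",
        "term": term,
        "mindate": str(min_year + 1),  # "after 2020" read as 2021 onward
        "maxdate": "3000",             # E-utilities requires mindate/maxdate together
        "datetype": "pdat",            # filter on publication date
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{ESEARCH}?{urlencode(params)}"

url = pubmed_query_url("sepsis AND machine learning[Title/Abstract]")
print(url)
```

An agent would issue several such queries with different term variants, then merge and rerank the returned PMIDs before drafting the report.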

Cornell Notes

SciSpace's literature review tool offers three non-agentic modes (Standard, High Quality, and Deep Review); the video contrasts them with an agentic workflow that can execute multi-step, customizable tasks. Standard and High Quality generate relevance-sorted paper tables and insights from top results, with High Quality producing more detailed writing. Deep Review expands coverage by transforming the query into multiple search directions, running parallel searches, and then traversing citations and references to find additional papers; it can produce far larger paper sets and a comprehensive report with fixed sections. The agentic approach goes further: it can follow detailed inclusion criteria (e.g., PubMed, peer-reviewed, after 2020), generate a report in a requested format and length, and support iterative follow-ups to modify outputs. This matters because it reduces missed literature and increases control over scope, sources, and deliverables.

How do Standard and High Quality differ in SciSpace’s literature review outputs?

Both modes start from the same basic query-to-results flow: a short insight is generated from top papers (top five by default, with an option to expand to top 10). Results are relevance-sorted and can be explored via paper pages. The key difference is writing depth: High Quality produces a longer, better-written insight using a higher-capability model, making extracted criteria comparisons more detailed, while keeping the same general table/insight structure and the same add-column extraction workflow.

What makes Deep Review meaningfully different from Standard/High Quality?

Deep Review actively expands the search. It can ask follow-up questions when the query lacks detail, then constructs multiple transformed queries from the base question and included parameters. It runs parallel searches to find roughly 3x–4x more papers, and then traverses citations and references from the found papers to discover additional relevant work. After filtering and reranking by relevance, it generates a large report with comprehensive, fixed sections (e.g., abstract, introduction, scope, methodology, screening papers, results, critical analysis).

Why does Deep Review reduce the risk of missing relevant papers?

Because it doesn’t rely on a single query formulation. By transforming the query into multiple directions and following citations/references, it captures literature that might be missed when only one search path is used. In the example shown, the expanded coverage reaches about 1,750 papers, illustrating how coverage can grow far beyond the smaller sets typical of Standard/High Quality.

What constraints limit non-agentic literature review reports?

Non-agentic outputs are templated and constrained by built-in tooling. Users can’t instruct the system to produce arbitrary file formats through the prompt (beyond available export options), can’t add or remove report sections, can’t reliably target specific databases or journal sites via the report instruction, and can’t easily run follow-up tasks that modify the existing output based on new instructions.

How does the agentic workflow handle detailed research requirements and deliverables?

The agent interprets complex criteria and executes multi-step tasks. In the example, it searches PubMed for peer-reviewed research published after 2020, transforms the query into PubMed-compatible forms, runs multiple PubMed queries, combines and reranks results, and then writes a Word-formatted report with a target length (1500 words). It also includes requested content areas such as strengths and limitations, diagnostic analysis, current consensus, open challenges, and future research directions. It can further support iterative refinement through follow-up questions and export the final deliverable.

Review Questions

  1. When would you choose High Quality over Standard, and what changes in the output besides length?
  2. Describe the three expansion mechanisms Deep Review uses to increase coverage (query transformation, parallel searches, citation/reference traversal).
  3. What specific capabilities does the agentic approach add that non-agentic modes cannot provide (report customization, source targeting, follow-ups, or export control)?

Key Points

  1. Standard mode generates a short insight from the top five (optionally top 10) relevance-ranked papers and supports add-column extraction for deeper per-paper comparison.

  2. High Quality keeps the same workflow structure but produces longer, more detailed insights by using a higher-capability model.

  3. Deep Review expands coverage by transforming the query into multiple search directions, running parallel searches, and then traversing citations and references to find additional relevant papers.

  4. Deep Review produces a comprehensive report with fixed sections, but it doesn't support changing the section set based on prompt instructions.

  5. Non-agentic modes are limited in workflow flexibility: they restrict prompt-driven customization such as arbitrary report structure changes, targeted source selection, and iterative follow-up tasks.

  6. The agentic workflow can follow detailed inclusion criteria (e.g., PubMed, peer-reviewed, after 2020), generate a report in a requested format and length, and support iterative follow-up modifications.

  7. After generating results, the agentic workflow can export deliverables and save papers into a SciSpace library with folder organization for later reuse.

Highlights

Deep Review can transform one research question into multiple parallel searches and then expand further by walking through citations and references—leading to dramatically larger paper sets (e.g., ~1,750 papers in the example).
High Quality mainly improves the quality and depth of the generated insight while keeping the same overall table-and-extraction workflow.
Non-agentic literature review outputs are templated: users can’t add or remove report sections via instructions, limiting customization.
The agentic approach can execute source-specific searches (like PubMed), enforce constraints (peer-reviewed, after 2020), and produce a Word-formatted report with a target word count (1500).

Topics

  • Literature Review Modes
  • Deep Review Expansion
  • Agentic vs Non-Agentic
  • Report Customization
  • PubMed Search

Mentioned

  • Arpit
  • PubMed