Quick Guide to Using SciSpace for Literature Review (Agentic & Non-Agentic Methods)
Based on SciSpace's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Briefing
The video walks through SciSpace’s literature review tool, which splits the workflow into three modes (Standard, High Quality, and Deep Review), and then contrasts that “non-agentic” setup with an AI agent that can actively reshape searches and generate fully customized reports. The biggest practical difference is that Deep Review expands coverage by transforming the original query into multiple search directions and then following citations and references, producing far more papers and a more comprehensive, report-style output.
In Standard mode, a user query returns a short insight built from the top five most relevant papers (with an option to expand to top 10). Results appear as a relevance-sorted list, and users can inspect cited papers and open each paper’s page. A key workflow feature is the ability to add columns to the results table—either predefined extraction fields or custom columns with instructions—so the system extracts additional information per paper and updates the table accordingly. Export options are available for the table and insights, but the overall structure stays fixed around relevance-sorted paper comparison.
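The add-column idea can be pictured as a per-paper extraction step applied to every row of the results table. The sketch below is purely illustrative and not SciSpace's implementation; `extract_sample_size` is a hypothetical stand-in for the model-driven extraction the tool performs.

```python
import re

def extract_sample_size(abstract: str) -> str:
    """Toy extractor: pull 'n = <number>' from an abstract, if present."""
    match = re.search(r"n\s*=\s*(\d+)", abstract, flags=re.IGNORECASE)
    return match.group(1) if match else "not reported"

def add_column(table, column_name, extractor):
    """Append a custom column to every row of the results table."""
    for row in table:
        row[column_name] = extractor(row["abstract"])
    return table

papers = [
    {"title": "Trial A", "abstract": "A randomized trial with n = 120 patients."},
    {"title": "Review B", "abstract": "A narrative review of prior work."},
]
table = add_column(papers, "sample_size", extract_sample_size)
```

A custom column with natural-language instructions works the same way conceptually: the extractor is an LLM call instead of a regex, but each paper is still processed independently and the table updated in place.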
High Quality mode keeps the same core layout and extraction features, but generates a longer, better-written insight by using a higher-capability model. The output becomes more detailed and easier to interpret when comparing criteria across papers, while still relying on the same basic query-to-results approach.
Deep Review is where the coverage jump becomes visible. Instead of using the query as-is, it asks follow-up questions when the request is underspecified, then constructs multiple transformed queries from the base question and the added parameters. Those queries run in parallel, yielding roughly 3x–4x more papers, and the example shown reaches about 1,750 papers versus far fewer in earlier modes. Deep Review then traverses citations and references from the newly found papers to discover additional relevant work, filters and reranks everything by relevance, and generates a large report with fixed sections such as abstract, introduction, scope, methodology, screening papers, results, and critical analysis. The section set is not meant to change per query, but the report is substantially more comprehensive than Standard or High Quality.
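The expansion pipeline described above can be sketched in four steps: transform the base query into multiple directions, run the searches in parallel, traverse citations and references of the hits, then dedupe and rerank by relevance. This is a minimal sketch under stated assumptions, not SciSpace's actual code; the search and citation functions are toy stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy search backend and citation graph (stand-ins for real APIs).
CORPUS = {
    "q1": [("p1", 0.9), ("p2", 0.7)],
    "q2": [("p2", 0.8), ("p3", 0.6)],
}
CITATIONS = {"p1": ["p4"], "p2": [], "p3": ["p5"], "p4": [], "p5": []}

def search(query):
    return CORPUS.get(query, [])

def cited_or_referenced(paper_id):
    # Assign a modest default score to papers found via traversal.
    return [(pid, 0.5) for pid in CITATIONS.get(paper_id, [])]

def deep_review(base_query):
    # 1. Transform the base query into multiple search directions.
    transformed = [f"{base_query}1", f"{base_query}2"]
    # 2. Run the transformed queries in parallel.
    with ThreadPoolExecutor() as pool:
        hits = [h for result in pool.map(search, transformed) for h in result]
    # 3. Traverse citations/references of the newly found papers.
    for paper_id, _ in list(hits):
        hits.extend(cited_or_referenced(paper_id))
    # 4. Dedupe (keeping the best score per paper) and rerank by relevance.
    best = {}
    for paper_id, score in hits:
        best[paper_id] = max(score, best.get(paper_id, 0.0))
    return sorted(best, key=best.get, reverse=True)

ranking = deep_review("q")
```

Step 3 is what drives the coverage jump: papers like `p4` and `p5` never match any transformed query directly, yet still enter the ranked pool through the citation graph.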
The non-agentic limitations become clearer when comparing report control and workflow flexibility. Standard/High Quality/Deep Review outputs are templated: users can’t request arbitrary file formats through instructions (beyond built-in export), can’t add or remove report sections, can’t target specific databases or journal sites via the report prompt, and can’t easily run follow-up tasks on top of prior results as part of the same instruction chain.
SciSpace’s agentic approach addresses those constraints by letting the system interpret detailed criteria and execute multi-step tasks. In the example, the agent searches PubMed for peer-reviewed research published after 2020, transforms the query into PubMed-compatible forms, runs multiple PubMed queries, combines and reranks results, and then produces a Word-formatted report with a target length (1500 words) and specific content requirements—strengths and limitations, diagnostic analysis, current consensus, open challenges, future research directions, and more. The agent also supports iterative refinement via follow-up questions, and it can export in common formats (including Word and PDF) and save papers into a SciSpace library with folder organization for later retrieval.
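The "PubMed-compatible forms" step can be illustrated by mapping structured inclusion criteria onto PubMed's query syntax. The `[tiab]`, `[dp]`, and `[pt]` field tags are real PubMed syntax, but treating `journal article[pt]` as a proxy for peer review is an assumption (PubMed has no direct peer-reviewed filter), and this builder is a hypothetical sketch, not SciSpace's transformation logic.

```python
def build_pubmed_query(topic_terms, published_after=None, journal_articles_only=True):
    """Compose a PubMed query from topic terms plus date/type filters."""
    parts = ["(" + " OR ".join(f'"{t}"[tiab]' for t in topic_terms) + ")"]
    if published_after is not None:
        # "published after 2020" becomes a date range starting in 2021;
        # 3000 is the conventional open-ended upper bound.
        parts.append(f"{published_after + 1}:3000[dp]")
    if journal_articles_only:
        # Assumption: publication type as a rough peer-review proxy.
        parts.append("journal article[pt]")
    return " AND ".join(parts)

query = build_pubmed_query(["deep learning", "diagnosis"], published_after=2020)
```

An agent would run several such variants (different term combinations, MeSH expansions, and so on), then combine and rerank the merged result set before drafting the report.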
Cornell Notes
SciSpace’s literature review tool offers three non-agentic modes (Standard, High Quality, and Deep Review); the video contrasts them with an agentic workflow that can execute multi-step, customizable tasks. Standard and High Quality generate relevance-sorted paper tables and insights from top results, with High Quality producing more detailed writing. Deep Review expands coverage by transforming the query into multiple search directions, running parallel searches, and then traversing citations and references to find additional papers; it can produce far larger paper sets and a comprehensive report with fixed sections. The agentic approach goes further: it can follow detailed inclusion criteria (e.g., PubMed, peer-reviewed, after 2020), generate a report in a requested format and length, and support iterative follow-ups to modify outputs. This matters because it reduces missed literature and increases control over scope, sources, and deliverables.
- How do Standard and High Quality differ in SciSpace’s literature review outputs?
- What makes Deep Review meaningfully different from Standard/High Quality?
- Why does Deep Review reduce the risk of missing relevant papers?
- What constraints limit non-agentic literature review reports?
- How does the agentic workflow handle detailed research requirements and deliverables?
Review Questions
- When would you choose High Quality over Standard, and what changes in the output besides length?
- Describe the three expansion mechanisms Deep Review uses to increase coverage (query transformation, parallel searches, citation/reference traversal).
- What specific capabilities does the agentic approach add that non-agentic modes cannot provide (report customization, source targeting, follow-ups, or export control)?
Key Points
1. Standard mode generates a short insight from the top five (optionally top 10) relevance-ranked papers and supports add-column extraction for deeper per-paper comparison.
2. High Quality keeps the same workflow structure but produces longer, more detailed insights by using a higher-capability model.
3. Deep Review expands coverage by transforming the query into multiple search directions, running parallel searches, and then traversing citations and references to find additional relevant papers.
4. Deep Review produces a comprehensive report with fixed sections, but it doesn’t support changing the section set based on prompt instructions.
5. Non-agentic modes are limited in workflow flexibility: they restrict prompt-driven customization such as arbitrary report structure changes, targeted source selection, and iterative follow-up tasks.
6. The agentic workflow can follow detailed inclusion criteria (e.g., PubMed, peer-reviewed, after 2020), generate a report in a requested format and length, and support iterative follow-up modifications.
7. After generating results, the agentic workflow can export deliverables and save papers into a SciSpace library with folder organization for later reuse.