This AI Tool Replaced Perplexity for My Research (Here’s Why)
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Consensus is positioning itself as a faster, more source-grounded alternative to Perplexity for research workflows—especially after adding a “Deep” mode that runs a structured literature review across large paper sets and returns a publication-ready synthesis with traceable references.
The walkthrough highlights a revamped interface with new controls for research depth and follow-up interaction. Instead of only generating a standard consensus-style answer, users can switch to a Deep research option described as a comprehensive literature review of 50 papers. A sample question—whether content analysis can be used effectively in sociology research—demonstrates the workflow: select Deep, run the query, and watch the system progress through stages like screening and eligibility checks before producing a report built from included studies. The key promise is time savings: tasks that typically take hours—identifying relevant literature, screening it, and surfacing contrasting perspectives—are handled automatically, with visible progress while the review runs.
The transcript also quantifies the scale behind the scenes. For a separate example about excessive sodium consumption and hypertension, the sidebar reports that 947 papers were identified, screened, narrowed to eligible studies, and ultimately reduced to 50 included sources for the final report. The output then pairs a consensus meter with a reference-by-reference visual summary. The consensus meter shows how strongly the included papers align with the research question: in that example, 96% answered “yes,” the remainder were “possibly/mixed,” and none answered “no,” turning the literature review into an at-a-glance evidence map.
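The meter itself is essentially a tally of per-paper stances converted to percentage shares. A minimal sketch of that aggregation, not Consensus's actual implementation (the label names and counts here are illustrative, matching the 96%-yes example above):

```python
from collections import Counter

def consensus_meter(labels):
    """Aggregate per-paper stance labels ('yes', 'possibly', 'no')
    into percentage shares, like the consensus meter described above."""
    counts = Counter(labels)
    total = len(labels)
    return {stance: round(100 * counts.get(stance, 0) / total, 1)
            for stance in ("yes", "possibly", "no")}

# Example: 48 of 50 included papers answer "yes", 2 are mixed.
meter = consensus_meter(["yes"] * 48 + ["possibly"] * 2)
# → {'yes': 96.0, 'possibly': 4.0, 'no': 0.0}
```

With 50 included papers, each study shifts the meter by two percentage points, which is why the 947-to-50 screening funnel matters: the meter only summarizes what survived inclusion.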
Beyond the headline synthesis, the Deep output is presented as a structured research brief: an introduction, a color-coded reference list (green for agreement, mixed/unknown for uncertainty), a quick methods outline, key papers, a discussion, a claims-and-evidence table, research gaps, and open research questions. The research gaps section is framed as especially actionable. A matrix links topics to where evidence is missing; for instance, it indicates that longer-term randomized controlled trials on cardiovascular disease events in older children and adolescents are “unknown” in the literature, suggesting concrete angles for new studies.
The workflow extends past Deep mode. In Quick or Pro mode, Consensus still provides the familiar consensus meter and reference set, but adds a more conversational follow-up bar that lets users ask additional questions using the existing results as context. Other new utilities aim at writing and planning: “Draft and outline” can generate referenced outlines for tasks like blog posts (e.g., pros and cons of a carbon tax) and grant proposals (e.g., methane-eating microbes), including sections such as background, rationale, objectives, and research plans. A “create a table” feature supports comparison tables (example: 5G frequency bands), and the interface includes filters such as published year, journal rank (e.g., Q1), and options to exclude preprints.
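The filters mentioned above (published year, journal rank, preprint exclusion) amount to simple predicates over the candidate pool. A hedged sketch with a hypothetical `Paper` record, just to show the kind of narrowing the interface exposes:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    journal_quartile: str  # e.g. "Q1" through "Q4"
    preprint: bool

def apply_filters(papers, min_year=2015, quartiles=("Q1",), exclude_preprints=True):
    """Keep only papers passing the year, journal-rank, and preprint filters,
    mirroring the UI options described above (defaults are illustrative)."""
    return [p for p in papers
            if p.year >= min_year
            and p.journal_quartile in quartiles
            and not (exclude_preprints and p.preprint)]

pool = [
    Paper("Sodium RCT", 2020, "Q1", False),       # kept
    Paper("Older cohort study", 2010, "Q1", False),  # fails year filter
    Paper("Q2 review", 2021, "Q2", False),        # fails rank filter
    Paper("Preprint analysis", 2022, "Q1", True), # excluded as preprint
]
kept = apply_filters(pool)
# → only "Sodium RCT" survives
```

The design point is that filtering happens before synthesis, so restricting to Q1 journals or excluding preprints changes which papers the consensus meter and report are built from.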
Overall, the pitch is that Consensus delivers rigorous outputs grounded in peer-reviewed research, with exportable citations (RIS/AIS for Zotero) and reduced hallucination risk because information is extracted directly from published studies, making it a one-stop tool for evidence synthesis, gap spotting, and drafting.
Cornell Notes
Consensus adds a “Deep” research mode that performs a structured literature review at scale, then returns a synthesized report built from included studies. In an example, 947 papers were identified and screened down to 50 included sources, producing a color-coded evidence summary plus a consensus meter (e.g., 96% “yes” with no “no” in the sodium/hypertension case). The Deep output also includes methods, key papers, discussion, claims-and-evidence tables, and explicit research gaps tied to where evidence is missing. New Quick/Pro features add conversational follow-ups, while “Draft and outline” and “create a table” help turn findings into writing and structured comparisons with references attached.
What does “Deep” mode do differently from standard consensus-style results?
How large is the paper pool behind the final Deep report?
How does the consensus meter help interpret the evidence quickly?
What makes the research gaps section useful for planning new work?
How do follow-up questions work after generating an initial answer?
What writing and formatting tools are added beyond research synthesis?
Review Questions
- When using Deep mode, what stages of the literature review are performed before the final included set is produced?
- How would you use the consensus meter plus filtering to test whether a research question is supported versus mixed?
- What kinds of outputs besides evidence summaries can Consensus generate, and how are references handled in those outputs?
Key Points
1. Consensus offers a Deep research mode that performs literature review steps like identification, screening, eligibility checking, and inclusion before generating a synthesis.
2. Deep mode is described as producing reports from 50 included papers, while still searching a much larger pool (e.g., 947 identified, narrowed down to 50 included).
3. A color-coded reference display plus a consensus meter turns evidence strength into a quick visual snapshot, distinguishing support vs. mixed vs. no.
4. The Deep output includes research gaps and open research questions, including a matrix that flags where evidence is “unknown” and suggests potential study directions.
5. New conversational follow-ups in Quick/Pro mode let users ask additional questions using the initial results as context, with options to go deeper.
6. “Draft and outline” generates referenced outlines for writing tasks like blog posts and grant proposals, reducing the blank-page problem.
7. Filtering options (published year, journal rank such as Q1, and excluding preprints) help users focus on higher-quality or more relevant sources.