Essential AI Research Assistants: Tools Transforming Research Today!
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
AI research assistants are most useful when they match how researchers actually work: starting with a question, grounding it in peer-reviewed evidence, then expanding outward through related literature. The strongest “first stop” tools in the transcript fall into three flavors—semantic search for specific answers, broad web-backed exploration when the question is fuzzy, and literature-management assistants that let researchers interrogate PDFs they already collected.
For semantic search that turns a research question into a paper-backed answer, Consensus is positioned as a fast way to get a summary of the prevailing consensus plus a list of supporting papers. It also includes a “consensus meter,” giving a quick sense of how strongly the literature aligns. Illicit is presented as a similar workflow focused on academic, peer-reviewed sources: enter a research question, get a literature-derived response, and then scroll through the underlying papers. The transcript even uses an odd example (whether the smell of feet, especially stress-induced odor, is aversive to animals) to show that these systems can still return a scientific answer and trace it back to papers.
When researchers already have a direction but want a more all-in-one assistant, Elicit is highlighted as doing “almost everything” in this category. In the example workflow, a query such as whether “beards” are better for health yields a set of papers, an insight summary, and AI-generated mini-summaries in a table format. The emphasis is on speed: instead of manually scanning dozens of studies, researchers can quickly identify what each paper claims and then decide what to read in full.
For broader starting points—especially when a question isn’t strictly academic—Perplexity is recommended. It searches the web and returns answers with references, mixing sources that can include academic papers alongside websites, YouTube, and Reddit. A key feature described is “Pro search,” which first interprets what the question means (e.g., clarifying the type of relaxation sought) and then gathers a wide set of sources, including peer-reviewed material.
Once researchers have a library of PDFs, the transcript shifts to tools that turn stored papers into an interactive research workspace. Argo (described as relatively new) lets users upload and paste PDFs, then generates section-by-section summaries, key insights, and related papers. It also offers deeper modes such as critique—surfacing premises, assumptions, and background context—and a chat interface where users can select text for explanations or ask questions about the document.
Finally, the transcript points to Connected Papers as a way to map a field from a seed paper. Using “derivative works,” it builds a network view showing who built on the original work over time, enabling researchers to jump to newer studies and then open them for reading. The overall message: the best assistant depends on whether the researcher needs consensus grounding, broad discovery, PDF-level interrogation, or citation-network expansion.
Cornell Notes
The transcript recommends AI research assistants in four practical stages of doing research: (1) semantic search for paper-backed answers, (2) broad web exploration when the question is unclear, (3) PDF-based assistants that summarize and critique papers already collected, and (4) citation-network mapping from a seed paper. Consensus and Illicit focus on turning a research question into summaries plus lists of relevant academic papers, with Consensus adding a “consensus meter.” Elicit is positioned as an all-in-one semantic search tool that returns paper sets, insights, and AI-generated summaries in a table. Perplexity’s Pro search broadens discovery by searching across web sources while still including references to peer-reviewed work. Argo and Connected Papers then help researchers expand and interrogate their existing literature through interactive summaries/critique and derivative-work maps.
- How do Consensus, Illicit, and Elicit differ when the goal is to answer a research question using peer-reviewed literature?
- What role does Perplexity play when a researcher doesn’t know where to start or the question is more general than academic?
- How does Argo help researchers move beyond reading PDFs manually?
- What do “related papers” and “review articles” add to the literature-expansion workflow in Argo?
- How does Connected Papers support discovering newer work from an older “seed” paper?
Review Questions
- When would a researcher choose Consensus or Illicit over Elicit, based on the transcript’s described outputs?
- What specific feature of Perplexity’s Pro search helps when a question is ambiguous?
- How do Argo’s critique and chat-with-paper functions change the way a researcher interacts with a PDF?
Key Points
1. Consensus turns a research question into a consensus summary plus a list of supporting papers, including a “consensus meter” to show agreement strength.
2. Illicit provides a question-to-peer-reviewed-papers workflow, returning literature-derived answers that can be traced by scrolling through the cited papers.
3. Elicit speeds up literature review by returning a set of papers, an insight summary, and AI-generated per-paper mini-summaries in a table.
4. Perplexity’s Pro search broadens discovery by searching across web sources while still providing references that can include peer-reviewed work.
5. Argo converts an uploaded PDF library into an interactive research workspace with section summaries, key insights, critique (premises/assumptions), and a chat interface for targeted questions.
6. Connected Papers helps researchers expand from a seed paper by mapping derivative works, making it easier to find newer studies that built on earlier results.