
Essential AI Research Assistants: Tools Transforming Research Today!

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Consensus turns a research question into a consensus summary plus a list of supporting papers, including a “consensus meter” to show agreement strength.

Briefing

AI research assistants are most useful when they match how researchers actually work: starting with a question, grounding it in peer-reviewed evidence, then expanding outward through related literature. The strongest “first stop” tools in the transcript fall into three flavors—semantic search for specific answers, broad web-backed exploration when the question is fuzzy, and literature-management assistants that let researchers interrogate PDFs they already collected.

For semantic search that turns a research question into a paper-backed answer, Consensus is positioned as a fast way to get a summary of the prevailing consensus plus a list of supporting papers. It also includes a “consensus meter,” giving a quick sense of how strongly the literature aligns. Illicit is presented as offering a similar workflow focused on academic, peer-reviewed sources: enter a research question, get a literature-derived response, and then scroll through the underlying papers. The transcript even uses an odd example—whether the smell of feet, especially stress-induced odor, is aversive to animals—to show that these systems can still return a scientific answer and trace it back to papers.

When researchers already have a direction but want a more all-in-one assistant, Elicit is highlighted as doing “almost everything” in this category. In the example workflow, a query such as whether “beards” are better for health yields a set of papers, an insight summary, and AI-generated mini-summaries in a table format. The emphasis is on speed: instead of manually scanning dozens of studies, researchers can quickly identify what each paper claims and then decide what to read in full.

For broader starting points—especially when a question isn’t strictly academic—Perplexity is recommended. It searches the web and returns answers with references, mixing sources that can include academic papers alongside websites, YouTube, and Reddit. A key feature described is “Pro search,” which first interprets what the question means (e.g., clarifying the type of relaxation sought) and then gathers a wide set of sources, including peer-reviewed material.

Once researchers have a library of PDFs, the transcript shifts to tools that turn stored papers into an interactive research workspace. Argo (described as relatively new) lets users upload and paste PDFs, then generates section-by-section summaries, key insights, and related papers. It also offers deeper modes such as critique—surfacing premises, assumptions, and background context—and a chat interface where users can select text for explanations or ask questions about the document.

Finally, the transcript points to Connected Papers as a way to map a field from a seed paper. Using “derivative works,” it builds a network view showing who built on the original work over time, enabling researchers to jump to newer studies and then open them for reading. The overall message: the best assistant depends on whether the researcher needs consensus grounding, broad discovery, PDF-level interrogation, or citation-network expansion.

Cornell Notes

The transcript recommends AI research assistants in four practical stages of doing research: (1) semantic search for paper-backed answers, (2) broad web exploration when the question is unclear, (3) PDF-based assistants that summarize and critique papers already collected, and (4) citation-network mapping from a seed paper. Consensus and Illicit focus on turning a research question into summaries plus lists of relevant academic papers, with Consensus adding a “consensus meter.” Elicit is positioned as an all-in-one semantic search tool that returns paper sets, insights, and AI-generated summaries in a table. Perplexity’s Pro search broadens discovery by searching across web sources while still including references to peer-reviewed work. Argo and Connected Papers then help researchers expand and interrogate their existing literature through interactive summaries/critique and derivative-work maps.

How do Consensus, Illicit, and Elicit differ when the goal is to answer a research question using peer-reviewed literature?

Consensus takes a research question and returns an AI-generated summary of the prevailing consensus plus a list of supporting papers, along with a “consensus meter” to gauge agreement in the literature. Illicit follows a similar question-to-papers workflow but emphasizes finding academic, peer-reviewed sources and returning a literature-derived answer you can scroll through to see the papers. Elicit is framed as more comprehensive: it searches the literature, returns a set of papers (e.g., five in the example), provides an insight summary, and shows AI-generated mini-summaries in a table so researchers can compare studies quickly.

What role does Perplexity play when a researcher doesn’t know where to start or the question is more general than academic?

Perplexity is described as a broad starting point that searches the web and returns answers with references. Its Pro search first clarifies what the question means (for example, asking what type of relaxation is intended) and then gathers sources across categories such as websites, YouTube, and Reddit, while also including peer-reviewed papers among the references.

How does Argo help researchers move beyond reading PDFs manually?

Argo supports uploading or pasting PDFs into a literature workspace. It then generates summaries by section (e.g., introduction, experimental) and provides key insights for each part. It also offers a critique view that surfaces premises, assumptions, and contextual background—useful for building a researcher’s own “cheat sheet” of how a paper’s arguments are constructed. A chat interface lets users select text for explanations or ask questions about the paper.

What do “related papers” and “review articles” add to the literature-expansion workflow in Argo?

Within Argo, each paper can surface related papers, and the system can sort recommendations by how they relate—such as what a paper cited, who cited it, or whether it’s a survey/review. The transcript emphasizes review articles because they provide a field snapshot and reduce the confusion that can come from diving into many individual papers early.

How does Connected Papers support discovering newer work from an older “seed” paper?

Connected Papers builds a map from a seed paper and focuses on derivative works. The transcript describes using the “derivative works” view to see who built on the original research, including papers from different years (e.g., showing a 2023 paper alongside older ones). Researchers can then open those recommended papers and read forward from the original study.

Review Questions

  1. When would a researcher choose Consensus or Illicit over Elicit, based on the transcript’s described outputs?
  2. What specific feature of Perplexity’s Pro search helps when a question is ambiguous?
  3. How do Argo’s critique and chat-with-paper functions change the way a researcher interacts with a PDF?

Key Points

  1. Consensus turns a research question into a consensus summary plus a list of supporting papers, including a “consensus meter” to show agreement strength.

  2. Illicit provides a question-to-peer-reviewed-papers workflow, returning literature-derived answers that can be traced by scrolling through the cited papers.

  3. Elicit speeds up literature review by returning a set of papers, an insight summary, and AI-generated per-paper mini-summaries in a table.

  4. Perplexity’s Pro search broadens discovery by searching across web sources while still providing references that can include peer-reviewed work.

  5. Argo converts an uploaded PDF library into an interactive research workspace with section summaries, key insights, critique (premises/assumptions), and a chat interface for targeted questions.

  6. Connected Papers helps researchers expand from a seed paper by mapping derivative works, making it easier to find newer studies that built on earlier results.

Highlights

Consensus pairs an AI consensus summary with a “consensus meter” and a paper list, making it easier to judge how settled a topic is.
Perplexity’s Pro search clarifies ambiguous questions before searching and returns answers with references spanning web sources and peer-reviewed papers.
Argo’s critique view is designed to surface premises, assumptions, and background context—turning reading into argument analysis.
Connected Papers’ derivative-works map helps researchers jump from an older seed paper to newer follow-on studies quickly.