
This is How Top Researchers Are Using AnswerThis (Safely)

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to their channel.

TL;DR

AnswerThis supports two primary research entry points: quick Q&A for fast, filtered answers and full literature review mode for a configurable, sectioned draft with citations.

Briefing

AnswerThis is positioned as an all-in-one research assistant that can move a researcher from a single question to a structured literature review draft—then further into analysis and writing support—without forcing users to juggle multiple separate tools. The core value is the workflow: start with a quick Q&A for fast, filtered answers, or run a full literature review that generates an in-depth, sectioned draft with citations, tables, and a “canvas” where follow-up questions and downstream tasks can be chained.

For quick inquiries, AnswerThis offers a rapid Q&A that returns a simple text response and a more research-oriented “canvas” view. The key differentiator is the paper filter: users can constrain results by citation count, journal quality, publication type, and publication date ranges (start and end dates). In the example question about how exfoliants enhance skin texture, the output is presented in a referenced, scrollable format that links information directly to citations, making it easier to verify claims and drill into sources.
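
AnswerThis does not expose a public API for these filters, so the snippet below is only a conceptual sketch: the PaperFilter name and its fields are hypothetical, modeling the constraints described above rather than any real product interface.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PaperFilter:
    """Hypothetical model of the quick Q&A paper filter (not a real API)."""
    min_citations: Optional[int] = None    # e.g. only well-cited work
    journal_quality: Optional[str] = None  # e.g. "Q1" or "Q2" (journal quartile)
    publication_type: Optional[str] = None # e.g. "journal article", "review"
    start_date: Optional[date] = None      # publication date range: start
    end_date: Optional[date] = None        # publication date range: end

# Example: constrain the exfoliant question to recent, well-cited journal work
exfoliant_filter = PaperFilter(
    min_citations=20,
    journal_quality="Q2",
    publication_type="journal article",
    start_date=date(2020, 1, 1),
)
```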

For deeper work, AnswerThis defaults to a “full review” mode that can be configured with literature review settings such as the number of main sections, sub-points per section, topics to cover, minimum citations (raised in the demo to 20), and journal quality (set to Q2 in the example). After roughly five minutes, it produces a detailed first draft of a literature review in a single-column layout, complete with tables and a reference list. A notable detail: the system may identify hundreds of papers (e.g., 372 found) but include a smaller subset in the draft (e.g., 14 included), which frames the output as a curated starting point rather than an exhaustive bibliography.
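
The review settings can be thought of the same way. This is a minimal sketch, assuming hypothetical names (ReviewSettings and its fields), of the configuration shown in the demo: section structure, topics, a minimum citation count of 20, and a Q2 journal-quality floor.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewSettings:
    """Hypothetical model of the full-review configuration from the demo."""
    main_sections: int = 5          # number of top-level sections in the draft
    subpoints_per_section: int = 3  # sub-points under each section
    topics: list[str] = field(default_factory=list)  # topics to cover
    min_citations: int = 20         # raised to 20 in the demo
    journal_quality: str = "Q2"     # journal-quality floor from the demo

demo_settings = ReviewSettings(
    topics=["chemical exfoliants", "physical exfoliants", "skin barrier"],
)
```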

The real leverage comes from the canvas workflow that sits under the generated review. From there, users can ask follow-up questions—such as “What are the research gaps?”—and receive targeted answers with additional resources. A “notebook” feature then turns the generated material into an editable document (similar to Google Docs), allowing formatting, highlighting, and exporting, while saving drafts into a personal library.

Beyond writing, the canvas supports multiple research operations without leaving the workspace. Users can “chat with papers” by selecting specific papers from the table, optionally filtering by date (e.g., selecting papers from 2020), then asking iterative questions about main findings or limitations. They can also create new tables that extract specific fields per paper, adding columns such as research gaps or future work, and even inserting custom prompts that specify what to extract from each abstract.
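
To make the table idea concrete, here is a minimal sketch of per-paper extraction. Both the paper records and the extract_field helper are invented for illustration; in AnswerThis the extraction is driven by a custom prompt attached to each column.

```python
# One row per paper, one column per extracted field.
papers = [
    {"title": "AHA exfoliation and epidermal renewal", "abstract": "..."},
    {"title": "BHA penetration in sebaceous skin", "abstract": "..."},
]

def extract_field(abstract: str, prompt: str) -> str:
    """Placeholder for prompt-driven extraction over a single abstract."""
    return f"<answer to {prompt!r}>"  # the real tool returns model output

table = [
    {
        "paper": p["title"],
        "research_gaps": extract_field(p["abstract"], "What gaps remain?"),
        "future_work": extract_field(p["abstract"], "What future work is proposed?"),
    }
    for p in papers
]
```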

AnswerThis also includes bibliometric analysis for a visual overview of a set of papers, showing publications by year, citations by year, combined publication/citation metrics, citation impact, word clouds, top terms, and top authors, which is useful for a fast “lay of the land” check. Additional capabilities include searching for new papers via a fresh prompt (example: latest treatments for Alzheimer’s disease), citation mapping, and diagram-style outputs such as mind maps and user-journey diagrams. Some agent features are marked “coming soon,” but the workflow already supports a full loop: question → curated review draft → extraction/analysis → notebook-ready writing support, aimed at making academic research and drafting safer and more efficient.
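
The bibliometric counts themselves are simple aggregations. The sketch below, using invented paper records, shows how publications by year, citations by year, and top terms could be derived from a selected subset; it is illustrative only, not how AnswerThis computes them internally.

```python
from collections import Counter

# Invented paper records; in the demo this runs over a selected subset.
papers = [
    {"year": 2020, "citations": 41, "keywords": ["exfoliant", "stratum corneum"]},
    {"year": 2021, "citations": 18, "keywords": ["AHA", "exfoliant"]},
    {"year": 2021, "citations": 55, "keywords": ["BHA", "skin barrier"]},
]

pubs_by_year = Counter(p["year"] for p in papers)  # publications per year
cites_by_year = Counter()                          # citations per year
for p in papers:
    cites_by_year[p["year"]] += p["citations"]
top_terms = Counter(kw for p in papers for kw in p["keywords"]).most_common(5)

print(pubs_by_year)   # Counter({2021: 2, 2020: 1})
print(cites_by_year)  # Counter({2021: 73, 2020: 41})
print(top_terms)      # [('exfoliant', 2), ('stratum corneum', 1), ...]
```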

Cornell Notes

AnswerThis is presented as a research workflow tool that can take a user from an initial question to a structured literature review draft with citations, then extend that work through analysis and writing support. It offers two main entry points: quick Q&A with strong paper filters (citations, journal quality, date ranges) and a full literature review mode with configurable section structure and citation thresholds. The “canvas” is where follow-up questions, table creation, and paper-level Q&A happen, while the “notebook” turns outputs into an editable, exportable document saved in a library. Additional layers include “chat with papers,” custom extraction tables (e.g., research gaps, future work), and bibliometric analysis with word clouds and top terms/authors. The practical takeaway is chaining outputs into a writing-ready process rather than treating results as a one-off answer.

How does AnswerThis’s quick Q&A differ from its full literature review workflow?

Quick Q&A is built for fast answers using a research paper filter. Users can constrain results by citation count, journal quality, publication type, and publication date ranges, then get a referenced response in a canvas-like layout. Full literature review mode is the default for deeper work: it adds literature review settings such as the number of main sections and sub-points, topics to cover, minimum citations (raised to 20 in the demo), and journal quality (set to Q2). After several minutes, it generates an in-depth, sectioned draft with tables and a reference list.

Why might a literature review include only a small number of citations even when hundreds of papers are found?

The system can identify a large pool of relevant papers (the demo cites 372 found) but include a smaller subset in the actual draft (14 included). That means the output functions as a curated first draft rather than an exhaustive literature inventory. The references remain available for deeper exploration, and users can further interrogate sources via the canvas tools.

What does the “canvas” enable that makes it more than a static draft?

The canvas acts as a workspace under the generated review. It supports follow-up questions (e.g., asking for research gaps), paper-level interaction (“chat with papers”), and creating new tables that extract specific information per paper. Importantly, these actions add to the canvas rather than forcing users into separate tools, enabling a chained workflow from synthesis to targeted extraction.

How does “chat with papers” work in practice?

Users select papers from the table and then open a “chat with papers” interface. The demo shows filtering by start date (e.g., selecting papers from 2020) and then asking iterative questions about main findings and limitations. The workflow is designed for exploratory stages: answering questions while staying grounded in the selected set of papers.

What is the purpose of creating a new table inside the canvas?

A new table lets users refine the literature set and extract structured fields per paper. Each row corresponds to a paper, and users can add columns like “research gaps” or “future work.” The demo also highlights custom prompts for what to extract from each abstract, turning qualitative reading into a structured dataset for writing and comparison.

What does bibliometric analysis add to the research workflow?

Bibliometric analysis provides a visual and quantitative overview of a selected set of papers. It can show publications by year, citations by year, combined publication/citation metrics, citation impact, word clouds, top terms, and top authors. In the demo, selecting a subset (e.g., 11 papers) generates these summaries, helping researchers sanity-check whether key keywords, authors, and terms are missing before drafting or narrowing focus.

Review Questions

  1. When would a researcher choose quick Q&A over full literature review mode, and what filters matter most in each?
  2. How do notebook, canvas, and tables work together to turn citations into a writing-ready literature review?
  3. What kinds of structured outputs can be extracted per paper using the “create a new table” feature?

Key Points

  1. AnswerThis supports two primary research entry points: quick Q&A for fast, filtered answers and full literature review mode for a configurable, sectioned draft with citations.

  2. Paper filters in quick Q&A include citation count, journal quality, publication type, and publication date ranges, enabling targeted literature discovery.

  3. Full literature review settings let users control structure (main sections and sub-points), topic coverage, minimum citations, and journal quality before generating a draft.

  4. The canvas is the workflow hub: it enables follow-up questions, paper-level Q&A, and adding new tables without leaving the workspace.

  5. The notebook feature converts generated material into an editable, exportable document saved in a personal library, reducing the friction of copying text elsewhere.

  6. “Chat with papers” supports iterative questioning over a selected (and optionally date-filtered) set of papers, making early-stage exploration faster.

  7. Bibliometric analysis adds a field-level snapshot using metrics and visuals like publications/citations by year, word clouds, and top authors/terms.

Highlights

Quick Q&A pairs a natural-language question with paper filters for citations, journal quality, publication type, and dates, producing referenced answers that can be drilled into.
Full review mode generates a structured literature review draft with tables and citations, even when it finds hundreds of papers but includes a curated subset in the draft.
The canvas turns one prompt into a chain of tasks—follow-up questions, paper chat, and custom extraction tables—while the notebook handles the writing.
Bibliometric analysis provides a fast “lay of the land” check using publications/citations trends, citation impact, and word clouds/top terms/top authors.
Citation mapping and diagram-style outputs (like mind maps and user-journey diagrams) extend beyond text synthesis into visual research mapping.
