
How to Use Elicit AI, Literature Reviews + More: Beginner Tutorial and Research Tips!

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Elicit AI organizes research work into notebooks that keep questions, paper tables, extracted attributes, and follow-up steps in one place.

Briefing

Elicit AI is positioned as a fast, structured way to turn research questions into a navigable literature review without spending hours manually hunting, reading, and extracting details from papers. After logging in, the workflow centers on “notebooks,” which act as containers for a research theme, keep questions and outputs organized in a sidebar, and can be auto-titled if the user skips naming them. A paid Pro plan provides a larger credit allowance, but most core features still work without Pro, just with fewer credits.

The most common starting point is using Elicit as a question-driven search engine. Instead of relying on keyword-only queries, the system works best with clear, unambiguous research questions. For example, rather than a vague term, the tutorial reframes the query as “what is the best way to… for a healthy scalp.” Submitting the question generates a notebook and immediately returns a snapshot of top papers (eight in the shown case; fewer when not on Pro). Results appear in a table with columns for papers, AI-generated abstract summaries, and references that link directly to the underlying studies. This table can then be refined using filters such as requiring PDFs, selecting publication year, choosing study type, and including or excluding papers based on abstract keywords.

Beyond ranking and filtering, the workflow supports deeper extraction and iteration. Users can add custom columns to pull out specific attributes from the papers—such as whether studies include human trials—then optionally enable “high accuracy mode,” which costs more credits but reduces extraction mistakes by about half. The table can also be filtered based on the new column values (e.g., showing only rows where human trials are present). A floating “add new step” bar lets users extend the same notebook by asking follow-up questions, finding more papers, extracting data, or generating a list of concepts, effectively building a step-by-step literature review ledger.

When users already have PDFs, the “extract data from PDFs” notebook view becomes a second major capability. Papers can be uploaded via drag-and-drop or selected from a library, including integrations such as Zotero. Once documents are selected, Elicit again builds a table of the papers and lets users add a “chat with papers” step. This chat is in beta and can be “chatty,” so prompting may require more steering. A key cost-saving lever is the “use full text” toggle: keeping it off limits answers to abstracts, while turning it on enables cross-document questions grounded in the full text, such as identifying a research gap across the selected papers.

Finally, the “list of Concepts” feature targets coverage and breadth. Entering a topic like “treatment for hair loss” triggers a search across sources, deduplicates concepts, and returns a structured list (including counts like 211 concepts across 60 papers and 88 unique concepts). The concepts can be downloaded as a CSV for use in Excel, helping researchers map a field’s major themes early and avoid missing important sub-areas. The overall message is that Elicit’s notebook-based pipeline—question → papers → filtered tables → extracted attributes → concept mapping—compresses what used to require extensive reading into a more systematic, table-driven workflow.

Cornell Notes

Elicit AI organizes literature review work into “notebooks” built around research questions. A clear question (not just keywords) generates a table of top papers with AI-generated abstract summaries and direct references, which can be narrowed using filters like year, study type, and abstract keywords. Users can add custom extraction columns (e.g., whether papers include human trials) and optionally enable “high accuracy mode” to reduce mistakes, at the cost of more credits. For deeper analysis, uploaded PDFs can be used in a “chat with papers” step, with a “use full text” toggle to control cost and answer depth. The “list of Concepts” tool helps map a field’s breadth by returning deduplicated concepts and counts, downloadable as CSV.

How does Elicit AI turn a research question into a usable literature review starting point?

After login, Elicit centers work in notebooks. In the “find papers” workflow, users enter a clear, unambiguous research question (the tutorial contrasts vague terms with more specific phrasing like “best way to… for a healthy scalp”). Submitting the question auto-creates a notebook and returns a table of top papers (eight in the shown Pro example; fewer without Pro). Each row includes the paper plus an AI-generated abstract summary, and references link directly to the source. This table then becomes the base for filtering and further extraction.

What are the main ways to narrow down results once papers are listed?

Elicit provides table filters that can require a PDF, restrict publication year, select a study type, and include or exclude papers based on keywords found in abstracts. Sorting options include “most relevant,” “most recent,” and “least recent,” which helps bubble up papers that best match the research focus or prioritize recency.

How can users extract specific attributes (like human trials) from papers, and what does “high accuracy mode” change?

Users can add custom columns to the results table and describe what they want extracted. The tutorial demonstrates creating a column to determine whether studies contain human trials, with outputs such as “answer not found” or “not mentioned.” Enabling “high accuracy mode” increases credit usage but reduces extraction errors by about 50%, and the resulting column can be filtered (e.g., showing only “yes” for human trials).
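Beyond the filters built into Elicit’s table, the same kind of post-filtering can be done on an exported copy of the results. The sketch below is a minimal illustration, assuming the table has been exported to a CSV named “elicit_results.csv” with a custom column called “Human trials” and standard “Title” and “Year” columns; these file and column names are assumptions for illustration, not Elicit’s actual export schema.

```python
# Hypothetical post-processing of an Elicit results table exported as CSV.
# File name and column names ("Human trials", "Title", "Year") are assumptions.
import pandas as pd

papers = pd.read_csv("elicit_results.csv")

# Keep only rows where the custom extraction column indicates human trials.
human_trials = papers[papers["Human trials"].str.strip().str.lower() == "yes"]

print(f"{len(human_trials)} of {len(papers)} papers report human trials")
print(human_trials[["Title", "Year"]].to_string(index=False))
```

This mirrors the in-app filter on the custom column, but keeps a reproducible record of which papers passed the screen.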

What’s the difference between using abstracts only versus full text when chatting with PDFs?

In the “extract data from PDFs” workflow, Elicit can create a “chat with papers” step. A key control is “use full text.” Turning it off limits responses to abstracts, which can save credits when the answer is likely present in abstracts. Turning it on uses full text and supports more detailed cross-document questions, such as identifying a research gap across multiple papers.

How does the “list of Concepts” feature help researchers avoid missing parts of a field?

Instead of focusing on a single question, “list of Concepts” searches broadly for key concepts related to a topic, deduplicates them, and returns a structured concept list. The tutorial example for “treatment for hair loss” yields concept categories like hormone modifiers, surgical options, and minoxidil/Rogaine, along with counts (e.g., 211 concepts across 60 papers and 88 unique concepts). The concepts can be downloaded as a CSV for later use, helping researchers map the field early.
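For working with that export outside of Excel, a minimal sketch is shown below. It assumes the downloaded file is named “concepts.csv” and contains a “Concept” column plus a “Papers” count column; these names are illustrative and may not match the actual export layout.

```python
# Minimal sketch for exploring an exported "list of Concepts" CSV.
# File name and column names ("Concept", "Papers") are assumptions.
import pandas as pd

concepts = pd.read_csv("concepts.csv")

# Count how many distinct concepts remain after normalizing the names.
n_unique = concepts["Concept"].str.strip().str.lower().nunique()
print(f"{len(concepts)} concept rows, {n_unique} unique concepts")

# Rank the concepts that appear in the most papers.
top = concepts.sort_values("Papers", ascending=False).head(10)
print(top[["Concept", "Papers"]].to_string(index=False))
```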

Review Questions

  1. When generating a literature review in Elicit, why does phrasing a clear research question matter more than using vague keywords?
  2. What practical steps can be taken after Elicit returns a paper table to refine results and extract targeted information?
  3. How would you decide whether to turn “use full text” on or off when chatting with uploaded PDFs?

Key Points

  1. Elicit AI organizes research work into notebooks that keep questions, paper tables, extracted attributes, and follow-up steps in one place.
  2. Clear, unambiguous research questions produce better paper results than vague keyword searches, and Elicit auto-creates a notebook for the query.
  3. Paper tables can be refined with filters for PDF availability, publication year, study type, and abstract keyword inclusion/exclusion.
  4. Custom extraction columns let users pull specific attributes from papers (e.g., whether studies include human trials), with optional “high accuracy mode” to reduce mistakes.
  5. When working from PDFs, “chat with papers” supports cross-document Q&A, and the “use full text” toggle controls depth versus credit cost.
  6. The “list of Concepts” tool provides breadth by returning deduplicated concepts with counts and supports CSV export for offline planning.

Highlights

Notebooks turn a literature review into a step-by-step ledger: question → papers → filtered table → extracted columns → follow-up steps.
Custom columns can extract structured attributes from papers, and “high accuracy mode” cuts extraction mistakes by about half at higher credit cost.
“Chat with papers” can answer detailed questions across multiple documents when “use full text” is enabled, but abstracts-only mode can save credits.
“List of Concepts” helps map a research field’s coverage early by returning deduplicated concepts and downloadable CSV output.
