How to Use Elicit AI, Literature Reviews + More: Beginner Tutorial and Research Tips!
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Elicit AI organizes research work into notebooks that keep questions, paper tables, extracted attributes, and follow-up steps in one place.
Briefing
Elicit AI is positioned as a fast, structured way to turn research questions into a navigable literature review—without spending hours manually hunting, reading, and extracting details from papers. After logging in, the workflow centers on “notebooks,” which act as containers for a research theme, keep questions and outputs organized in a sidebar, and can be auto-titled if the user skips naming. A paid Pro plan provides more credits, but most core features still work without it, just with a lower credit allowance.
The most common starting point is using Elicit as a question-driven search engine. Instead of relying on keyword-only queries, the system works best with clear, unambiguous research questions. For example, rather than a vague term, the tutorial reframes the query as “what is the best way to… for a healthy scalp.” Submitting the question generates a notebook and immediately returns a snapshot of top papers (eight in the shown case; fewer when not on Pro). Results appear in a table with columns for papers, AI-generated abstract summaries, and references that link directly to the underlying studies. This table can then be refined using filters such as requiring PDFs, selecting publication year, choosing study type, and including or excluding papers based on abstract keywords.
Beyond ranking and filtering, the workflow supports deeper extraction and iteration. Users can add custom columns to pull out specific attributes from the papers—such as whether studies include human trials—then optionally enable “high accuracy mode,” which costs more credits but reduces extraction mistakes by about half. The table can also be filtered based on the new column values (e.g., showing only rows where human trials are present). A floating “add new step” bar lets users extend the same notebook by asking follow-up questions, finding more papers, extracting data, or generating a list of concepts, effectively building a step-by-step literature review ledger.
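The filter-on-an-extracted-column step can be mirrored offline once a table is exported. The sketch below is a minimal Python illustration of that idea; the rows and the `human_trials` field are hypothetical placeholders, not Elicit's actual export schema:

```python
# Filter an exported Elicit-style paper table on a custom extraction column.
# Rows and the "human_trials" attribute are illustrative, not real export data.
papers = [
    {"title": "Study A", "year": 2021, "human_trials": "yes"},
    {"title": "Study B", "year": 2019, "human_trials": "no"},
    {"title": "Study C", "year": 2022, "human_trials": "yes"},
]

# Keep only rows where the extracted attribute indicates a human trial,
# mirroring Elicit's column-value table filter.
with_trials = [p for p in papers if p["human_trials"] == "yes"]

print([p["title"] for p in with_trials])  # → ['Study A', 'Study C']
```

The same pattern generalizes to any custom column Elicit extracts: one boolean-like field per attribute, then a row filter on its value.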
When users already have PDFs, the “extract data from PDFs” notebook view becomes a second major capability. Users can upload papers via drag-and-drop or select them from a library, including integrations such as Zotero. After documents are selected, Elicit again builds a paper table, and a “chat with papers” step can be added. In beta, this chat can be “chatty,” so prompting may require more steering. A key cost-saving lever is the “use full text” toggle: keeping it off limits answers to abstracts, while turning it on enables cross-document questions grounded in full text—such as identifying a research gap across the selected papers.
Finally, the “list of Concepts” feature targets coverage and breadth. Entering a topic like “treatment for hair loss” triggers a search across sources, deduplicates concepts, and returns a structured list (including counts like 211 concepts across 60 papers and 88 unique concepts). The concepts can be downloaded as a CSV for use in Excel, helping researchers map a field’s major themes early and avoid missing important sub-areas. The overall message is that Elicit’s notebook-based pipeline—question → papers → filtered tables → extracted attributes → concept mapping—compresses what used to require extensive reading into a more systematic, table-driven workflow.
Cornell Notes
Elicit AI organizes literature review work into “notebooks” built around research questions. A clear question (not just keywords) generates a table of top papers with AI-generated abstract summaries and direct references, which can be narrowed using filters like year, study type, and abstract keywords. Users can add custom extraction columns (e.g., whether papers include human trials) and optionally enable “high accuracy mode” to reduce mistakes, at the cost of more credits. For deeper analysis, uploaded PDFs can be used in a “chat with papers” step, with a “use full text” toggle to control cost and answer depth. The “list of Concepts” tool helps map a field’s breadth by returning deduplicated concepts and counts, downloadable as CSV.
How does Elicit AI turn a research question into a usable literature review starting point?
What are the main ways to narrow down results once papers are listed?
How can users extract specific attributes (like human trials) from papers, and what does “high accuracy mode” change?
What’s the difference between using abstracts only versus full text when chatting with PDFs?
How does the “list of Concepts” feature help researchers avoid missing parts of a field?
Review Questions
- When generating a literature review in Elicit, why does phrasing a clear research question matter more than using vague keywords?
- What practical steps can be taken after Elicit returns a paper table to refine results and extract targeted information?
- How would you decide whether to turn “use full text” on or off when chatting with uploaded PDFs?
Key Points
1. Elicit AI organizes research work into notebooks that keep questions, paper tables, extracted attributes, and follow-up steps in one place.
2. Clear, unambiguous research questions produce better paper results than vague keyword searches, and Elicit auto-creates a notebook for the query.
3. Paper tables can be refined with filters for PDF availability, publication year, study type, and abstract keyword inclusion/exclusion.
4. Custom extraction columns let users pull specific attributes from papers (e.g., whether studies include human trials), with optional “high accuracy mode” to reduce mistakes.
5. When working from PDFs, “chat with papers” supports cross-document Q&A, and the “use full text” toggle controls depth versus credit cost.
6. The “list of Concepts” tool provides breadth by returning deduplicated concepts with counts and supports CSV export for offline planning.