
Chat with papers you find or upload

Elicit · 5 min read

Based on Elicit's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Elicit Notebook chat lets users ask questions across a user-selected set of papers drawn from Elicit’s database, uploaded PDFs, or both.

Briefing

Elicit’s new Notebook features add a flexible “chat with papers” workflow that lets researchers ask questions across papers they either search for in Elicit’s database, upload as PDFs, or combine from both sources. The core value is control: users can select a specific set of papers, then switch into an interactive chat mode to compare findings, drill into details, and refine which papers matter—without losing the grounding that comes from working directly with real academic documents.

The workflow starts from a new Notebook screen where users can search for papers on a topic (for example, “long-term effects of cash transfers”) and simultaneously upload their own documents. As results load, users can select any subset of papers—either from a single search query, from the uploaded set, or from a mix of both. Once the desired papers are highlighted, adding a new step opens chat mode tied to exactly that selection.

In chat mode, users can pose tasks such as “compare and contrast the papers.” Elicit highlights the selected papers involved in the question (four, in the demo), and users can choose whether to rely on full text when available or fall back to abstracts. That choice affects speed and cost: using abstracts is faster and cheaper, while full text supports deeper answers.

A key differentiator is that this chat experience sits alongside Elicit’s structured capabilities. Users can incorporate other Elicit actions—such as extracting main findings into columns—then use that structured output to decide which papers to chat with next. The interface supports an iterative loop: structured analysis helps narrow the set, and open-ended chat helps interpret and interrogate it.

The system also provides transparency and traceability. Clicking a paper reveals its full text and the tables referenced in the paper, and users can extract data from those tables when high-accuracy mode is enabled (Elicit Plus). Chat supports follow-up questions on the same paper set, including requests for specific study details like sample sizes. If the direction changes, users can select a new set of papers and start another chat step.

Under the hood, Elicit’s grounding is a major selling point. Because the tool operates over a database of public papers and can also work with uploaded PDFs, it avoids fabricating nonexistent citations—unlike some chat-first AI tools that may hallucinate references. Notebooks extend beyond chat as well: users can summarize selected papers, run additional queries to fetch more relevant literature, and then keep building the notebook in a tabular format when that better matches research needs.

Overall, the Notebook chat workflow is positioned as a practical bridge between exploratory questioning and structured literature analysis—especially useful when researchers don’t yet know what they’re looking for and need to scope criteria, evidence types, and study characteristics as they go.

Cornell Notes

Elicit’s Notebook features enable “chat with papers” across a user-controlled set of documents drawn from Elicit’s public-paper database, uploaded PDFs, or both. Researchers select specific papers, then ask questions like comparing and contrasting findings, with options to use full text or abstracts for faster, cheaper responses. The chat experience integrates with Elicit’s structured workflows, including extracting main findings into columns and using those outputs to decide which papers to interrogate next. Follow-up questions (e.g., sample sizes) can be asked repeatedly for the same set, and users can switch to new paper sets as their research focus evolves. Grounding in real papers helps prevent hallucinated citations.

How does a user control which papers are included in a chat?

From the Notebook screen, users can search Elicit’s database and/or upload PDFs (and import from Zotero). After results load, they select the papers they want—either from one search query, from the uploaded set, or from a combination. Clicking “add a new step” then starts chat mode using only that selected set, with the involved papers highlighted for reference.

What’s the difference between using full text versus abstracts in chat?

When starting chat, users can choose whether Elicit uses full text (when available) or abstracts. Full text supports deeper answers but can be slower and costlier; abstracts make responses faster and cheaper. This choice lets researchers balance speed against depth depending on the stage of their literature review.

How does chat integrate with Elicit’s structured analysis tools?

Chat isn’t isolated. Users can run other Elicit actions alongside it—such as extracting main findings into columns—then use that structured output to decide which papers to chat with next. This creates an iterative workflow: structured exploration narrows the set, and open-ended chat helps interpret and compare what matters.
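
To make the shape of that loop concrete, here is a minimal, hypothetical sketch. None of the names below are Elicit’s real API: the Paper record, extract_main_finding, and ask_chat are illustrative stand-ins for the structured-extraction and chat steps described above.

```python
# Hypothetical sketch only: Paper, extract_main_finding, and ask_chat are
# illustrative stand-ins, not Elicit's actual API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Paper:
    title: str
    abstract: str
    full_text: Optional[str] = None  # full text is not always available


def extract_main_finding(paper: Paper) -> str:
    """Stand-in for a structured 'main findings' column extraction."""
    # A real system would use an LLM; here we just take the abstract's first sentence.
    return paper.abstract.split(".")[0].strip()


def ask_chat(papers: list[Paper], question: str, use_full_text: bool = False) -> str:
    """Stand-in for chat over an explicitly selected paper set."""
    sources = [
        p.full_text if (use_full_text and p.full_text) else p.abstract
        for p in papers
    ]
    # A real system would send the question plus these sources to a language model.
    return f"Answering {question!r} over {len(sources)} selected paper(s)."


papers = [
    Paper("Cash transfers A", "Consumption rose sharply. Gains persisted for five years."),
    Paper("Cash transfers B", "Short-run schooling gains. No detectable long-term income effect."),
]

# Step 1: a structured pass builds a 'main findings' column for every paper.
findings = {p.title: extract_main_finding(p) for p in papers}

# Step 2: the structured output narrows the selection (here, papers mentioning consumption).
selected = [p for p in papers if "consumption" in findings[p.title].lower()]

# Step 3: open-ended chat runs over just that selection, using abstracts for speed.
print(ask_chat(selected, "Compare and contrast long-term effects", use_full_text=False))
```

The point is the shape of the loop rather than the code itself: structured extraction produces something filterable, and the chat step always receives an explicit paper selection rather than an open-ended search.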

What kinds of follow-up questions are supported after the initial prompt?

After asking a question for a selected paper set, users can ask follow-ups on the same documents. The transcript highlights requests for study details such as sample sizes, and the conversation can continue with additional questions without restarting from scratch.

Why is grounding in real papers emphasized, and how is it achieved?

Elicit is described as operating over a database of public papers and also working with uploaded PDFs. That grounding is presented as a safeguard against hallucinated citations—unlike chat-based tools that may reference papers that don’t exist. The Notebook workflow inherits that “real paper” basis whether papers are searched or uploaded.

What additional capabilities beyond chat are available in notebooks?

Notebooks support more than chat. Users can summarize selected papers, select different sets of papers for new steps, run additional queries to retrieve more literature, and structure outputs in tabular form when that format better supports research planning and scoping.

Review Questions

  1. When would a researcher choose abstracts over full text in Elicit chat, and what tradeoffs are implied?
  2. Describe an iterative workflow that combines structured extraction (like main findings columns) with follow-up chat questions.
  3. How does selecting papers from search results versus uploaded PDFs change what the chat can answer?

Key Points

  1. Elicit Notebook chat lets users ask questions across a user-selected set of papers drawn from Elicit’s database, uploaded PDFs, or both.
  2. Users can start chat from a Notebook step after selecting papers, and the selected documents are explicitly tied to the question.
  3. Chat supports a choice between full text and abstracts, trading depth for speed and cost.
  4. Notebook workflows combine open-ended chat with structured Elicit actions like extracting main findings into columns.
  5. Follow-up questions (e.g., sample sizes) can be asked repeatedly for the same paper set, then replaced by chats over new selections.
  6. Grounding in real papers from the public database and uploaded documents is positioned as a defense against hallucinated citations.
  7. Notebooks also enable non-chat actions such as summarizing papers and running additional queries, with tabular structuring when useful.

Highlights

  • Chat mode is tied to an explicit paper selection, letting researchers compare and contrast a chosen set rather than relying on vague retrieval.
  • Switching between abstract-only and full-text modes changes speed and cost while keeping the conversation anchored to the selected documents.
  • The workflow blends structured extraction (columns of main findings) with open-ended questioning, supporting an iterative literature review loop.
  • Grounding in real papers—whether searched or uploaded—is presented as a way to avoid hallucinated references.
  • Follow-up questions like sample sizes can be layered onto the same paper set without restarting the process.

Topics

  • Notebook Chat
  • Paper Selection
  • Full Text vs Abstracts
  • Grounded Citations
  • Iterative Literature Review
