Find papers | Search over 125MM academic papers in Elicit
Based on Elicit's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Elicit’s “Find papers” workflow lets researchers search across roughly 200 million academic papers using natural-language queries, then turn the most relevant results into question-specific summaries and structured data. Instead of relying on keyword matching or complex Boolean strings, the system uses semantic search to find papers with similar meaning—often pulling in relevant work even when the exact keywords don’t overlap. That matters because it speeds up early-stage literature discovery, when the hardest part is usually figuring out what to search for and how to narrow down millions of candidates.
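The core idea behind semantic search can be sketched as ranking papers by the similarity of embeddings rather than by keyword overlap. The sketch below uses hand-made toy vectors and cosine similarity; a real system like Elicit's would use a learned text-embedding model, and nothing here reflects Elicit's actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings of each paper's title + abstract (hypothetical values).
papers = {
    "Invasive crayfish impacts on PNW streams": [0.9, 0.1, 0.2],
    "Quantum error correction codes": [0.0, 0.95, 0.1],
    "Long-term effects of non-native plants": [0.8, 0.05, 0.4],
}

# Toy embedding of the natural-language question.
query_embedding = [0.85, 0.1, 0.3]

# Rank papers by similarity of meaning, not shared keywords.
ranked = sorted(papers, key=lambda t: cosine(query_embedding, papers[t]),
                reverse=True)
```

Note that the top-ranked papers share no keywords with the query; similarity comes entirely from the (toy) embedding space.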
A typical session starts on the Elicit homepage (elicit.com), where the user selects the “Find papers” workflow. Users can ask a descriptive question such as: “What are the long-term effects of invasive species in the Pacific Northwest?” Elicit then searches its database and returns a ranked list of the most relevant papers, using titles and abstracts as the primary signals. The interface shows quick, dynamically generated summaries of each paper’s abstract tailored to the user’s question, helping readers judge relevance without opening every result.
Results appear in a table where each row corresponds to a paper and includes metadata such as authors, journal, citation count, and links to the DOI and (when available) the PDF. Users can click through to view the paper and see the abstract alongside the DOI landing page. Elicit also supports manual sorting and filtering: papers can be ordered by recency or citation count, constrained to date ranges, and filtered to include only open-access PDFs when full text is needed for richer extraction.
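The refinements described above amount to filtering and sorting rows of paper metadata. A minimal sketch, with illustrative field names rather than Elicit's actual schema:

```python
# Each row of the results table as a dict of metadata (illustrative fields).
papers = [
    {"title": "A", "year": 2015, "citations": 240, "open_access_pdf": True},
    {"title": "B", "year": 2021, "citations": 35,  "open_access_pdf": False},
    {"title": "C", "year": 2019, "citations": 110, "open_access_pdf": True},
]

# Constrain to a date range and keep only open-access PDFs,
# then order by citation count.
filtered = [p for p in papers
            if 2014 <= p["year"] <= 2020 and p["open_access_pdf"]]
by_citations = sorted(filtered, key=lambda p: p["citations"], reverse=True)

print([p["title"] for p in by_citations])  # → ['A', 'C']
```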
To get beyond basic paper lists, Elicit’s “columns” feature adds structured fields extracted from the papers. Predefined columns—especially common in biomedicine—can be added with a click, and users can create custom columns by specifying exactly what information they want. For example, a user can request the invasive species studied in each paper; Elicit extracts the relevant entities from the abstract or full text (depending on access) and displays the results in the table. The system also provides traceability: users can click quotes to see where the extracted information came from and open the underlying paper to verify context.
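A custom column can be thought of as an extraction instruction applied to every paper, returning both a value and the supporting quote so results stay traceable. The sketch below uses a toy keyword matcher where Elicit would use a language model; all names and data are hypothetical.

```python
# Hypothetical abstracts keyed by paper ID.
abstracts = {
    "paper-1": "We studied the invasive signal crayfish in Oregon streams.",
    "paper-2": "Effects of the invasive American bullfrog on native frogs.",
}

# Toy stand-in for a model: a fixed list of species names to look for.
SPECIES = ["signal crayfish", "American bullfrog"]

def extract_species(abstract):
    """Return the extracted value plus the text it came from,
    mimicking quote-level traceability."""
    for species in SPECIES:
        if species in abstract:
            return {"value": species, "quote": abstract}
    return {"value": None, "quote": None}

# One custom column: "invasive species studied", one cell per paper.
column = {pid: extract_species(text) for pid, text in abstracts.items()}
```

The point of returning the quote alongside the value is that a reader can always click through and verify the extraction in context.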
When semantic relevance misses relationships users care about—or when a topic is narrow—Elicit supports citation network searching, including citation trail and citation chasing. After selecting a seed set of papers (for instance, those focused on crayfish), Elicit searches the citation graph for both references and future citations, surfacing additional related papers that connect through the scholarly network.
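Citation chasing is a traversal of the citation graph in both directions: from a seed paper out to its references, and back in from papers that cite it. A toy one-hop sketch over a hand-made graph (edge A → B means "A cites B"):

```python
# Toy citation graph: paper -> papers it references.
cites = {
    "seed":   ["ref1", "ref2"],
    "later1": ["seed"],
    "later2": ["seed", "ref1"],
}

def one_hop(seeds):
    """Papers one hop from the seeds: their references (backward)
    and the papers that cite them (forward)."""
    found = set()
    for s in seeds:
        found.update(cites.get(s, []))                              # references
        found.update(p for p, refs in cites.items() if s in refs)   # citers
    return found - set(seeds)

related = one_hop({"seed"})
```

Repeating `one_hop` on the newly found papers would extend the trail further through the scholarly network.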
Cost and workflow tradeoffs are explicit. “Find papers” is described as the cheapest workflow in credits, but adding more columns and searching across more papers increases credit usage. For downstream work, users can export results to CSV and BibTeX for reference managers such as Zotero. The workflow is designed to scale from quick discovery to structured extraction, with higher-accuracy column options available later via subscription features and follow-up capabilities.
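The export step can be sketched as serializing the results table to CSV and emitting minimal BibTeX entries for a reference manager such as Zotero. Field names below are illustrative, not Elicit's actual export schema.

```python
import csv
import io

# One row per paper (hypothetical record).
rows = [
    {"key": "smith2020", "title": "Crayfish invasions",
     "author": "Smith, J.", "year": "2020"},
]

# CSV export of the results table.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["key", "title", "author", "year"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

def to_bibtex(r):
    """Minimal @article entry for a reference manager."""
    return ("@article{%s,\n  title = {%s},\n  author = {%s},\n"
            "  year = {%s}\n}" % (r["key"], r["title"], r["author"], r["year"]))

bib_text = "\n\n".join(to_bibtex(r) for r in rows)
```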
Cornell Notes
Elicit’s “Find papers” workflow searches about 200 million academic papers using natural-language semantic search, so users don’t need complex keyword queries. It ranks results by relevance (not date or citations by default) and generates question-specific summaries from titles and abstracts. Users can filter results by date range, open-access PDF availability, study type (including review, meta-analysis, systematic review, randomized controlled trial, and longitudinal), and even by whether papers contain specific keywords. The workflow becomes more powerful with “columns,” which extract structured information from abstracts or full text (when available) and can be customized with detailed instructions. Citation trail searching then expands results by following references and future citations in the citation graph.
- How does Elicit find relevant papers without relying on keyword overlap?
- What does the results table provide, and how can users verify extracted information?
- What are the main ways users can narrow or refine search results?
- How do “columns” change the workflow from reading to structured research?
- What is citation trail searching, and when is it useful?
- How do credits and exports fit into the workflow?
Review Questions
- When would you prefer citation trail searching over semantic search alone?
- What filters would you use to focus on open-access full text and specific study types?
- How can custom columns help answer a research question more directly than reading abstracts one by one?
Key Points
1. Elicit’s “Find papers” searches roughly 200 million papers using semantic search based on meaning, not keyword overlap.
2. Default ranking emphasizes relevance using titles and abstracts, with quick, question-specific abstract summaries generated on the fly.
3. Users can refine results with sorting and filters for date range, open-access PDFs, study type, and keyword inclusion/exclusion.
4. The columns feature turns paper lists into structured datasets by extracting entities and attributes from abstracts or full text.
5. Custom columns let researchers specify exactly what information they need (e.g., invasive species studied), with traceable quote-level evidence.
6. Citation trail searching expands results by following both references and future citations in the citation graph.
7. Credit costs rise with more columns and broader searches, while exports to CSV and BibTeX support downstream workflows in tools like Zotero.