
Use Elicit to find hundreds of relevant papers

Elicit · 5 min read

Based on Elicit's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use a single query with repeated “load more” to quickly accumulate a large candidate set, then delete clearly irrelevant papers and prioritize the rest.

Briefing

Elicit is positioned as a fast way to generate and manage an “insane number” of relevant academic papers—so researchers can avoid missing key studies when building a literature base. The core workflow starts with running a query, reviewing the top results, and repeatedly clicking “load more” to expand the list until the search reaches diminishing returns. In a live example using sperm-count research, a single query like “sperm count measures” quickly produces dozens of papers, and the user can keep extending the results list, delete off-topic items, then prioritize what remains.

The next lever is running multiple independent queries inside the same project. Instead of relying on one search term, Elicit supports re-framing the same topic with different filters embedded directly in the query—such as restricting to papers published after 2000 or switching from “sperm count measures” to “sperm density measurement.” Each query acts like a separate search “tab,” but all results can be analyzed together later. This matters because different phrasings and inclusion criteria surface different subsets of the literature; once one query slows down, the workflow encourages starting a new one rather than forcing more value out of the same search.

Citation searching adds a second, compounding pathway for discovery. After selecting a promising paper, Elicit can surface “like these” papers by traversing the citation graph in both directions: backward through references and forward through later citations. The ranking is tied to relevance to the original query, making it easier to expand from a small set of strong candidates into a broader set of related work. The same mechanism can be tuned for specific research needs. For example, if a user finds a meta-analysis and wants the underlying evidence, filters can be applied to pull out primary studies mentioned in that review—such as limiting to RCTs (randomized controlled trials). If the goal is to see what happened after a particular paper (e.g., whether it was challenged or extended), date filters can restrict results to later citations.

Once enough papers accumulate—72 in the example—the workflow shifts from discovery to organization. Elicit can combine papers from all queries into a single table, enabling systematic screening against inclusion and exclusion criteria. The transcript also highlights a strategy for managing the “research explosion”: generate many candidate search terms first (even while results are loading), then do a second pass to refine queries based on what actually appears. Every search attempt is saved in the notebook history, allowing researchers to return later and continue expanding.

Finally, the tool is framed as a supplement rather than a replacement for established search methods, especially for formal systematic reviews. The practical promise is coverage: Elicit helps surface papers traditional searches may miss, then supports filtering and citation-graph expansion to reach a more comprehensive literature set—without getting lost in the volume.

Cornell Notes

Elicit helps researchers build a large, comprehensive paper set by combining three tactics: repeated “load more” within a query, multiple independent queries with different filters, and citation-graph expansion. In the sperm-count example, one query yields dozens of papers, while additional queries (e.g., different measurement terms and post-2000 filters) add different subsets. Selecting a key paper enables “like these” searches that traverse both references and later citations, and filters can extract primary studies from reviews (e.g., RCTs) or focus on work after a specific publication year. After collecting many papers, Elicit can merge them into one table for systematic screening using inclusion/exclusion criteria. This matters because it reduces the risk of missing relevant literature while keeping the workflow manageable.

How does Elicit expand coverage from a single search term without losing control?

Start with one query (e.g., “sperm count measures”), review the top results, then repeatedly click “load more” to pull in a much larger set. As the list grows, delete items that don’t fit and prioritize the remaining papers. The transcript emphasizes that this can quickly reach dozens of papers from one query, and the process can continue until returns diminish.

Why run multiple independent queries instead of refining one query endlessly?

Different framings and filters surface different papers. The workflow treats each query as independent (like opening separate search tabs) while keeping results in one project. For example, switching from “sperm count measures” to “sperm density measurement” and adding a publication-date constraint (published after 2000) produces a different set of papers, which helps avoid blind spots.
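The project-level behavior described above, where several independent queries feed one shared pool, amounts to a union of result sets de-duplicated by paper identity. This is an illustrative sketch of that logic, not Elicit's actual API; the record fields are assumptions:

```python
def merge_query_results(result_sets):
    """Union paper records from multiple independent queries,
    de-duplicated by paper id (first occurrence wins)."""
    seen = {}
    for results in result_sets:
        for paper in results:
            seen.setdefault(paper["id"], paper)
    return list(seen.values())

# Two hypothetical queries with one overlapping paper.
query_a = [{"id": "p1", "title": "Sperm count measures"},
           {"id": "p2", "title": "Sperm density measurement"}]
query_b = [{"id": "p2", "title": "Sperm density measurement"},
           {"id": "p3", "title": "Post-2000 RCT on counts"}]

merged = merge_query_results([query_a, query_b])
# The overlap collapses: three unique papers remain.
```

The point of the dedup step is that overlapping queries are cheap: re-phrasings that mostly return the same papers add nothing to the pool, while each genuinely different framing contributes its unique subset.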

What does citation searching add beyond keyword queries?

Citation searching expands through the citation graph. After selecting a relevant paper, “show more like these” can pull in papers that cite it later (forward citations) and papers it cites (references). This supports snowballing in both directions and can rank results by relevance to the original query, making it easier to find additional studies related to a strong starting point.
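Bidirectional snowballing of this kind is essentially a breadth-first traversal of the citation graph, following both a paper's references and its later citations. A minimal sketch, assuming a simple adjacency-map representation (none of these names come from Elicit):

```python
from collections import deque

def snowball(graph, seeds, depth=1):
    """Breadth-first expansion over a citation graph in both directions.

    graph maps paper -> {"refs": [...], "cited_by": [...]}.
    Returns every paper within `depth` hops of the seed papers.
    """
    found = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        paper, d = frontier.popleft()
        if d == depth:
            continue  # don't expand beyond the hop limit
        node = graph.get(paper, {})
        # Backward (references) and forward (later citations) together.
        for neighbor in node.get("refs", []) + node.get("cited_by", []):
            if neighbor not in found:
                found.add(neighbor)
                frontier.append((neighbor, d + 1))
    return found

# Hypothetical graph: a meta-analysis citing two RCTs, cited by a later review.
graph = {
    "meta2020": {"refs": ["rct2015", "rct2018"], "cited_by": ["review2023"]},
}
related = snowball(graph, ["meta2020"])
```

A real system would additionally rank the expanded set by relevance to the original query, as the transcript describes; the traversal above only determines which papers are candidates.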

How can Elicit extract primary studies from a review or meta-analysis?

Pick a review or meta-analysis, then apply filters in combination with “show more like these.” The transcript describes limiting by study type: starting from the meta-analysis and filtering to RCTs, so the system surfaces the randomized controlled trials referenced inside that review.

How can researchers focus on what happened after a specific paper?

Use date filters tied to the selected paper’s publication year to restrict results to later citations. The example notes that if a paper was published in 2022, filtering to later dates and limiting to later citations helps identify subsequent discussion, extensions, or potential refutations.

What’s the workflow after collecting a large corpus of papers?

Combine all collected papers across queries into a single table, then screen systematically using inclusion/exclusion criteria. In the example, 72 papers are consolidated so the researcher can extract data, delete irrelevant items, and narrow down to the most relevant set. The transcript also suggests a two-pass strategy: generate many search terms first, then refine based on what results actually appear.
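The screening step in the consolidated table reduces to applying inclusion and exclusion predicates to every record. A sketch of that logic under assumed field names (not Elicit's data model):

```python
def screen(papers, include, exclude):
    """Keep papers that pass every inclusion rule and no exclusion rule."""
    kept = []
    for paper in papers:
        if all(rule(paper) for rule in include) and \
           not any(rule(paper) for rule in exclude):
            kept.append(paper)
    return kept

# Hypothetical consolidated table as a list of records.
papers = [
    {"title": "RCT on count methods", "year": 2021, "study_type": "RCT"},
    {"title": "Older survey",         "year": 1995, "study_type": "survey"},
    {"title": "Recent RCT",           "year": 2023, "study_type": "RCT"},
]

# Example criteria: post-2000 RCTs, excluding surveys.
include = [lambda p: p["year"] >= 2000,
           lambda p: p["study_type"] == "RCT"]
exclude = [lambda p: "survey" in p["title"].lower()]

shortlist = screen(papers, include, exclude)
# Only the two post-2000 RCTs survive screening.
```

Writing the criteria as explicit predicates mirrors the systematic-review practice the transcript points at: every kept or discarded paper can be traced to a named rule rather than an ad-hoc judgment.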

Review Questions

  1. When should a researcher switch from adding more results in one query to starting a new query with different filters?
  2. Describe how citation-graph traversal (forward and backward) can help find both primary studies and later work related to a key paper.
  3. What combination of steps would you use to pull RCTs from a meta-analysis and then narrow further by publication date?

Key Points

  1. Use a single query with repeated “load more” to quickly accumulate a large candidate set, then delete clearly irrelevant papers and prioritize the rest.

  2. Run multiple independent queries with different embedded filters (e.g., measurement wording and publication-date constraints) to surface different subsets of the literature.

  3. Select strong candidate papers and use “show more like these” to traverse references and later citations, expanding coverage beyond keywords.

  4. Combine citation searching with filters to extract primary studies from reviews/meta-analyses, such as limiting to RCTs.

  5. Use date-based filters on later citations to focus on work published after a specific paper and assess how it was followed up or challenged.

  6. Merge papers from all queries into one table to apply inclusion/exclusion criteria systematically and manage screening at scale.

  7. Treat Elicit as a supplement to established search methods for systematic reviews, aiming for greater comprehensiveness rather than full replacement.

Highlights

A single keyword query can generate dozens of papers quickly by repeatedly clicking “load more,” then cleaning and prioritizing the list.
Each new query is independent (like separate search tabs) but results can be analyzed together in one project, improving coverage.
Citation searching can snowball both backward through references and forward through later citations, ranked by relevance.
Filters can turn a review/meta-analysis into a pathway for extracting underlying primary studies, such as RCTs.
After collecting many papers, consolidating them into one table enables systematic screening using inclusion and exclusion criteria.
