
New Elicit Beta for Systematic Review and Meta-Analysis || Ai Literature Review || Hindi || 2023

eSupport for Research · 5 min read

Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Elicit’s beta is designed to accelerate systematic review and meta-analysis by extracting structured data from uploaded PDFs.

Briefing

Elicit’s newly released beta is positioned as a faster, more systematic way to run literature reviews and meta-analysis workflows—especially when the goal is to extract structured data from many PDFs and then turn those extracts into usable summaries. The core pitch is straightforward: upload up to 100 papers (PDFs), let the system generate structured outputs, and then download results in formats that fit downstream analysis—reducing the manual grind of reading, extracting, and reformatting findings across studies.
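
Elicit itself runs in the browser, so no code is required, but the 100-paper limit suggests batching larger corpora before upload. A minimal sketch in Python, assuming a local folder of PDFs (the folder name and batch-size constant are illustrative, not part of Elicit):

```python
# Sketch only: split a local PDF collection into upload batches of at most
# 100 files, matching the limit described in the walkthrough. The folder
# path "papers" is a hypothetical example.
from pathlib import Path

BATCH_SIZE = 100  # upload limit mentioned above

pdfs = sorted(Path("papers").glob("*.pdf"))
batches = [pdfs[i:i + BATCH_SIZE] for i in range(0, len(pdfs), BATCH_SIZE)]

for n, batch in enumerate(batches, start=1):
    print(f"Batch {n}: {len(batch)} PDFs")
```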

In the walkthrough, the beta is accessed through a dedicated website (noted as “elicit…dot…in”), with login handled via Google sign-in or an email-based account. Once signed in, the interface presents three main actions, with the most emphasized being PDF-driven extraction. The demo shows uploading a small batch first (five papers), then scaling the same workflow up to larger sets. After upload, the system produces a structured “final answer” view tied to the uploaded documents. From there, users can select what to include—such as intervention and outcome measures—and remove fields that don’t apply to their review. The user can also adjust the level of detail in the generated summaries (including options described as “more” or “full” output), and verify that the extracted fields align with what’s actually in the papers.

A key practical feature is the ability to download the extraction results. The walkthrough references downloading outputs in spreadsheet-friendly formats (including CSV and Excel-style options), and then “organizing” the data into tables for analysis. The emphasis is on keeping the workflow consistent: instead of reading papers one by one and manually copying results into a spreadsheet, the platform aims to produce a structured dataset directly from the PDFs, ready for synthesis.
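
Once the CSV is downloaded, the "organizing" step can happen in any spreadsheet tool or in code. A minimal pandas sketch, assuming an export named elicit_export.csv with columns like Title, Intervention, and Outcome measures (the file name and headers are assumptions; check the actual export before adapting this):

```python
# Sketch only: load a hypothetical CSV export and keep the fields selected
# for the review, mirroring the in-app field-removal step.
import pandas as pd

df = pd.read_csv("elicit_export.csv")  # hypothetical export file name

keep = ["Title", "Intervention", "Outcome measures"]  # assumed headers
review = df[[c for c in keep if c in df.columns]]

review.to_excel("review_table.xlsx", index=False)  # needs openpyxl installed
print(review.head())
```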

Beyond extraction, the beta adds concept-level support through a “Discover” function that generates research concepts across papers. The demo uses a health-related example around sleep disorders and cardiovascular conditions, where a user supplies a broad concept prompt and the system generates related sub-concepts (e.g., links involving inflammation, immune function, and other indirect pathways). The output includes relevance signals and hyperlinks back to source material, with a noted behavior of removing duplicates so the evidence list stays cleaner. The concept view is framed as a way to check whether an idea has scientific grounding and to surface candidate papers for deeper exploration.
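
The duplicate removal is handled by the platform, but the same cleanup is easy to reproduce on a downloaded evidence list. A small sketch, assuming each row carries a DOI or URL identifying its source (column names and values are hypothetical):

```python
# Sketch only: drop evidence rows that point at the same source paper,
# mirroring the de-duplication behavior described above.
import pandas as pd

evidence = pd.DataFrame({
    "concept": ["inflammation", "inflammation", "immune function"],
    "doi": ["10.1000/a1", "10.1000/a1", "10.1000/b2"],  # made-up DOIs
})

deduped = evidence.drop_duplicates(subset="doi")
print(deduped)
```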

The workflow also includes query-based research assistance: users can run targeted searches (e.g., focusing on a specific signal or classification), then review “summary of top” answers and add them into the extraction workflow. The demo concludes with a comparative note—encouraging users to test the beta against other tools they already use, since different systems have different strengths—while stressing that the beta’s structured extraction and concept generation can help improve the speed and quality of literature review work.

Cornell Notes

Elicit’s new beta streamlines systematic reviews by turning uploaded PDFs into structured, downloadable outputs. Users can upload up to 100 papers, select which fields matter (like intervention and outcome measures), remove irrelevant fields, and adjust how detailed the generated summaries should be. The beta also supports concept generation across papers, helping researchers test whether a broad idea (for example, links between sleep disorders and heart conditions) has scientific relevance and pointing to related sources. For targeted follow-ups, users can run query-style searches and incorporate the resulting summaries into their review workflow. This matters because it reduces manual extraction time and produces analysis-ready datasets faster.

How does the beta handle large-scale PDF extraction for systematic review workflows?

It supports uploading many PDFs at once—up to 100 papers in the described workflow. After upload, it generates structured outputs (“final answer” style) that can be reviewed and adjusted. Users can select which extracted fields to keep (e.g., intervention and outcome measures) and remove fields that don’t apply. The demo also shows a verification step: checking that the extracted information matches what’s in the papers before downloading results.
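
For that verification step, one lightweight audit is to sample a few extracted rows and check them against the PDFs by hand. A sketch under the same assumptions as before (file and column names are illustrative):

```python
# Sketch only: draw a reproducible random sample of extracted rows for a
# manual check against the source PDFs.
import pandas as pd

df = pd.read_csv("elicit_export.csv")  # hypothetical export file name

sample = df.sample(n=min(5, len(df)), random_state=42)  # fixed seed

for _, row in sample.iterrows():
    print(f"Verify '{row['Title']}': intervention = {row['Intervention']!r}")
```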

What does “structured output” mean in practice, and how do users tailor it?

Structured output appears as a set of extracted fields that can be edited through selection controls. In the walkthrough, the user removes the intervention field when it doesn’t apply and drops other fields, such as duration, that aren’t needed. There’s also a control for output depth (described as generating “more/full” details), letting the user trade brevity for completeness depending on the review stage.

Why is downloading the results emphasized, and what formats are mentioned?

Downloading is treated as the bridge from extraction to analysis. The demo references exporting results into spreadsheet-friendly formats, including CSV and Excel-style downloads. After export, the user can “organize” the data into tables for synthesis, rather than manually copying findings from individual PDFs.
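
As one example of that organizing step, the exported rows can be rolled up into a simple synthesis table, e.g. counting studies per intervention/outcome pair (again, the file name and column headers are assumptions about the export):

```python
# Sketch only: turn the flat export into a small synthesis table that
# counts studies for each intervention/outcome combination.
import pandas as pd

df = pd.read_csv("elicit_export.csv")  # hypothetical export file name

summary = (
    df.groupby(["Intervention", "Outcome measures"])  # assumed headers
      .size()
      .reset_index(name="n_studies")
      .sort_values("n_studies", ascending=False)
)
print(summary)
```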

How does the “Discover concept across papers” feature work, and what’s the benefit?

Users provide a broad concept prompt, and the system generates related sub-concepts and evidence links derived from papers. The example prompt connects sleep disorders to heart conditions, and the output includes indirect pathways such as inflammation and immune function. The benefit is rapid idea validation: researchers can check scientific relevance and then click through to source material for deeper reading.

What role do targeted queries play after concept generation?

After generating or refining a concept, users can run more specific queries (described as “similar query” and classification-focused searches). The system returns summaries of top answers and a final answer view, which can then be added into the extraction workflow. This supports an iterative process: concept → targeted evidence → structured extraction.
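
That loop maps naturally onto a growing dataset: each round of targeted queries yields new candidate papers to append to the extraction table. A sketch, assuming two CSV exports and a Title column to de-duplicate on (all names hypothetical):

```python
# Sketch only: merge newly retrieved papers into the existing extraction
# table without re-extracting papers already present.
import pandas as pd

existing = pd.read_csv("elicit_export.csv")   # hypothetical earlier export
new_hits = pd.read_csv("query_results.csv")   # hypothetical query export

combined = (
    pd.concat([existing, new_hits], ignore_index=True)
      .drop_duplicates(subset="Title")
)
combined.to_csv("extraction_combined.csv", index=False)
```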

What caution is raised about comparing tools?

The walkthrough encourages users to compare the beta with other tools they already use, because each system has a different “flavor” and may perform better or worse on certain tasks. The demo also notes that extraction quality may vary on first use, so users should test and adjust rather than assume every output will be perfect immediately.

Review Questions

  1. When extracting from PDFs, what steps allow a reviewer to keep only relevant fields and remove non-applicable ones?
  2. How does concept generation differ from PDF extraction in the beta’s workflow?
  3. What iterative loop does the demo suggest between concept prompts, targeted queries, and adding results into the extraction dataset?

Key Points

  1. Elicit’s beta is designed to accelerate systematic review and meta-analysis by extracting structured data from uploaded PDFs.
  2. The platform supports uploading up to 100 papers and then generating structured outputs that can be reviewed and refined.
  3. Users can tailor extraction by selecting relevant fields (like intervention and outcomes) and removing fields that don’t apply to their review question.
  4. Generated results can be downloaded in spreadsheet-friendly formats (including CSV and Excel-style options) for downstream analysis.
  5. A “Discover” feature generates research concepts across papers and provides relevance-linked sources, helping validate whether an idea has scientific grounding.
  6. Query-based searches allow more targeted evidence retrieval, with summaries that can be added back into the extraction workflow.
  7. Comparing extraction quality against other tools is recommended because different systems may perform differently on the same task.

Highlights

Uploading up to 100 PDFs and generating structured, review-ready outputs is the beta’s central workflow promise.
Extraction can be customized field-by-field—irrelevant elements like intervention or duration can be removed before export.
Concept generation helps test whether a broad hypothesis (e.g., sleep disorders and heart conditions) is supported by related evidence across papers.
The beta emphasizes export to CSV/Excel-style formats so extracted findings become analysis-ready datasets quickly.
