
Elicit for Health Economics & Outcomes Research

Elicit · 5 min read

Based on Elicit's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

HEOR inputs can be accelerated by systematically searching for relevant studies and extracting standardized burden and outcome metrics across papers.

Briefing

Health economics and outcomes research often hinges on turning scattered clinical evidence into usable estimates of burden, costs, and intervention impact. A practical workflow demonstrated here uses Elicit to search for relevant studies, extract standardized metrics across papers, and sanity-check extracted numbers—aiming to speed up inputs needed for cost-benefit or ROI-style analyses in areas like drug pricing, payer decisions, and policy budget allocation.

The session frames HEOR as an impact-assessment problem: quantify how many people face a condition, how severe the outcomes are (including quality-of-life losses), and what medical interventions change in terms of both direct costs and downstream complications. The example query targets a concrete question—“the frequency of biopsies and complications among lung cancer patients in the US”—to illustrate how Elicit can approximate market need and clinical burden by aggregating evidence from multiple studies.

Elicit’s “Find papers” workflow starts by searching a database of public papers and selecting a small set (eight) of studies most relevant to the query. To keep the evidence aligned with a specific decision context, the workflow adds predefined columns such as Region (to focus on the US), Data set (to gauge sample sizes and data sources), and Methodology (to identify study designs). The results skew toward retrospective studies, with prospective studies treated as generally more compelling and small case studies treated more cautiously.

Next comes the extraction step: Elicit pulls quantitative details like the number of patients, the number of biopsies, and the frequency of complications. Each extracted figure is tied to supporting quotes from the paper, enabling quick verification. The walkthrough highlights how default extraction can occasionally mishandle relationships between related quantities (for example, an average biopsies-per-patient figure that appears inconsistent with the total number of biopsies and the number of patients). High accuracy mode is then used as a targeted upgrade: it relies on more advanced (and more expensive) models, returns answers in a more structured format (often bullet points), and better handles table-derived values and arithmetic consistency checks.
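The arithmetic sanity check described above can be sketched as a small helper. The field names, example numbers, and tolerance below are illustrative assumptions, not values from the video or Elicit's actual output:

```python
def check_biopsy_consistency(n_patients, n_biopsies, reported_avg, tol=0.05):
    """Flag a paper when total biopsies / patients diverges from the
    reported average biopsies-per-patient by more than `tol` (relative)."""
    implied_avg = n_biopsies / n_patients
    relative_error = abs(implied_avg - reported_avg) / reported_avg
    return relative_error <= tol, implied_avg

# Consistent example: 15,600 biopsies across 12,000 patients implies 1.3 each.
ok, implied = check_biopsy_consistency(n_patients=12_000,
                                       n_biopsies=15_600,
                                       reported_avg=1.3)
print(ok, round(implied, 2))  # True 1.3
```

A check like this is cheap to run over every extracted row, turning the manual quote-by-quote verification into a first-pass automated screen.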

The example also shows how uncertainty is surfaced via confidence flags; when Elicit is not confident, users are encouraged to double-check the underlying text. Another key constraint is access to full text: if a paper is open access, Elicit can extract from the full text; otherwise, it relies on the abstract, which can omit critical details. For deeper extraction from non-open-access PDFs, the workflow points to an “extract data from PDFs” approach.

Finally, the session demonstrates how to scale beyond a handful of studies by creating custom columns—such as a “study type” classifier—and filtering papers based on methodology keywords (e.g., retrospective vs. prospective). The takeaway is a repeatable HEOR research pipeline: search systematically, extract standardized burden and procedure/complication metrics with traceable citations, validate with high accuracy mode when stakes are high, and export results (CSV) for downstream analysis.
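The keyword-filtering and CSV-export steps could be approximated outside Elicit roughly as follows; the row structure, `methodology` field, and paper titles are hypothetical stand-ins for the exported columns:

```python
import csv

# Hypothetical extracted rows, mirroring a custom "study type" column.
papers = [
    {"title": "Cohort A", "methodology": "retrospective cohort review", "patients": 12000},
    {"title": "Trial B", "methodology": "prospective multicenter study", "patients": 850},
    {"title": "Case C", "methodology": "case report", "patients": 3},
]

def filter_by_design(rows, keyword):
    """Keep rows whose methodology text mentions the keyword (case-insensitive)."""
    return [r for r in rows if keyword.lower() in r["methodology"].lower()]

prospective = filter_by_design(papers, "prospective")

# Export the filtered evidence set for downstream analysis.
with open("evidence.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "methodology", "patients"])
    writer.writeheader()
    writer.writerows(prospective)
```

Simple substring matching on a methodology column is the same idea as the keyword filters shown in the session: it scales the triage step from eight papers to hundreds without changing the workflow.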

Cornell Notes

The workflow demonstrates how Elicit can support health economics and outcomes research by aggregating evidence across multiple studies and extracting decision-ready metrics. Using a lung cancer example, it searches for US-focused papers, filters by region and methodology, and extracts quantitative values such as number of patients, number of biopsies, and complication frequencies. Extracted numbers come with quotes for verification, and confidence flags highlight where users should double-check. High accuracy mode improves reliability—especially for table-derived values and arithmetic consistency—at higher cost. The process also distinguishes between open-access papers (full-text extraction) and paywalled/non-open-access papers (abstract-only extraction), with an alternative PDF extraction route when full text is available.

How does the workflow turn scattered clinical literature into inputs for HEOR analyses?

It starts with a targeted literature search (“Find papers”) for a specific HEOR question, then extracts standardized quantitative fields across multiple studies. In the lung cancer example, it pulls counts and rates needed for burden and impact estimates—such as the number of patients, average biopsies per patient, total biopsies, and complication frequency—so those figures can feed cost-benefit or ROI-style calculations.
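As a rough illustration of how those extracted counts feed an impact estimate, a back-of-envelope burden calculation might look like this. All numbers are placeholders, not figures from the video:

```python
def expected_complications(n_patients, avg_biopsies_per_patient, complication_rate):
    """Expected complication events = patients x biopsies/patient x rate."""
    return n_patients * avg_biopsies_per_patient * complication_rate

# Placeholder inputs: 10,000 patients, 1.3 biopsies each, 19% complication rate.
events = expected_complications(10_000, 1.3, 0.19)
print(round(events))  # 2470
```

Estimates like this are the raw inputs that cost-benefit or ROI models then multiply by per-event costs and quality-of-life weights.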

Why add predefined columns like Region, Data set, and Methodology before extracting numbers?

Those columns align evidence with the decision context and help assess study quality. Region filters to studies conducted in the US for a US-focused analysis. Data set provides a quick sense of scale and credibility (including examples with tens of thousands of patients). Methodology flags study design (often retrospective in the example), which matters because prospective studies are generally treated as more compelling than retrospective reviews or small case studies.

What role does verification play in the extraction workflow?

Verification is built in: each extracted value is linked to relevant quotes from the source paper. That lets users quickly spot inconsistencies—such as an average biopsies-per-patient figure that seems incompatible with the total biopsies and patient count—before trusting the numbers for downstream modeling.

When should high accuracy mode be used, and what benefits does it bring?

High accuracy mode is recommended when accuracy is critical and cost is less of a constraint, such as formal research projects at a pharma company or hospital. It uses more advanced models (higher credit cost), returns more structured answers (often bullet points), obeys instructions more reliably, and is especially helpful for extracting data from tables and for catching arithmetic or table-related errors.

How does Elicit handle uncertainty and potential extraction errors?

It marks low-confidence outputs with error/uncertainty indicators. The workflow suggests treating these as prompts to double-check the underlying text. It also notes that mismatches can occur when related quantities are extracted incorrectly, so users should validate consistency across extracted fields (e.g., total biopsies vs. average biopsies per patient vs. number of patients).

What changes when full text isn’t available?

If a paper is open access, Elicit can extract from the full text; otherwise, it relies on the abstract, which may omit key details. The workflow emphasizes this distinction and points to uploading PDFs for “extract data from PDFs” when full text is available, enabling extraction beyond what appears in the abstract.

Review Questions

  1. In the lung cancer example, what specific extracted quantities were used to estimate burden and complications, and how were they validated?
  2. What differences between default extraction and high accuracy mode affect reliability, especially for table-derived values?
  3. How do Region and Methodology filters change the quality and relevance of the evidence set for a HEOR question?

Key Points

  1. HEOR inputs can be accelerated by systematically searching for relevant studies and extracting standardized burden and outcome metrics across papers.

  2. Region, Data set, and Methodology filters help ensure the evidence matches the decision context (e.g., US-only) and supports quality assessment.

  3. Extraction outputs should be treated as provisional until verified via the linked quotes from the source paper.

  4. High accuracy mode improves reliability for instruction-following, structured outputs, table-derived values, and arithmetic consistency, at higher cost.

  5. Confidence flags and uncertainty indicators should trigger targeted re-checking of the underlying text before using figures in models.

  6. Full-text extraction is possible for open-access papers, while non-open-access papers typically limit extraction to abstracts unless PDFs are uploaded.

  7. Custom columns and keyword-based filtering enable scaling from a small set of studies to larger evidence pools while controlling for study design.

Highlights

Elicit’s extraction includes traceable quotes, making it practical to catch inconsistencies between related metrics like total biopsies, patient counts, and average biopsies per patient.
High accuracy mode is positioned as the go-to option when accuracy matters—particularly for table extraction and when arithmetic relationships need to hold.
The workflow emphasizes a key constraint: abstract-only extraction for non-open-access papers, with full-text extraction available only when open access or PDFs are provided.
Confidence flags act as a built-in quality control signal, nudging users to double-check uncertain numbers.
