How to extract, analyze and present data in scoping reviews

6 min read

Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

Choose scoping reviews for mapping and characterization goals (concepts, definitions, study characteristics, and knowledge gaps), not for outcome-focused practice decisions that require certainty about effects.

Briefing

Scoping reviews are a rigorous way to systematically map the breadth of evidence on a topic—often across primary studies, reviews, and gray literature—so teams can clarify concepts, identify what evidence exists, and surface gaps before committing to more outcome-focused synthesis. The core practical takeaway is that scoping reviews should be chosen for the right purpose: when the goal is to understand feasibility, appropriateness, meaningfulness, or effectiveness for practice decisions, a systematic review is usually the better fit; when the goal is to characterize concepts, definitions, study characteristics, or knowledge gaps, scoping review methods are the appropriate tool.

A key distinction from systematic reviews is the handling of critical appraisal and risk of bias. In most scoping reviews, critical appraisal is not required because the aim is not to support clinical practice change; the evidence is being mapped rather than used to judge certainty for decision-making. The guidance emphasizes that scoping reviews can still take years—protocol development, broad searching across multiple databases (and often gray literature), and heavy screening are time-intensive. What changes is what happens after studies are selected: extraction, analysis, and presentation are designed to summarize and organize the available literature rather than to compute pooled effects.

The process is framed around JBI’s nine-step approach, with the first three steps focused on building a protocol as a “recipe card” for transparent, pre-specified decisions. From there, extraction becomes the central bottleneck. With hundreds of included articles, teams commonly face “extraction marathons”: the excitement that follows screening gives way to fatigue as the volume becomes overwhelming. The guidance stresses that these reactions are normal and that strong teamwork, plus careful planning, reduces errors.

Extraction should be limited to what is relevant to the review objectives. Over-extracting is a common failure mode, especially when teams try to capture everything “just in case.” Piloting extraction forms is treated as essential: teams should test how long extraction takes, whether the form misses important items, and whether the form collects too much irrelevant detail. Extraction by at least two reviewers, resolving differences by consensus, is recommended (or, where that is not feasible, a single extractor plus a checking reviewer), because human error and inconsistent data entry can distort the dataset.
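
As a concrete illustration of the dual-reviewer check, here is a minimal Python sketch (using pandas, with made-up column names and study identifiers, not a schema from the video) that compares two reviewers' extraction sheets cell by cell and surfaces every disagreement for consensus discussion:

```python
import pandas as pd

# Two independent extraction sheets; columns are illustrative placeholders
# that would, in practice, mirror the protocol objectives.
reviewer_a = pd.DataFrame({
    "study_id": ["S01", "S02", "S03"],
    "country":  ["Ireland", "Canada", "Canada"],
    "design":   ["RCT", "cohort", "qualitative"],
}).set_index("study_id")

reviewer_b = pd.DataFrame({
    "study_id": ["S01", "S02", "S03"],
    "country":  ["Ireland", "Canada", "Australia"],  # disagreement on S03
    "design":   ["RCT", "cohort", "qualitative"],
}).set_index("study_id")

# Each row of the result is one cell-level disagreement to resolve by consensus.
disagreements = reviewer_a.compare(reviewer_b)
print(disagreements)
```

Running the same comparison on a handful of pilot studies is also a quick way to see whether the form itself invites inconsistent entry.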

Where scoping reviews differ most from systematic reviews is in what gets extracted. Instead of extracting effect results for meta-analysis, scoping reviews often extract study characteristics and reporting details from across the paper—title, keywords, abstract, introduction, discussion, and limitations—plus outcomes and measurement tools when those are needed to map the literature. If the research question truly requires pooled effects, the question should be reframed toward a systematic review.

For analysis, scoping reviews rely primarily on descriptive statistics (frequencies and percentages) and, when needed, basic qualitative content analysis to categorize barriers, enablers, or other conceptual themes. More interpretive approaches like thematic synthesis or meta-ethnography are flagged as typically beyond scoping review scope. The guidance also highlights inductive versus deductive analysis: deductive work extracts into an existing framework (e.g., a reporting checklist), while inductive work builds categories through open coding and immersion in the data.
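
Because the analysis is mostly counting, it scripts easily. The sketch below, assuming a hypothetical table of extracted study characteristics, produces the frequency-and-percentage summaries that typically fill a scoping review's tables:

```python
import pandas as pd

# Hypothetical extraction output: one row per included study.
studies = pd.DataFrame({
    "design": ["RCT", "cohort", "RCT", "qualitative", "cohort", "RCT"],
})

# Frequencies and percentages of each study design in the included set.
counts = studies["design"].value_counts()
summary = pd.DataFrame({
    "n": counts,
    "%": (counts / counts.sum() * 100).round(1),
})
print(summary)
#              n     %
# RCT          3  50.0
# cohort       2  33.3
# qualitative  1  16.7
```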

Finally, presentation should go beyond tables to visualization that helps readers interpret large evidence maps quickly—pie charts, bubble charts, tree maps, and other graphics. Dissemination matters too: plain-language summaries and infographics (including decision trees and “big picture” review comparisons) can make scoping review outputs more usable. Reporting should follow both conduct guidance and PRISMA-ScR reporting standards, and researchers are encouraged to connect through the scoping review network for ongoing support and community learning.
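
For instance, a bubble chart of an evidence map takes only a few lines of matplotlib; the data here are invented, with bubble area encoding how many studies fall into each year-by-design cell:

```python
import matplotlib.pyplot as plt

# Invented evidence map: number of studies per (year, design) cell.
years   = [2018, 2019, 2020, 2020, 2021]
designs = [0, 1, 0, 2, 1]             # 0=RCT, 1=cohort, 2=qualitative
counts  = [4, 7, 12, 3, 9]            # studies in each cell

fig, ax = plt.subplots()
ax.scatter(years, designs, s=[c * 40 for c in counts], alpha=0.5)
ax.set_yticks([0, 1, 2])
ax.set_yticklabels(["RCT", "cohort", "qualitative"])
ax.set_xlabel("Publication year")
ax.set_title("Evidence map (bubble area = number of studies)")
plt.show()
```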

Cornell Notes

Scoping reviews systematically map the breadth of evidence on a topic—across primary research, reviews, and gray literature—to clarify concepts, identify study characteristics, and reveal knowledge gaps. They are chosen when the goal is characterization and mapping rather than answering outcome-focused questions that require clinical certainty; that’s why critical appraisal and risk-of-bias tools are usually not used. JBI’s nine-step approach emphasizes a detailed protocol, then broad searching and heavy screening, followed by extraction, descriptive analysis, and transparent presentation. Extraction is the main workload: teams should pilot forms, extract only what aligns with objectives, use at least two reviewers (or checking), and extract study characteristics/reporting details rather than effect sizes. Analysis typically uses descriptive statistics and, when needed, basic qualitative content analysis with careful inductive or deductive coding.

How do scoping reviews differ from systematic reviews in purpose, and what does that mean for critical appraisal?

Scoping reviews map what exists—concepts, definitions, study characteristics, and evidence gaps—often across many evidence types. Systematic reviews aim to support practice decisions and outcome-focused conclusions. Because scoping review findings are not used to drive clinical practice change, critical appraisal and risk-of-bias assessment are generally unnecessary in scoping reviews; the mapping goal doesn’t require judging certainty of effects. If a scoping review is trying to evaluate quality to inform practice, it may be better aligned with a systematic review approach.

Why can scoping reviews still take years, and where does the time shift compared with systematic reviews?

Time intensity doesn’t disappear with scoping reviews. Broad questions require searching across multiple databases and often gray literature, plus screening far more records. The time shift happens after study selection: systematic reviews spend heavily on effect extraction for meta-analysis, while scoping reviews spend heavily on extracting relevant study characteristics/reporting details and then organizing and visualizing that large dataset.

What does “extract only what is relevant” mean in practice?

Extraction should be driven by the protocol objectives. Teams should avoid collecting every possible variable “just in case.” Piloting helps determine whether the extraction form is too broad or missing key items. If an item doesn’t help answer the review question, it should not be extracted. The guidance also warns about late-stage regret—if something important is discovered midstream, teams should discuss immediately whether the extraction sheet needs updating rather than finishing and then redoing hundreds of extractions.

What kinds of information are typically extracted in scoping reviews if not effect results?

Scoping reviews often extract study characteristics and reporting information from across the paper, not only the results section. That can include title, keywords, abstract, introduction, discussion, and limitations. Depending on the question, teams may extract outcomes and measurement tools, but they usually do not extract effect sizes for pooling. If the question truly requires meta-analysis of effects, the work should be reframed toward a systematic review question.
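
To make the contrast tangible, here is a hypothetical extraction record as a Python dataclass; every field name is a placeholder to adapt to the protocol, and, tellingly, there is no field for effect sizes:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of a scoping-review extraction form (illustrative fields only)."""
    study_id: str
    title: str
    keywords: list[str] = field(default_factory=list)
    country: str = ""
    design: str = ""                    # e.g. "RCT", "cohort", "qualitative"
    concept_definitions: str = ""       # how the authors define key concepts
    outcomes_reported: list[str] = field(default_factory=list)
    measurement_tools: list[str] = field(default_factory=list)
    limitations_noted: str = ""         # often drawn from the discussion section
    # Deliberately absent: effect estimates. Needing them signals that the
    # question belongs in a systematic review, not a scoping review.
```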

What analysis methods fit scoping review scope?

Analysis is usually descriptive: frequency counts and percentages that summarize how often certain characteristics appear. For qualitative material, the guidance recommends basic qualitative content analysis—coding into categories such as barriers or enablers—without moving into deeper interpretive methods. More advanced approaches like thematic synthesis or meta-ethnography are flagged as typically beyond scoping review objectives.

How do inductive and deductive approaches show up during extraction and analysis?

Deductive analysis starts with an existing framework and extracts directly into it, then checks whether it captures what’s needed. Inductive analysis begins without a fully formed framework: immersion in the evidence, open coding, and category development create the coding framework, after which extraction proceeds into the newly developed categories. The guidance also notes that extraction and analysis can be intertwined, especially when building categories from the data.
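
A small sketch can show both modes side by side, assuming an invented framework of barrier categories: snippets that match the predefined (deductive) categories are slotted in by keyword, and anything left uncoded becomes the raw material for inductive, open-coded category building:

```python
# Invented deductive framework: category -> trigger keywords.
framework = {
    "staffing": ["staff", "workforce", "turnover"],
    "cost":     ["cost", "funding", "budget"],
    "training": ["training", "education", "skills"],
}

snippets = [
    "High staff turnover disrupted delivery.",
    "Budget cuts limited the programme.",
    "Clinicians felt the paperwork was excessive.",  # matches no category yet
]

coded, uncoded = {}, []
for text in snippets:
    lowered = text.lower()
    hits = [cat for cat, words in framework.items()
            if any(word in lowered for word in words)]
    for cat in hits:
        coded.setdefault(cat, []).append(text)
    if not hits:
        uncoded.append(text)

print(coded)    # deductive: snippets slotted into the existing framework
print(uncoded)  # inductive starting point: open-code these to grow the framework
```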

Review Questions

  1. When would a team choose a scoping review over a systematic review, and how should that choice affect whether risk-of-bias tools are used?
  2. What are the most common failure modes during scoping review extraction, and how do piloting and reviewer consensus mitigate them?
  3. If a scoping review question requires pooled effect estimates, what should be changed in the review framing?

Key Points

  1. Choose scoping reviews for mapping and characterization goals (concepts, definitions, study characteristics, and knowledge gaps), not for outcome-focused practice decisions that require certainty about effects.
  2. In most scoping reviews, critical appraisal and risk-of-bias assessment are not needed because the purpose is not to support clinical practice change.
  3. Scoping reviews can still take years due to broad searching, gray literature inclusion, and large screening workloads; the time burden shifts toward extraction and data organization.
  4. Extraction should be limited to items that directly support the protocol objectives; piloting the extraction form is essential to confirm relevance, completeness, and workload.
  5. Use at least two reviewers for extraction with consensus (or a checking reviewer if staffing is limited) to reduce human error and inconsistent data entry.
  6. Extract study characteristics and reporting details across sections of each paper; extract effect results only when the question truly requires meta-analysis, which typically signals a systematic review.
  7. Use descriptive statistics and, when needed, basic qualitative content analysis; reserve deeper interpretive methods (e.g., thematic synthesis) for systematic-review-level objectives.

Highlights

Scoping reviews map evidence breadth across primary studies, reviews, and gray literature, but they generally skip critical appraisal because the goal isn’t clinical practice change.
Extraction is the main marathon: with hundreds of included articles, teams must pilot forms, extract only what aligns with objectives, and use consensus or checking to prevent errors.
If the question requires pooled effect estimates, the work should be reframed toward a systematic review rather than forcing scoping review methods to do meta-analysis.
Visualization matters: scoping reviews can become “death by table,” so graphics like pie charts, bubble charts, and tree maps help readers interpret large evidence maps faster.
Dissemination is part of the job—plain-language summaries and infographics can make scoping review outputs more usable than a paper alone.

Topics

  • Scoping Review Definition
  • JBI Nine-Step Protocol
  • Data Extraction
  • Descriptive Analysis
  • Visualization and Reporting

Mentioned

  • JBI
  • PCC
  • RCTs
  • PRISMA-ScR