How to extract, analyze and present data in scoping reviews
Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Scoping reviews are a rigorous way to systematically map the breadth of evidence on a topic—often across primary studies, reviews, and gray literature—so teams can clarify concepts, identify what evidence exists, and surface gaps before committing to more outcome-focused synthesis. The core practical takeaway is that scoping reviews should be chosen for the right purpose: when the goal is to understand feasibility, appropriateness, meaningfulness, or effectiveness for practice decisions, a systematic review is usually the better fit; when the goal is to characterize concepts, definitions, study characteristics, or knowledge gaps, scoping review methods are the appropriate tool.
A key distinction from systematic reviews is the handling of critical appraisal and risk of bias. In most scoping reviews, critical appraisal is not required because the aim is not to support clinical practice change; the evidence is being mapped rather than judged for certainty to inform decision-making. The guidance emphasizes that scoping reviews can still take years—protocol development, broad searching across multiple databases (and often gray literature), and heavy screening are time-intensive. What changes is what happens after studies are selected: extraction, analysis, and presentation are designed to summarize and organize the available literature rather than to compute pooled effects.
The process is framed around JBI’s nine-step approach, with the first three steps focused on building a protocol as a “recipe card” for transparent, pre-specified decisions. From there, extraction becomes the central bottleneck. With hundreds of included articles, teams commonly experience “extraction marathons,” including periods of excitement after screening and later fatigue when the volume becomes overwhelming. The guidance stresses that these reactions are normal and that strong teamwork—plus careful planning—reduces errors.
Extraction should be limited to what is relevant to the review objectives. Over-extracting is a common failure mode, especially when teams try to capture everything “just in case.” Piloting extraction forms is treated as essential: teams should test how long extraction takes, whether the form misses important items, and whether the form collects too much irrelevant detail. At least two reviewers are recommended for extraction with consensus (or, if not feasible, a checking reviewer), because human error and inconsistent data entry can distort the dataset.
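A minimal sketch of what a piloted extraction form and a two-reviewer check might look like in code. The fields and example values are hypothetical illustrations, not a standard JBI template; the point is that a fixed form constrains extraction to pre-specified items and makes reviewer disagreements easy to surface for consensus.

```python
from dataclasses import dataclass, fields

# Hypothetical extraction form: field names are illustrative only and should
# mirror the items pre-specified in the review protocol.
@dataclass
class ExtractionRecord:
    study_id: str
    year: int
    country: str
    study_design: str
    concepts_reported: str  # free-text summary tied to the review objectives

def check_against(reviewer_a: ExtractionRecord, reviewer_b: ExtractionRecord) -> list:
    """Return field names where two reviewers' entries disagree,
    flagging items for consensus discussion."""
    return [f.name for f in fields(ExtractionRecord)
            if getattr(reviewer_a, f.name) != getattr(reviewer_b, f.name)]

# Two reviewers extract the same (made-up) study independently.
a = ExtractionRecord("S001", 2019, "Ireland", "cross-sectional", "definitions of frailty")
b = ExtractionRecord("S001", 2019, "ireland", "cross-sectional", "definitions of frailty")
print(check_against(a, b))  # the casing mismatch on 'country' is flagged
```

Piloting a form like this on a handful of papers quickly reveals missing items, irrelevant fields, and how long extraction actually takes per article.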
Where scoping reviews differ most from systematic reviews is in what gets extracted. Instead of extracting effect results for meta-analysis, scoping reviews often extract study characteristics and reporting details from across the paper—title, keywords, abstract, introduction, discussion, and limitations—plus outcomes and measurement tools when those are needed to map the literature. If the research question truly requires pooled effects, the question should be reframed toward a systematic review.
For analysis, scoping reviews rely primarily on descriptive statistics (frequencies and percentages) and, when needed, basic qualitative content analysis to categorize barriers, enablers, or other conceptual themes. More interpretive approaches like thematic synthesis or meta-ethnography are flagged as typically beyond scoping review scope. The guidance also highlights inductive versus deductive analysis: deductive work extracts into an existing framework (e.g., a reporting checklist), while inductive work builds categories through open coding and immersion in the data.
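The two analysis modes described above can be sketched with standard-library tools: descriptive frequencies and percentages over an extracted characteristic, plus a toy deductive coding pass that maps free-text statements onto a pre-specified framework. All data, categories, and keywords here are invented for illustration.

```python
from collections import Counter

# Hypothetical study designs extracted from an included-studies table.
designs = ["cross-sectional", "cohort", "cross-sectional", "qualitative",
           "cross-sectional", "cohort", "RCT"]

# Descriptive statistics: frequencies and percentages.
counts = Counter(designs)
total = len(designs)
for design, n in counts.most_common():
    print(f"{design}: {n} ({100 * n / total:.1f}%)")

# Deductive content analysis sketch: assign free-text barrier statements to
# categories from an existing framework (keyword mapping is illustrative only).
framework = {"cost": "economic", "staff": "workforce", "training": "workforce"}
statements = ["high cost of equipment", "lack of staff training"]
categories = [framework[k] for s in statements for k in framework if k in s]
print(Counter(categories))
```

An inductive approach would instead build the category list through open coding of the statements themselves rather than starting from `framework`.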
Finally, presentation should go beyond tables to visualization that helps readers interpret large evidence maps quickly—pie charts, bubble charts, tree maps, and other graphics. Dissemination matters too: plain-language summaries and infographics (including decision trees and “big picture” review comparisons) can make scoping review outputs more usable. Reporting should follow both conduct guidance and PRISMA-ScR reporting standards, and researchers are encouraged to connect through the scoping review network for ongoing support and community learning.
Cornell Notes
Scoping reviews systematically map the breadth of evidence on a topic—across primary research, reviews, and gray literature—to clarify concepts, identify study characteristics, and reveal knowledge gaps. They are chosen when the goal is characterization and mapping rather than answering outcome-focused questions that require clinical certainty; that’s why critical appraisal and risk-of-bias tools are usually not used. JBI’s nine-step approach emphasizes a detailed protocol, then broad searching and heavy screening, followed by extraction, descriptive analysis, and transparent presentation. Extraction is the main workload: teams should pilot forms, extract only what aligns with objectives, use at least two reviewers (or checking), and extract study characteristics/reporting details rather than effect sizes. Analysis typically uses descriptive statistics and, when needed, basic qualitative content analysis with careful inductive or deductive coding.
- How do scoping reviews differ from systematic reviews in purpose, and what does that mean for critical appraisal?
- Why can scoping reviews still take years, and where does the time shift compared with systematic reviews?
- What does "extract only what is relevant" mean in practice?
- What kinds of information are typically extracted in scoping reviews if not effect results?
- What analysis methods fit scoping review scope?
- How do inductive and deductive approaches show up during extraction and analysis?
Review Questions
- When would a team choose a scoping review over a systematic review, and how should that choice affect whether risk-of-bias tools are used?
- What are the most common failure modes during scoping review extraction, and how do piloting and reviewer consensus mitigate them?
- If a scoping review question requires pooled effect estimates, what should be changed in the review framing?
Key Points
1. Choose scoping reviews for mapping and characterization goals (concepts, definitions, study characteristics, and knowledge gaps), not for outcome-focused practice decisions that require certainty about effects.
2. In most scoping reviews, critical appraisal and risk-of-bias assessment are not needed because the purpose is not to support clinical practice change.
3. Scoping reviews can still take years due to broad searching, gray literature inclusion, and large screening workloads; the time burden shifts toward extraction and data organization.
4. Extraction should be limited to items that directly support the protocol objectives; piloting the extraction form is essential to confirm relevance, completeness, and workload.
5. Use at least two reviewers for extraction with consensus (or a checking reviewer if staffing is limited) to reduce human error and inconsistent data entry.
6. Extract study characteristics and reporting details across sections of each paper; extract effect results only when the question truly requires meta-analysis, which typically signals a systematic review.
7. Use descriptive statistics and, when needed, basic qualitative content analysis; reserve deeper interpretive methods (e.g., thematic synthesis) for systematic-review-level objectives.