An Introduction and Overview of Scoping Reviews - Assoc. Professor Zachary Munn
Based on Evidence Synthesis Ireland's video on YouTube. If you find this content useful, support the original creators by watching, liking, and subscribing to their channel.
Scoping reviews are rigorous evidence synthesis tools, but they must be chosen for the right purpose—mapping and concept clarification, not effectiveness or accuracy decisions.
Briefing
Scoping reviews are a legitimate, rigorous form of evidence synthesis—but they’re often misused. Zachary Munn argues that many teams label work as a scoping review to avoid the demands of systematic reviews or to capitalize on “methodological freedom,” producing studies that skip key steps, lack clear justification, and end up wasting effort without delivering decision-ready answers. The core message: scoping reviews are not a “lower form” of evidence synthesis; they’re the right tool only when the goal is to map what exists, clarify concepts, and identify research gaps—not when the goal is to determine effectiveness, accuracy, or prevalence with confidence.
Munn situates scoping reviews within the broader evidence synthesis landscape, which has expanded far beyond classic systematic reviews of randomized controlled trials. Qualitative syntheses, umbrella reviews, evidence gap maps, mixed-methods reviews, and other approaches have proliferated, creating confusion about which method fits which question. To reduce that mismatch, he points to the “Right Review” tool (previously “What review is right for you”), designed to help teams choose an approach aligned to their objective.
A formal definition for scoping reviews is then emphasized: they map the breadth of evidence on a topic (often across study designs and sources), and they can also clarify key concepts and definitions, identify characteristics related to a concept, and increasingly investigate methodological issues within a field. Munn traces the approach's development from Arksey and O'Malley's 2005 framework, through Levac et al.'s 2010 refinements, to the JBI scoping review methodology group, whose guidance was released in 2014 and updated in 2020. He stresses that scoping reviews still require transparent, rigorous methods and reporting, just with different endpoints than systematic reviews.
The “dark side” section catalogs common red flags. Teams may treat scoping reviews as simplified systematic reviews, omit justification for why the scoping approach is needed, or use the label to make a project sound more publishable. Another recurring problem is applying scoping review logic to questions that demand effectiveness or prevalence estimates—areas where systematic reviews are typically more appropriate. Munn also notes that scoping reviews can be complex despite being perceived as easier; some include hundreds of studies. A recent dentistry-focused methodological study is cited as evidence that many scoping reviews fail to specify or justify their rationale and are poorly reported.
The “light side” is the corrective framework: scoping reviews can be powerful when done for the right reasons—identifying types of evidence, clarifying concepts, examining how research is conducted, informing future research, and analyzing knowledge gaps. Munn contrasts scoping review aims with systematic review aims: systematic reviews are better suited for clinically meaningful recommendations and questions about whether something works, is accurate, or is feasible, while scoping reviews are better for mapping and gap identification.
Methodologically, he outlines practical guidance: consult stakeholders early; confirm that a scoping review is the best fit; build a team (scoping reviews should not be done by one person); develop and ideally register a protocol (PROSPERO does not accept scoping review registrations, so alternatives such as the Open Science Framework are suggested); use transparent searching and standardized data extraction; avoid critical appraisal in most cases; and present results using flexible formats such as tables, bubble plots, word clouds, or maps. Reporting should follow the PRISMA extension for scoping reviews (PRISMA-ScR). The takeaway is a decision discipline: choose a scoping review when mapping and clarification are the goal, and apply systematic review rigor when the goal is to answer "does it work?" or "how common is it?" with defensible certainty.
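For readers curious about the "bubble plot" presentation format mentioned above, here is a minimal sketch in Python with matplotlib. The concepts, years, and study counts are invented purely for illustration; they are not drawn from the talk or from any real scoping review.

```python
# Illustrative sketch of an evidence "bubble plot": how many included studies
# address each concept per publication year. All data below are hypothetical.
import matplotlib.pyplot as plt

# Hypothetical extraction counts: (publication year, concept, number of studies)
records = [
    (2016, "Definitions", 3),
    (2017, "Definitions", 5),
    (2018, "Measurement tools", 2),
    (2019, "Measurement tools", 7),
    (2020, "Implementation barriers", 4),
]

years = [r[0] for r in records]
concepts = [r[1] for r in records]
counts = [r[2] for r in records]

# Map each concept to a numeric y-position so it can be plotted categorically.
concept_positions = {c: i for i, c in enumerate(sorted(set(concepts)))}
y = [concept_positions[c] for c in concepts]

fig, ax = plt.subplots(figsize=(7, 3))
# Bubble area scaled by study count: larger bubbles mean more evidence.
ax.scatter(years, y, s=[c * 100 for c in counts], alpha=0.5)
ax.set_yticks(list(concept_positions.values()))
ax.set_yticklabels(list(concept_positions.keys()))
ax.set_xlabel("Publication year")
ax.set_title("Evidence map: studies per concept and year (illustrative data)")
fig.tight_layout()
plt.show()
```

Tables, word clouds, or geographic maps could be substituted in the same spirit; the point is that scoping review outputs are descriptive displays of the evidence base rather than pooled effect estimates.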
Cornell Notes
Scoping reviews are a type of evidence synthesis designed to map the breadth of evidence on a topic, clarify key concepts and definitions, and identify research gaps—often across study designs. Munn warns that scoping reviews are frequently misused as a shortcut to avoid systematic review rigor, leading to skipped steps, weak justification, and poor reporting. When used for the right purposes, scoping reviews can be highly valuable for policy makers, clinicians, researchers, and future study planning. Quality depends on rigorous, transparent methods: stakeholder consultation, a priori protocol development, explicit search and selection procedures, standardized data extraction, and reporting aligned with the PRISMA extension for scoping reviews. The method should be chosen based on the question—mapping and gap-finding favor scoping reviews, while effectiveness, accuracy, and prevalence favor systematic reviews.
- Why does Munn call out "misuse" of scoping reviews, and what does that look like in practice?
- What is the functional definition of a scoping review, and what tasks does it perform?
- How should teams decide whether a scoping review or a systematic review is the better fit?
- What methodological expectations still apply to scoping reviews, even though they don't aim for certainty like systematic reviews?
- What does "good practice" look like from planning through reporting?
- What are common quality-control tools or standards Munn points to?
Review Questions
- Give two examples of questions where a scoping review is the better choice and explain why mapping/clarification is central to each.
- List the main methodological steps Munn says should be handled with rigor in scoping reviews (planning, searching, extraction, analysis, reporting).
- What kinds of scoping review “red flags” did Munn describe, and how would you detect them when screening a submitted manuscript?
Key Points
1. Scoping reviews are rigorous evidence synthesis tools, but they must be chosen for the right purpose—mapping and concept clarification, not effectiveness or accuracy decisions.
2. A frequent failure mode is treating scoping reviews as simplified systematic reviews by skipping steps or making claims that require risk-of-bias assessment and certainty-focused synthesis.
3. Scoping reviews can map evidence across study designs, clarify definitions, identify key characteristics, and increasingly examine methodological research within a field.
4. Quality depends on an a priori protocol, transparent search and selection, standardized data extraction, and reporting aligned with the PRISMA extension for scoping reviews.
5. Critical appraisal and risk-of-bias assessment are generally not part of scoping reviews because the goal is not to determine what works or what is most accurate.
6. Stakeholder consultation and end-user engagement should occur throughout the scoping review process, from early question shaping to dissemination.
7. Scoping reviews require a team (not a solo effort) and should be supported by appropriate expertise, including information specialists and methodologists.