
An Introduction and Overview of Scoping Reviews - Assoc. Professor Zachary Munn

6 min read

Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Scoping reviews are rigorous evidence synthesis tools, but they must be chosen for the right purpose—mapping and concept clarification, not effectiveness or accuracy decisions.

Briefing

Scoping reviews are a legitimate, rigorous form of evidence synthesis—but they’re often misused. Zachary Munn argues that many teams label work as a scoping review to avoid the demands of systematic reviews or to capitalize on “methodological freedom,” producing studies that skip key steps, lack clear justification, and end up wasting effort without delivering decision-ready answers. The core message: scoping reviews are not a “lower form” of evidence synthesis; they’re the right tool only when the goal is to map what exists, clarify concepts, and identify research gaps—not when the goal is to determine effectiveness, accuracy, or prevalence with confidence.

Munn situates scoping reviews within the broader evidence synthesis landscape, which has expanded far beyond classic systematic reviews of randomized controlled trials. Qualitative syntheses, umbrella reviews, evidence gap maps, mixed-methods reviews, and other approaches have proliferated, creating confusion about which method fits which question. To reduce that mismatch, he points to the “Right Review” tool (previously “What review is right for you”), designed to help teams choose an approach aligned to their objective.

A formal definition for scoping reviews is then emphasized: they map the breadth of evidence on a topic (often across study designs and sources), and they can also clarify key concepts and definitions, identify characteristics related to a concept, and increasingly investigate methodological issues within a field. Munn traces the approach’s development from Arksey and O’Malley’s 2005 framework, through Levac et al.’s 2010 refinements, to the JBI scoping review methodology group, whose guidance was first released in 2014 and updated in 2020. He stresses that scoping reviews still require transparent, rigorous methods and reporting—just with different endpoints than systematic reviews.

The “dark side” section catalogs common red flags. Teams may treat scoping reviews as simplified systematic reviews, omit justification for why the scoping approach is needed, or use the label to make a project sound more publishable. Another recurring problem is applying scoping review logic to questions that demand effectiveness or prevalence estimates—areas where systematic reviews are typically more appropriate. Munn also notes that scoping reviews can be complex despite being perceived as easier; some include hundreds of studies. A recent dentistry-focused methodological study is cited as evidence that many scoping reviews fail to specify or justify their rationale and are poorly reported.

The “light side” is the corrective framework: scoping reviews can be powerful when done for the right reasons—identifying types of evidence, clarifying concepts, examining how research is conducted, informing future research, and analyzing knowledge gaps. Munn contrasts scoping review aims with systematic review aims: systematic reviews are better suited for clinically meaningful recommendations and questions about whether something works, is accurate, or is feasible, while scoping reviews are better for mapping and gap identification.

Methodologically, he outlines practical guidance: consult stakeholders early; confirm the scoping review is the best fit; build a team (scoping reviews should not be done by one person); develop and ideally register a protocol (PROSPERO doesn’t accept scoping review registration, so alternatives like Open Science Framework are suggested); use transparent searching and standardized data extraction; avoid critical appraisal in most cases; and present results using flexible formats like tables, bubble plots, word clouds, or maps. Reporting should follow the PRISMA extension for scoping reviews. The takeaway is a decision discipline: choose scoping reviews when mapping and clarification are the goal, and apply systematic review rigor when the goal is to answer “does it work?” or “how common is it?” with defensible certainty.

Cornell Notes

Scoping reviews are a type of evidence synthesis designed to map the breadth of evidence on a topic, clarify key concepts and definitions, and identify research gaps—often across study designs. Munn warns that scoping reviews are frequently misused as a shortcut to avoid systematic review rigor, leading to skipped steps, weak justification, and poor reporting. When used for the right purposes, scoping reviews can be highly valuable for policy makers, clinicians, researchers, and future study planning. Quality depends on rigorous, transparent methods: stakeholder consultation, a priori protocol development, explicit search and selection procedures, standardized data extraction, and reporting aligned with the PRISMA extension for scoping reviews. The method should be chosen based on the question—mapping and gap-finding favor scoping reviews, while effectiveness, accuracy, and prevalence favor systematic reviews.

Why does Munn call out “misuse” of scoping reviews, and what does that look like in practice?

He describes scoping reviews being used as if they were simplified systematic reviews—skipping steps, applying the label to what is essentially an old-style narrative or classic literature review, and sometimes doing so to make the work sound more publishable. Another pattern is treating scoping reviews as justified by “methodological freedom,” assuming there’s no need for guidance or rationale. He also flags the mismatch where teams use scoping review methods to make claims that require systematic review endpoints (e.g., effectiveness or prevalence estimates), without the rigor of risk-of-bias assessment, meta-analysis, or certainty-focused reporting.

What is the functional definition of a scoping review, and what tasks does it perform?

A scoping review maps the breadth of evidence on a topic or issue, sometimes regardless of study design or source. It can also clarify key concepts and definitions in the literature, and identify key characteristics or factors related to a concept—including methodological characteristics. Munn emphasizes that scoping reviews can be used to build evidence maps and to investigate how research is conducted, including methods-focused questions.

How should teams decide whether a scoping review or a systematic review is the better fit?

The decision hinges on the question’s purpose. Munn contrasts systematic reviews as the best approach when the goal is clinically meaningful recommendations or answers about whether something works, is accurate, is feasible, or is cost-effective. Scoping reviews fit when the goal is mapping what exists, clarifying concepts, examining how research is conducted, informing future research, and identifying knowledge gaps. He reinforces this with examples: the effect of hand sanitizer on absenteeism favors a systematic review; identifying measurement tools for postnatal depression favors a scoping review; and the prevalence of malaria infection in South Asia favors a systematic review.

What methodological expectations still apply to scoping reviews, even though they don’t aim for certainty like systematic reviews?

Scoping reviews still require rigorous, transparent methods and transparent reporting. Munn highlights the need for an a priori protocol, explicit eligibility criteria, auditable search and selection processes, and standardized data extraction forms. Critical appraisal or risk-of-bias assessment is generally not performed (except in rare scenarios) because scoping reviews are not trying to determine best evidence or make definitive claims about effectiveness or accuracy.

What does “good practice” look like from planning through reporting?

He recommends stakeholder consultation early and throughout the project, confirming the scoping review is the right method, and building a team (at least two reviewers for screening/selection and extraction). Teams should develop and register or publish a protocol via an appropriate venue (PROSPERO doesn’t accept scoping review registration; Open Science Framework is suggested). After conducting the review, results should be reported using the PRISMA extension for scoping reviews, and dissemination should target relevant knowledge users.

What are common quality-control tools or standards Munn points to?

Key resources include the JBI scoping review methodology guidance and the JBI Manual for Evidence Synthesis (especially the scoping review chapter). For reporting, he points to the PRISMA extension for scoping reviews (published in Annals of Internal Medicine, led by Andrea Tricco). For choosing the right approach, he points to the Right Review tool (previously “What review is right for you”).

Review Questions

  1. Give two examples of questions where a scoping review is the better choice and explain why mapping/clarification is central to each.
  2. List the main methodological steps Munn says should be handled with rigor in scoping reviews (planning, searching, extraction, analysis, reporting).
  3. What kinds of scoping review “red flags” did Munn describe, and how would you detect them when screening a submitted manuscript?

Key Points

  1. Scoping reviews are rigorous evidence synthesis tools, but they must be chosen for the right purpose—mapping and concept clarification, not effectiveness or accuracy decisions.

  2. A frequent failure mode is treating scoping reviews as simplified systematic reviews by skipping steps or making claims that require risk-of-bias assessment and certainty-focused synthesis.

  3. Scoping reviews can map evidence across study designs, clarify definitions, identify key characteristics, and increasingly examine methodological research within a field.

  4. Quality depends on an a priori protocol, transparent search and selection, standardized data extraction, and reporting aligned with the PRISMA extension for scoping reviews.

  5. Critical appraisal and risk-of-bias assessment are generally not part of scoping reviews because the goal is not to determine what works or what is most accurate.

  6. Stakeholder consultation and end-user engagement should occur throughout the scoping review process, from early question shaping to dissemination.

  7. Scoping reviews require a team (not a solo effort) and should be supported by appropriate expertise, including information specialists and methodologists.

Highlights

Scoping reviews aren’t a “poor cousin” of systematic reviews; they sit alongside other evidence synthesis methods with different endpoints.
The right-review decision turns on purpose: mapping and gap-finding favor scoping reviews, while “does it work?” and similar certainty questions favor systematic reviews.
Common scoping review failures include missing justification, skipping steps, and poor reporting—often driven by the misconception that scoping reviews are easier.
Even without meta-analysis, scoping reviews still need rigorous, transparent searching, standardized extraction, and PRISMA-extension reporting.
PROSPERO doesn’t accept scoping review registration, so teams should use alternatives like Open Science Framework to publish protocols.

Topics

  • Evidence Synthesis
  • Scoping Review Definition
  • When to Use Scoping Reviews
  • Scoping Review Methods
  • PRISMA Extension

Mentioned

  • Zachary Munn
  • Danielle Pollock
  • Hanan Khalil
  • Arksey
  • O'Malley
  • Levac
  • Andrea Tricco
  • JBI
  • PRISMA
  • PCC
  • PICO