
Scoping Review Steps - Heather Colquhoun

6 min read

Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Scoping reviews are for mapping concepts, evidence types, and research gaps; intervention effectiveness questions usually require a different review approach.

Briefing

Scoping reviews are most useful when the research question is about mapping a field—what concepts exist, what evidence types are available, and where gaps remain—not when the goal is to determine whether an intervention works. Heather Colquhoun, drawing on years of collective experience and methodological work, frames scoping reviews as a form of reconnaissance: a systematic, exploratory way to describe the boundaries of a topic and the shape of the literature. That distinction matters because many journals have grown wary of low-quality scoping reviews, often produced with unclear purposes, weak methods, or outputs that don’t match the intended use of the findings.

Colquhoun grounds the discussion in the “family of knowledge synthesis,” placing scoping reviews alongside systematic reviews, rapid reviews, evidence maps, realist reviews, and living reviews. Across these approaches, screening and extraction practices often share similarities, but the question type and outputs differ. For scoping reviews, the defining features are the mapping objective and the focus on breadth: identifying concepts, types of evidence, and research gaps within a defined area. She emphasizes that scoping reviews should not be treated as a substitute for clinical effectiveness reviews; once intervention effects become central, the work should be reconsidered as a systematic review (or another appropriate approach) rather than a scoping review.

She also addresses persistent “gray areas” between scoping and systematic reviews, including cases where researchers label a review as “systematic” even when it is not about clinical effectiveness (for example, examining how theory is used in trials). The practical takeaway: the label alone is less important than the underlying purpose—what the review is meant to accomplish and how the results will be used.

Why do scoping reviews? Common reasons include summarizing and disseminating research, identifying gaps, informing future research priorities, and mapping a body of literature relevant to multiple concepts. Colquhoun is skeptical of one frequent justification—doing a scoping review merely to decide whether a systematic review is feasible—because it can lead to wasted effort when the systematic review later turns out to be empty. She argues that scoping reviews are more defensible when they stand as an end product: a structured map of what exists and what is missing.

Methodologically, she highlights steps that improve rigor and credibility: publish or register a protocol (since PROSPERO may not accept scoping reviews, Open Science Framework is often used), search widely but with deliberate scope-limiting decisions, and frame the question using PCC (Population, Concept, Context) rather than forcing PICO. Purpose is treated as a design driver: without a clear “why,” teams can answer the question yet struggle to synthesize results into something actionable.

Colquhoun also points to recurring quality problems in published scoping reviews—low protocol reporting, inconsistent use of independent screening, incomplete extraction details, and missing flow diagrams (including PRISMA flow). She notes that rehabilitation-focused scoping reviews use qualitative synthesis more frequently than other fields, but still show methodological gaps. A major improvement effort underway is the PRISMA extension for scoping reviews (20 essential reporting items plus optional items), supported by EQUATOR resources and tip sheets.

Overall, the core message is pragmatic: scoping reviews should be built around a mapping question, executed with transparent, protocol-driven rigor, and reported in a way that produces outputs aligned with their intended decision-making value.

Cornell Notes

Scoping reviews are systematic, exploratory knowledge syntheses designed to map a field—identifying concepts, evidence types, and research gaps—rather than to judge whether interventions are effective. Colquhoun emphasizes that the defining feature is the question and purpose: once intervention effects are the target, a systematic review is usually the better fit. She recommends publishing a protocol (often via Open Science Framework when PROSPERO won’t accept scoping reviews), using PCC to frame questions, and limiting scope based on the logic of the research question rather than on available resources alone. Purpose should guide outputs and synthesis methods, and reporting quality is improving through PRISMA for scoping reviews and related EQUATOR guidance. Published scoping reviews still commonly fall short on protocols, screening/extraction transparency, and PRISMA flow reporting.

What makes a scoping review different from a systematic review in practice?

The distinction hinges on the question and intended use. Scoping reviews are for mapping: describing what concepts exist, what types of evidence are available, and where gaps remain within a defined boundary. Colquhoun warns that when teams start focusing on intervention effects (clinical effectiveness), the work should be reconsidered as a systematic review or another appropriate approach rather than labeled as scoping. She also notes “gray areas” where reviews are labeled inconsistently—so the purpose and outputs matter more than the label.

Why is protocol publication (or registration) treated as a quality requirement for scoping reviews?

Colquhoun argues that a protocol forces clarity and reduces looseness. Publishing a protocol improves the review because the plan is written down in detail, iterated through edits, and subjected to external scrutiny. She notes that PROSPERO may not accept scoping reviews, so many teams use Open Science Framework to publicly post protocols and abstracts. She also points out that journals have increasingly rejected scoping reviews of dubious quality, and stronger protocol practices are one way to improve acceptance.

How should scoping review questions be framed when PICO feels awkward?

She recommends PCC (Population, Concept, Context) as the more natural framing for scoping reviews. PCC separates the “population” from the broader “concept” (e.g., stroke recovery education, diabetes participation) and adds “context” qualifiers (e.g., chronic vs acute phase, time window, community vs country-specific settings). This approach helps teams define boundaries and avoid overly broad, unmanageable searches.

What’s the biggest design risk when scoping reviews don’t start with a clear purpose?

Teams can end up with a review that answers the descriptive question but produces results that are hard to use. Colquhoun describes a common failure mode: clear question and objectives lead to correct extraction, yet the synthesis and discussion stall because the team never pinned down why the mapping matters—how the findings will advance knowledge or inform service delivery. Purpose should shape outputs (e.g., what categories to build, how to group evidence) and therefore the synthesis strategy.

How can scope be limited without undermining the mapping goal?

Colquhoun says limiting scope inevitably means giving something up, so the decision should be tied to the research question—not only to available resources. Practical options include narrowing “gray literature” by selecting key databases/websites or targeted programs, using sampling strategies for reference list searching (e.g., test a small percentage and stop if nothing new appears), and reducing the number of objectives or the depth of extraction. She cautions that limiting by date or study design often doesn’t reduce retrieval much if the literature is already concentrated in the chosen window.

What quality problems show up repeatedly in published scoping reviews?

Across her discussion of studies assessing scoping reviews, recurring issues include low protocol reporting, inconsistent independent screening (often replaced by verification), incomplete extraction detail, and missing PRISMA flow diagrams. She also highlights that many scoping reviews do not report eligibility criteria clearly and that rehabilitation scoping reviews may use qualitative synthesis more often, but still show methodological reporting gaps.

Review Questions

  1. When does an intervention-focused question suggest switching from a scoping review to a systematic review?
  2. How does PCC framing help prevent scoping reviews from becoming unmanageably broad?
  3. What are three concrete reporting or methodological elements that commonly distinguish higher-quality scoping reviews from lower-quality ones?

Key Points

  1. Scoping reviews are for mapping concepts, evidence types, and research gaps; intervention effectiveness questions usually require a different review approach.
  2. A scoping review’s purpose must be explicit because it determines how results should be synthesized and what outputs will be useful.
  3. Protocol publication improves rigor and transparency; when PROSPERO won’t accept scoping reviews, Open Science Framework is commonly used to post protocols.
  4. Use PCC (Population, Concept, Context) to structure scoping review questions instead of forcing PICO.
  5. Scope-limiting decisions should be justified by the research question (e.g., targeted gray literature, sampling reference lists, reducing objectives or extraction depth).
  6. Independent screening and detailed extraction reporting are frequent weak points; PRISMA flow diagrams and eligibility criteria should be clearly reported.
  7. PRISMA for scoping reviews (with EQUATOR resources) provides a standardized reporting checklist to raise quality and reduce journal rejections.

Highlights

  • Scoping reviews should not be used to answer “does it work?” questions; once intervention effects drive the goal, a systematic review is typically the better fit.
  • Purpose is not a formality—without it, scoping reviews can produce correct descriptive results that still fail to yield actionable synthesis.
  • Protocol transparency is a major quality lever, especially as journals increasingly reject poorly reported scoping reviews.
  • PCC framing (Population, Concept, Context) is presented as the practical alternative to PICO for scoping review questions.
  • Common published weaknesses include missing PRISMA flow reporting, incomplete extraction details, and insufficient transparency about eligibility and screening decisions.

Topics

  • Scoping Review Methods
  • PRISMA Reporting
  • Protocol Registration
  • PCC Question Framing
  • Scope Limiting

Mentioned

  • Heather Colquhoun
  • David Moher
  • Andrea Tricco
  • Lisa O'Malley
  • Hilary Arksey
  • Kelly O'Brien
  • Danielle Levac
  • Joanna Briggs
  • Elain
  • Jenny McSherry
  • Aditya
  • Murphy
  • Chris
  • PRISMA
  • PCC
  • PICO
  • EPOC
  • OSF
  • EQUATOR