
Evidence and Gap Maps

6 min read

Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Evidence and gap maps organize research in an intervention-by-outcome matrix and use interactive filters to narrow evidence by design, population, and region.

Briefing

Evidence and gap maps are interactive, matrix-style evidence inventories that make it possible to see—at a glance—what interventions have been studied against which outcomes, where the evidence is strong, and where research is missing. Their value lies in helping funders and researchers avoid blind spots: instead of only asking what works, these maps show how much is known, what quality that knowledge has, and which gaps should drive future systematic reviews or new impact evaluations.

At the core is a structured “framework” that defines the map’s scope. Interventions typically form the rows and outcomes the columns, producing a grid of cells that can be populated with evidence items. Many versions also add filters—such as study design, population subgroup, region/country, and sometimes funding agency—so users can narrow the view to exactly the evidence they need. Campbell-style maps go further by attaching quality ratings and interactive features: bubble colors indicate evidence type and quality, bubble size reflects the volume of studies in each intervention–outcome cell, and clicking a bubble can open user-friendly summaries plus links back to the original sources.
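The grid-plus-filters structure described above can be sketched as a small data model. This is an illustrative assumption, not the schema of any real mapping platform: the field names, categories, and example records are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    intervention: str
    outcome: str
    design: str   # e.g. "RCT", "systematic review", "process evaluation"
    quality: str  # e.g. traffic-light rating: "high", "medium", "low"
    region: str

def cell(items, intervention, outcome, **filters):
    """Return the evidence items ("bubbles") in one intervention-outcome
    cell, optionally narrowed by filters such as design= or region=."""
    matches = [i for i in items
               if i.intervention == intervention and i.outcome == outcome]
    for field, value in filters.items():
        matches = [i for i in matches if getattr(i, field) == value]
    return matches

# Invented example records.
items = [
    EvidenceItem("cash transfers", "school enrolment", "RCT", "high", "Uganda"),
    EvidenceItem("cash transfers", "school enrolment", "RCT", "medium", "Kenya"),
    EvidenceItem("school feeding", "nutrition", "systematic review", "high", "Uganda"),
]

# Bubble size corresponds to the number of items in a cell; bubble color
# would be derived from the design/quality fields.
bubble = cell(items, "cash transfers", "school enrolment")
print(len(bubble))  # volume of evidence in this cell
print(len(cell(items, "cash transfers", "school enrolment", region="Uganda")))
```

Filtering is just a restriction of each cell's item list, which is why a map can instantly re-render when a user isolates, say, randomized trials in one region.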

The presentation contrasted evidence and gap maps with systematic reviews. Systematic reviews are narrower and aim to synthesize results to determine effectiveness; evidence and gap maps are broader and aim to catalog and organize what evidence exists. Both rely on rigorous, protocol-driven searching and screening, often with dual screening and conflict resolution. The key difference is what gets extracted and how: maps extract less outcome-level detail than systematic reviews because they are not designed to produce an effectiveness synthesis. That makes them quicker while still systematic.

A major theme was how evidence mapping evolved. Structured “evidence mapping” methods emerged in the early 2000s, and later advances brought visualization and interactivity. By 2010, the International Initiative for Impact Evaluation (3ie) had introduced evidence gap maps with interactive online versions. Over time, many organizations adapted the approach, growing from more than a dozen map-producing groups to many more across sectors.

The talk also highlighted concrete examples of how Campbell has innovated beyond effectiveness. A “map of maps” aggregates completed evidence gap maps across international development topics, identifying priority areas for commissioning new maps. Mega-maps cover broad domains such as child welfare using systematic reviews only, then flag where evidence is thin. In homelessness, Campbell developed an effectiveness map that critically appraised included studies—using tools such as an AMSTAR-based checklist for systematic reviews and a modified risk-of-bias tool for primary studies—then used the results to commission process-evaluation maps aimed at understanding barriers and facilitators. A country-focused Uganda map (2002–2018) illustrates the scale possible: hundreds of process, impact, and formative evaluations were captured, enabling country-level synthesis discussions.

Ultimately, evidence and gap maps function as building blocks in an “evidence architecture,” supporting knowledge brokering, evidence portals, and guideline development. They help identify where high-quality evidence exists, where absolute gaps mean no studies are available, and where “empty reviews” suggest systematic reviews may be premature. They also support research and funding priorities by reducing duplication—showing what others have already studied or are actively studying—so new work targets the most consequential missing evidence.

Cornell Notes

Evidence and gap maps are interactive, matrix-based inventories of research that show which interventions have been studied for which outcomes, along with evidence type, quality, and volume. They use a pre-specified framework to set scope (often interventions as rows and outcomes as columns) and then apply systematic searching and screening similar to systematic reviews, but with less data extraction because the goal is mapping rather than effectiveness synthesis. Filters (e.g., study design, population, region, sometimes funder) let users quickly locate relevant evidence and identify “absolute gaps” where no studies exist. Campbell-style maps also provide bubble-level user summaries and links to original sources, making evidence more discoverable and usable. The maps matter because they guide research and funding priorities, reduce duplication, and support evidence architecture through commissioning and evidence portal development.

What does an evidence and gap map look like, and what do the cells and visual cues represent?

Most evidence and gap maps are built as a matrix: interventions typically sit in the rows and outcomes in the columns. Each cell can contain “bubbles” representing evidence items relevant to that intervention–outcome intersection. Visual cues carry meaning: bubble color often indicates evidence type and quality (for example, traffic-light style ratings), while bubble size reflects the volume of evidence in that cell. Many maps also add interactive filters (study design, region/country, population subgroup, and sometimes funding agency) so users can isolate subsets such as randomized controlled trials in a specific region.

How do evidence and gap maps differ from systematic reviews in purpose and workflow?

Systematic reviews are narrower in scope and aim to synthesize results to determine what is effective. Evidence and gap maps are broader and aim to catalog what evidence exists and where gaps are located. Search strategy and screening are still rigorous and protocol-driven, often using dual screening with conflict resolution. The main difference is coding and extraction: maps extract less data than systematic reviews because they do not produce an effectiveness synthesis. As a result, maps can be quicker while still systematic.

Why is building the framework (scope) the most critical step in creating a map?

The framework defines what the map includes and what it excludes, which directly determines the scope of the evidence inventory. It also drives downstream decisions: the search strategy, coding categories, reporting structure, and how users will interpret the results. The framework is ideally built with stakeholder input to ensure usability—either by adopting an existing consensus framework, adapting funder/project strategy documents, or running stakeholder consultations when no clear framework exists.
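A pre-specified framework can be thought of as a plain configuration that fixes the grid before any searching begins. A minimal sketch, with invented intervention and outcome categories (a real framework would come from stakeholder consultation or an existing consensus framework):

```python
# Hypothetical framework: the rows, columns, and filter dimensions of the map.
framework = {
    "interventions": ["cash transfers", "school feeding", "housing subsidies"],
    "outcomes": ["school enrolment", "nutrition", "housing stability"],
    "filters": ["design", "region", "population"],
}

# The framework fixes the grid up front: anything outside these rows and
# columns is out of scope, which in turn constrains the search strategy
# and the coding categories applied during screening.
grid = {(i, o): [] for i in framework["interventions"]
                   for o in framework["outcomes"]}
print(len(grid))  # 3 x 3 = 9 cells to populate
```

Because every downstream stage keys off this structure, changing the framework after searching has begun effectively means restarting the map.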

What kinds of gaps can maps reveal, and how can those gaps be used?

Maps can show areas with substantial evidence suitable for evidence synthesis (e.g., enough impact evaluations and systematic reviews in a given intervention–outcome area). They can also reveal “absolute gaps,” where no bubbles appear at all—signaling that implementation research or new impact evaluations may be needed. Maps can further identify “empty reviews,” where only systematic reviews exist but there are no impact evaluations, suggesting systematic reviews may not be appropriate until primary evidence is available. These gap patterns help set research and funding priorities and reduce duplication.
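The gap patterns above amount to a simple classification over the study designs present in each cell. The sketch below is illustrative only: the labels follow the text, but the "enough evidence for synthesis" threshold is an arbitrary assumption.

```python
def classify_cell(designs):
    """Classify one intervention-outcome cell from the list of
    study-design labels found in it."""
    if not designs:
        return "absolute gap"        # no studies at all -> new primary research needed
    impact = [d for d in designs if d == "impact evaluation"]
    reviews = [d for d in designs if d == "systematic review"]
    if reviews and not impact:
        return "empty review"        # reviews exist but no primary evidence behind them
    if len(impact) >= 3:             # arbitrary illustrative threshold
        return "synthesis-ready"
    return "sparse evidence"

print(classify_cell([]))                              # absolute gap
print(classify_cell(["systematic review"]))           # empty review
print(classify_cell(["impact evaluation"] * 4))       # synthesis-ready
```

Running a rule like this over every cell is how a map translates directly into commissioning decisions: fund primary studies for absolute gaps, hold off on reviews where cells are "empty reviews", and commission synthesis where cells are dense.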

How did Campbell-style mapping expand beyond effectiveness research?

Campbell’s innovations include applying mapping to different evidence types and research questions. Examples include: (1) “map of maps” aggregating evidence gap maps across international development to identify commissioning priorities; (2) mega-maps on child welfare using systematic reviews only, then flagging priority areas for new maps; (3) homelessness effectiveness mapping with critical appraisal of included studies (AMSTAR for systematic reviews and a modified risk-of-bias tool for primary studies), followed by process-evaluation mapping focused on barriers and facilitators; and (4) country-focused mapping in Uganda capturing process, impact, and formative evaluations across a defined time window.

What practical features make evidence and gap maps usable for researchers and funders?

Beyond the matrix visualization, maps often provide interactive filters, bubble-level user summaries, and direct links to original sources. User summaries avoid reproducing copyrighted abstracts while still describing study findings, interventions, outcomes, and other key details such as funding information. This design lets users quickly locate relevant evidence, assess quality and volume, and then drill down to the underlying studies without leaving the map environment.

Review Questions

  1. What elements of a map’s framework determine its scope, and why does that scope affect both searching and coding?
  2. In what ways can an evidence and gap map help prevent duplication of research efforts, and what does it look like when evidence is missing?
  3. How does the extraction and coding approach in evidence and gap maps enable broader coverage compared with systematic reviews?

Key Points

  1. Evidence and gap maps organize research in an intervention-by-outcome matrix and use interactive filters to narrow evidence by design, population, and region.
  2. A pre-specified framework is the foundation of a map: it determines what gets included, how searches are run, and how results are coded and interpreted.
  3. Evidence and gap maps use systematic, protocol-driven searching and screening similar to systematic reviews, but extract less data because they are not built to synthesize effectiveness.
  4. Bubble color and size typically communicate evidence type/quality and evidence volume, while clickable bubbles can provide user summaries and links to original sources.
  5. Maps can identify “absolute gaps” (no studies) and “empty reviews” (systematic reviews without impact evaluations), which can guide commissioning and research priorities.
  6. Campbell’s mapping work extends beyond effectiveness into areas like process evaluation and country-level evidence inventories, enabling different kinds of evidence architecture decisions.
  7. Regular updates depend on resources; updating generally requires rerunning searches and recoding newly found studies in the mapping platform.

Highlights

Evidence and gap maps shift the question from “what works?” to “what evidence exists, for which interventions and outcomes, and with what quality?”
The framework step is decisive: it sets scope and therefore shapes the entire search, coding, and reporting pipeline.
Campbell’s homelessness work illustrates a progression from effectiveness mapping to process-evaluation mapping to understand barriers and facilitators.
A Uganda country map (2002–2018) identified hundreds of evaluations across process, impact, and formative categories, showing how mapping can support country-level synthesis discussions.

Topics

  • Evidence Gap Maps
  • Evidence Mapping
  • Systematic Mapping
  • Knowledge Brokering
  • Campbell Collaboration

Mentioned

  • Nikita Burke
  • Ashrita Sharon
  • CDC
  • ESI
  • 3ie
  • AMSTAR
  • EPPI-Reviewer