
Synthesis Without Meta-analysis (SWiM) reporting guideline

5 min read

Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Meta-analysis is often absent even in reviews of randomized trials; when the inputs it requires aren't available, reviewers still need structured quantitative synthesis methods rather than narrative-only summaries.

Briefing

Synthesis Without Meta-analysis (SWiM) reporting guidance targets a common problem in evidence reviews: many systematic reviews of quantitative intervention studies cannot—or should not—run a meta-analysis, yet still need a transparent, rigorous way to combine results. The core message is that “no meta-analysis” is not “no synthesis.” Instead, reviewers should choose an alternative method based on what standardized metrics the included studies actually report, then report the grouping decisions and results in a way that lets readers judge credibility.

The guidance starts with decision-making: meta-analysis is often assumed to be automatic, but it is frequently absent even when randomized controlled trials are involved—about half of such reviews still do not include a meta-analysis. Reasons for avoiding meta-analysis like “not knowing how” aren’t treated as acceptable; teams can seek expertise or use established methods. The more legitimate barrier is missing inputs: studies may provide effect estimates but not the additional numbers required for meta-analysis, or they may report outcomes in formats that don’t align with standard effect-size synthesis.

Rather than defaulting to narrative-only summaries, SWiM points reviewers to alternative quantitative synthesis approaches. The choice depends on the standardized metrics available across studies:

  • If studies report intervention effect estimates (e.g., risk ratios, odds ratios, standardized mean differences), reviewers can summarize effect sizes using descriptive statistics such as medians, ranges, and interquartile ranges.
  • If studies only provide the direction of effect (benefit vs harm vs no effect), reviewers can use vote counting based on direction of effect, counting all studies favoring benefit (not just statistically significant ones), because focusing only on significance can bias results.
  • If studies report p values (with compatible directional hypotheses), reviewers can combine p values, using one-sided p values or converting two-sided p values to one-sided where appropriate.
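
The first option above, summarizing effect estimates descriptively, can be sketched in a few lines. This is a minimal illustration with invented risk ratios (not data from the video); ratio measures are summarized on the log scale, where they are roughly symmetric, then back-transformed for reporting.

```python
import math
import statistics

# Hypothetical risk ratios extracted from six included studies
# (illustrative values only, not from the source material).
risk_ratios = [0.72, 0.85, 0.91, 1.02, 0.64, 0.78]

# Work on the log scale, then back-transform for reporting.
log_rrs = [math.log(rr) for rr in risk_ratios]

median_rr = math.exp(statistics.median(log_rrs))
q1, _, q3 = statistics.quantiles(log_rrs, n=4)  # quartiles of the log RRs
iqr_low, iqr_high = math.exp(q1), math.exp(q3)
rr_min, rr_max = min(risk_ratios), max(risk_ratios)

print(f"Median RR: {median_rr:.2f}")
print(f"IQR: {iqr_low:.2f} to {iqr_high:.2f}")
print(f"Range: {rr_min:.2f} to {rr_max:.2f}")
```

Reporting the median with the range and interquartile range describes the distribution of effects across studies, which is the question this method answers (not an average effect, as a meta-analysis would estimate).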

A major emphasis falls on heterogeneity and study grouping. When studies differ—often the case in public health and population health—reviewers should not treat “apples and oranges” as combinable, but they can still group “apples and oranges” as both fruit by finding a higher-level structure that is useful for decision makers. SWiM’s reporting begins with this: explain how studies were grouped, why that structure is rational, and how topic expertise informed the grouping.

The guidance then stresses transparency as a trust-building mechanism. If readers can’t see what was done—what methods were used, what metrics were synthesized, and which studies fed each synthesis component—confidence drops even when methods are sound. SWiM therefore includes nine reporting items, with practical expectations for data presentation. Tables should reflect the synthesis structure (not alphabetical author order), and reviewers may use familiar visual tools when feasible—such as effect-direction plots (with study-by-study triangles indicating positive, negative, or null outcomes), even when a classic meta-analytic forest plot isn’t appropriate.
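
The effect-direction plot mentioned above can even be rendered without plotting software. The sketch below, using hypothetical study names and directions, prints one triangle per study (▲ positive, ▼ negative, ○ null), mirroring the study-by-study layout described in the guidance.

```python
# Hypothetical direction-of-effect data, one entry per study
# (study names and directions are illustrative only).
studies = [
    ("Smith 2019", "positive"),
    ("Jones 2020", "negative"),
    ("Lee 2021",   "positive"),
    ("Khan 2018",  "null"),
]

SYMBOLS = {"positive": "▲", "negative": "▼", "null": "○"}

def effect_direction_plot(rows):
    """Return a simple text effect-direction plot, one study per line."""
    width = max(len(name) for name, _ in rows)
    return "\n".join(f"{name:<{width}}  {SYMBOLS[direction]}"
                     for name, direction in rows)

print(effect_direction_plot(studies))
```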

Finally, SWiM requires reviewers to match language to the question each method answers. Meta-analysis targets average effect; alternative methods answer different questions—such as the distribution of effects, whether any evidence of effect exists, or whether at least one study shows evidence. Limitations should be acknowledged explicitly, including how changes from the protocol (like regrouping studies) can constrain what outcomes can be interpreted.

Cornell Notes

SWiM reporting guidance helps reviewers do quantitative synthesis when meta-analysis isn’t possible. The method choice depends on what standardized metrics the included studies report: effect sizes can be summarized descriptively, direction-of-effect data can be vote-counted (including non-significant benefit/harm), and p values can be combined using one-sided p values or converted equivalents. A central requirement is transparent study grouping to handle heterogeneity—grouping should be justified for decision makers, often with topic expertise. Reporting must clearly state the metric, synthesis method, and which studies contributed to each synthesis component, because unclear methods undermine trust. The limitations section should also reflect what these methods can and cannot answer (e.g., no average effect magnitude).

Why isn’t meta-analysis always available in systematic reviews of intervention studies?

Meta-analysis is often treated as the default, but it’s frequently absent even when randomized controlled trials are included—roughly half of such reviews still don’t perform a meta-analysis. A key practical barrier is missing inputs: studies may provide effect sizes but not the additional numbers needed for meta-analysis, or they may report outcomes in incompatible formats. In those cases, reviewers should use alternative synthesis methods rather than stopping at narrative-only summaries.

How does SWiM decide which alternative synthesis method to use?

SWiM starts by identifying standardized metrics available across included studies. Three main options are tied to what studies report: (1) effect sizes (e.g., risk ratios, odds ratios, standardized mean differences) → summarize effect sizes using descriptive statistics like median, range, and interquartile range; (2) direction of effect only → vote counting based on direction of effect; (3) p values → combine p values using one-sided p values or converting two-sided p values to one-sided when directional hypotheses align.

What’s the key rule for vote counting based on direction of effect?

Vote counting should classify studies as benefit or harm based on direction of effect, not on statistical significance alone. Studies that show benefit (even if not statistically significant) belong in the benefit category. The guidance warns that counting only statistically significant results can skew conclusions because some studies may lack power to reach significance.
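
The classification rule above can be sketched as follows, with invented direction-of-effect data. The accompanying sign test (a binomial test on direction, one common companion to vote counting, though not prescribed by SWiM itself) asks how surprising the observed split would be if benefit and harm were equally likely.

```python
from math import comb

# Hypothetical per-study effect directions, recorded regardless of
# statistical significance (illustrative values only).
directions = ["benefit", "benefit", "harm", "benefit",
              "benefit", "harm", "benefit"]

n = len(directions)
k_benefit = sum(d == "benefit" for d in directions)

# One-sided sign test: probability of k_benefit or more studies
# favouring benefit if direction were really 50:50 under the null.
p_one_sided = sum(comb(n, k) for k in range(k_benefit, n + 1)) / 2 ** n

print(f"{k_benefit}/{n} studies favour benefit "
      f"(sign-test p = {p_one_sided:.3f})")
```

Note that every study showing benefit counts toward `k_benefit`, including underpowered ones, which is exactly the rule the guidance insists on.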

How should reviewers handle heterogeneity and “apples and oranges” differences among studies?

SWiM treats heterogeneity as a reason to think carefully about grouping, not as a reason to abandon synthesis. The guidance uses the “apples and oranges are not the same thing” idea, but argues they can still be grouped at a higher level (both are fruit) if that structure is rational. Reviewers should group studies in a way that is useful for intended evidence users, considering study characteristics, design, and risk of bias, and ideally consulting topic experts.

What must be reported to maintain reader trust when meta-analysis isn’t done?

SWiM emphasizes that methods must be reported with the same clarity expected for meta-analysis. Reviewers should explicitly state the standardized metric, the synthesis method(s), the rationale for study grouping, and which studies contributed to each synthesis component. If readers can’t assess what was done, they can’t judge whether robust methods were applied, even if the underlying work was careful.

How do alternative synthesis methods change the question the review answers?

SWiM requires language that matches the method. Meta-analysis targets an average effect. Summarizing effect sizes targets the range and distribution of effects. Vote counting targets whether there is any evidence of an effect (not average magnitude). Combining p values targets whether there is evidence of an effect in at least one study.
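
The p-value combination described above can be illustrated with Fisher's method, one standard way to combine one-sided p values (the specific method is this sketch's assumption; the source only says p values are combined). For even degrees of freedom the chi-square survival function has a closed form, so no statistics library is needed; the study p values are invented for illustration.

```python
import math

def to_one_sided(p_two_sided, favours_hypothesis):
    """Convert a two-sided p value to one-sided, given whether the
    observed effect lies in the hypothesised direction."""
    return p_two_sided / 2 if favours_hypothesis else 1 - p_two_sided / 2

def fisher_combined_p(p_values):
    """Fisher's method: X = -2 * sum(ln p) ~ chi-square with 2k df
    under the null that no study shows an effect."""
    k = len(p_values)
    half = -sum(math.log(p) for p in p_values)  # X/2
    # P(chi2_{2k} > X) = exp(-X/2) * sum_{i=0}^{k-1} (X/2)^i / i!
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

# Hypothetical studies: (two-sided p value, effect in hypothesised direction?)
studies = [(0.04, True), (0.20, True), (0.60, True)]
one_sided = [to_one_sided(p, d) for p, d in studies]
print(f"Combined p = {fisher_combined_p(one_sided):.4f}")
```

A small combined p here supports only the claim that at least one study shows evidence of an effect, matching the question this method answers.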

Review Questions

  1. If included studies report only direction of effect, what SWiM method is appropriate and what classification rule should be used regarding statistical significance?
  2. What information must be provided so readers can evaluate credibility when meta-analysis is not performed?
  3. How does regrouping studies for synthesis (e.g., from “apples vs oranges” to a higher-level “fruit” grouping) affect what conclusions are defensible?

Key Points

  1. Meta-analysis is often missing even in reviews of randomized trials, so quantitative synthesis may still be necessary when inputs for meta-analysis aren't available.
  2. Legitimate barriers include missing required data for meta-analysis; lack of know-how should be addressed by seeking expertise rather than by abandoning synthesis.
  3. SWiM selects an alternative method based on the standardized metrics available across studies: effect sizes, direction of effect, or p values.
  4. Study grouping should be justified for decision makers and should handle heterogeneity transparently, often with topic expertise.
  5. Vote counting based on direction of effect should include all studies favoring benefit or harm, not only statistically significant results.
  6. Combining p values requires compatible directional hypotheses and appropriate use of one-sided p values (or conversion from two-sided).
  7. Reporting must match the question each method answers and should explicitly acknowledge limitations, including changes from the protocol.

Highlights

SWiM reframes “no meta-analysis” as a cue to use structured alternative synthesis methods rather than narrative-only reporting.
The method choice is driven by what standardized metrics the included studies actually report—effect sizes, direction of effect, or p values.
Vote counting should not be restricted to statistically significant studies; doing so can bias results by excluding underpowered trials.
Transparent study grouping is central: heterogeneity is handled by grouping studies in a way that remains useful for evidence users.
Alternative methods answer different questions than meta-analysis—especially around average effect magnitude versus evidence of effect.

Topics

  • SWiM Reporting
  • Meta-analysis Alternatives
  • Heterogeneity Grouping
  • Vote Counting
  • Combining P Values

Mentioned

  • SWiM