Synthesis Without Meta-analysis (SWiM) reporting guideline
Based on Evidence Synthesis Ireland's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Synthesis Without Meta-analysis (SWiM) reporting guidance targets a common problem in evidence reviews: many systematic reviews of quantitative intervention studies cannot—or should not—run a meta-analysis, yet still need a transparent, rigorous way to combine results. The core message is that “no meta-analysis” is not “no synthesis.” Instead, reviewers should choose an alternative method based on what standardized metrics the included studies actually report, then report the grouping decisions and results in a way that lets readers judge credibility.
The guidance starts with decision-making. Meta-analysis is often assumed to be the default, yet it is frequently absent even from reviews of randomized controlled trials: roughly half of such reviews do not include one. Reasons such as "not knowing how" are not treated as acceptable, since teams can seek statistical expertise or use established methods. The more legitimate barrier is missing inputs: studies may provide effect estimates without the additional numbers required for meta-analysis (such as standard errors or confidence intervals), or they may report outcomes in formats that do not align with standard effect-size synthesis.
Rather than defaulting to narrative-only summaries, SWiM points reviewers to alternative quantitative synthesis approaches. The choice depends on the standardized metrics available across studies:
- If studies report intervention effect estimates (e.g., risk ratios, odds ratios, standardized mean differences), reviewers can summarize effect sizes using descriptive statistics such as medians, ranges, and interquartile ranges.
- If studies only provide the direction of effect (benefit vs. harm vs. no effect), reviewers can use vote counting based on direction of effect, counting all studies favoring benefit rather than only the statistically significant ones, because restricting the count to significant results biases the synthesis.
- If studies report p values (with compatible directional hypotheses), reviewers can combine p values, using one-sided p values or converting two-sided p values to one-sided where appropriate.
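The three fallbacks above can be sketched in standard-library Python. All study data below are made up for illustration, and Stouffer's method stands in here for whichever established p-combination technique a review actually chooses:

```python
import math
from statistics import median, quantiles, NormalDist

# 1) Summarize reported effect sizes descriptively (hypothetical risk ratios).
risk_ratios = [0.70, 0.85, 0.91, 1.02, 0.78, 0.88]
q1, med, q3 = quantiles(risk_ratios, n=4)  # quartile cut points
print(f"median RR = {med:.2f}, IQR = [{q1:.2f}, {q3:.2f}], "
      f"range = [{min(risk_ratios):.2f}, {max(risk_ratios):.2f}]")

# 2) Vote counting by direction of effect: count every study whose point
# estimate favors benefit, significant or not, then ask how surprising that
# count is under a 50/50 null (one-sided sign test).
def sign_test_p(favoring: int, total: int) -> float:
    """P(X >= favoring) for X ~ Binomial(total, 0.5)."""
    return sum(math.comb(total, i) for i in range(favoring, total + 1)) / 2**total

directions = ["benefit", "benefit", "harm", "benefit", "benefit", "benefit"]
k = directions.count("benefit")
print(f"{k}/{len(directions)} favor benefit, "
      f"one-sided p = {sign_test_p(k, len(directions)):.3f}")

# 3) Combine one-sided p values (Stouffer's method); all p values must
# correspond to compatible directional hypotheses across studies.
def stouffer(p_values):
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in p_values) / math.sqrt(len(p_values))
    return 1 - nd.cdf(z)

one_sided_p = [0.04, 0.20, 0.11, 0.30]
print(f"combined one-sided p = {stouffer(one_sided_p):.3f}")
```

Note that each method answers a different question: the descriptive summary characterizes the distribution of effects, the sign test asks whether there is any evidence of a directional effect, and the combined p value asks whether at least one study shows an effect.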
A major emphasis falls on heterogeneity and study grouping. When studies differ, as they often do in public health and population health, reviewers should not treat "apples and oranges" as directly combinable, but they can still group apples and oranges as fruit by finding a higher-level structure that is useful for decision makers. SWiM's reporting therefore begins here: explain how studies were grouped, the rationale for that structure, and how topic expertise informed the grouping.
The guidance then stresses transparency as a trust-building mechanism. If readers can’t see what was done—what methods were used, what metrics were synthesized, and which studies fed each synthesis component—confidence drops even when methods are sound. SWiM therefore includes nine reporting items, with practical expectations for data presentation. Tables should reflect the synthesis structure (not alphabetical author order), and reviewers may use familiar visual tools when feasible—such as effect-direction plots (with study-by-study triangles indicating positive, negative, or null outcomes), even when a classic meta-analytic forest plot isn’t appropriate.
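An effect-direction plot of the kind mentioned above can be rendered even as plain text. This is a minimal sketch with hypothetical study names and directions, not a SWiM-prescribed tool; the triangle convention (▲ favors intervention, ▼ favors comparator, ● no clear direction) follows the description in the text:

```python
# Marker convention: one row per study, a triangle showing direction.
MARKERS = {"positive": "▲", "negative": "▼", "null": "●"}

def effect_direction_rows(studies):
    """Return one aligned text row per (study, direction) pair."""
    width = max(len(name) for name, _ in studies)
    return [f"{name:<{width}}  {MARKERS[direction]}" for name, direction in studies]

# Hypothetical studies, ordered by the synthesis grouping (not author name).
studies = [
    ("Study A (2015)", "positive"),
    ("Study B (2017)", "positive"),
    ("Study C (2018)", "null"),
    ("Study D (2020)", "negative"),
]
for row in effect_direction_rows(studies):
    print(row)
```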
Finally, SWiM requires reviewers to match language to the question each method answers. Meta-analysis targets average effect; alternative methods answer different questions—such as the distribution of effects, whether any evidence of effect exists, or whether at least one study shows evidence. Limitations should be acknowledged explicitly, including how changes from the protocol (like regrouping studies) can constrain what outcomes can be interpreted.
Cornell Notes
SWiM reporting guidance helps reviewers do quantitative synthesis when meta-analysis isn’t possible. The method choice depends on what standardized metrics the included studies report: effect sizes can be summarized descriptively, direction-of-effect data can be vote-counted (including non-significant benefit/harm), and p values can be combined using one-sided p values or converted equivalents. A central requirement is transparent study grouping to handle heterogeneity—grouping should be justified for decision makers, often with topic expertise. Reporting must clearly state the metric, synthesis method, and which studies contributed to each synthesis component, because unclear methods undermine trust. The limitations section should also reflect what these methods can and cannot answer (e.g., no average effect magnitude).
- Why isn’t meta-analysis always available in systematic reviews of intervention studies?
- How does SWiM decide which alternative synthesis method to use?
- What’s the key rule for vote counting based on direction of effect?
- How should reviewers handle heterogeneity and “apples and oranges” differences among studies?
- What must be reported to maintain reader trust when meta-analysis isn’t done?
- How do alternative synthesis methods change the question the review answers?
Review Questions
- If included studies report only direction of effect, what SWiM method is appropriate and what classification rule should be used regarding statistical significance?
- What information must be provided so readers can evaluate credibility when meta-analysis is not performed?
- How does regrouping studies for synthesis (e.g., from “apples vs oranges” to a higher-level “fruit” grouping) affect what conclusions are defensible?
Key Points
1. Meta-analysis is often missing even in reviews of randomized trials, so quantitative synthesis may still be necessary when inputs for meta-analysis aren’t available.
2. Legitimate barriers include missing required data for meta-analysis; lack of know-how should be addressed by expertise rather than by abandoning synthesis.
3. SWiM selects an alternative method based on standardized metrics available across studies: effect sizes, direction of effect, or p values.
4. Study grouping should be justified for decision makers and should handle heterogeneity transparently, often with topic expertise.
5. Vote counting based on direction of effect should include all studies favoring benefit/harm, not only statistically significant results.
6. Combining p values requires compatible directional hypotheses and appropriate use of one-sided p values (or conversion from two-sided).
7. Reporting must match the question each method answers and should explicitly acknowledge limitations, including changes from the protocol.
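The two-sided-to-one-sided conversion mentioned in the p-value point follows a standard rule: halve the two-sided p value when the observed effect runs in the hypothesized direction, otherwise use one minus that half. A minimal sketch (the function name is ours, for illustration only):

```python
def to_one_sided(p_two_sided: float, effect_in_hypothesized_direction: bool) -> float:
    """Convert a two-sided p value to one-sided, given the observed direction."""
    half = p_two_sided / 2
    return half if effect_in_hypothesized_direction else 1 - half

# A two-sided p of 0.10 becomes 0.05 if the effect points the hypothesized
# way, and 0.95 if it points the opposite way.
print(to_one_sided(0.10, True), to_one_sided(0.10, False))
```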