Selective Reporting & Misrepresentation of Data | eSupport for Research | 2022 | Dr. Akash Bhoi
Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Selective reporting suppresses negative or undesirable findings, often leading to biased analysis or writing that undermines reproducibility.
Briefing
Selective reporting—often tied to outcome reporting bias and reporting bias—happens when researchers deliberately or carelessly present only part of the results from a study, leaving out negative or “undesirable” findings. The motivation is frequently to suppress outcomes that don’t fit expectations, but the downstream effect is serious: the published findings can become skewed by bias during analysis or writing, making the results hard or impossible to reproduce. When selective reporting enters the scientific record, reproducibility suffers because readers and other researchers are not seeing the full evidence base that the original study produced.
The discussion also links selective reporting to a broader ecosystem of biases that can distort research from start to finish. Design bias can begin at the planning stage if the research team uses a limited or non-representative population or dataset—such as relying on a narrow healthcare dataset rather than a sufficiently broad demographic base. Procedural bias can arise after the design is approved, when researchers steer the experiment through a predetermined path rather than following the intended procedure neutrally. Personal bias is described as especially difficult to avoid because it stems from the researcher’s character and may go unrecognized even by the people involved.
Once results are ready for dissemination, reporting bias can expand into selective evidence dissemination. The framing of what gets reported can lead to fragmented reporting, where only the “preferred” subset of analyses is emphasized. Selective publication is another pathway: work deemed “not suitable” for a journal may be pushed into conference proceedings, abstracts, or other outlets, creating publication bias when the weight of evidence across venues becomes uneven. The transcript also highlights inclusion bias during literature reviews, where authors may select literature or databases they are comfortable with, exclude recent findings that don’t align with existing work, or rely on non-existent, outdated, or poorly cited databases. These choices can prevent proper comparison, cross-validation, and a fair synthesis of prior evidence.
All of these biases—reporting, publication, and inclusion—can compound into dissemination bias, distorting what the scientific community ultimately learns. The final section shifts to misrepresentation of data, defined as communicating honestly collected data in a deceptive way. Misrepresentation can involve misleading interpretation, unfounded extrapolation beyond what the data support (for example, extending findings to populations not actually studied), and ignoring limitations that should constrain conclusions.
A practical way to spot misrepresentation is through comparison: when a study follows a proper protocol and statistical analysis plan, the accurate article should report the pre-specified methods, focus interpretation on the primary analyses, highlight limitations, and avoid overreach. A distorted article, by contrast, may misreport methods and results and misinterpret findings, sometimes through negligence and sometimes intentionally to achieve a desirable outcome or “beautify” the narrative. The transcript closes by emphasizing that readers can often detect non-reproducibility, and that adherence to publication ethics guidance (including the COPE-style expectations mentioned) helps reduce these problems, including redundant or duplicate publication practices.
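The protocol-versus-article comparison described above can be sketched as a simple automated check. This is a minimal illustration only, not a real screening tool: the function name, outcome labels, and data below are all hypothetical, and real outcome matching would need fuzzier comparison than exact strings.

```python
# Hypothetical sketch: flag possible outcome reporting bias by comparing
# a protocol's pre-specified outcomes against the outcomes actually
# reported in the article. All names and data here are illustrative.

def find_unreported_outcomes(prespecified, reported):
    """Return (pre-specified outcomes missing from the report,
    outcomes reported without pre-specification), both sorted."""
    pre = {o.strip().lower() for o in prespecified}
    rep = {o.strip().lower() for o in reported}
    return sorted(pre - rep), sorted(rep - pre)

protocol_outcomes = ["Overall survival", "Adverse events", "Quality of life"]
article_outcomes = ["Overall survival", "Post-hoc subgroup response"]

missing, unplanned = find_unreported_outcomes(protocol_outcomes, article_outcomes)
print("Pre-specified but unreported:", missing)
print("Reported but not pre-specified:", unplanned)
```

A non-empty "pre-specified but unreported" list is the selective-reporting signal the transcript describes; a non-empty "reported but not pre-specified" list flags possible post-hoc emphasis that should be labeled as exploratory.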
Cornell Notes
Selective reporting and misrepresentation distort the scientific record by presenting only favorable or incomplete evidence. Selective reporting—closely linked to outcome reporting bias—can suppress negative findings, skew analysis or writing, and undermine reproducibility. Bias can enter at multiple stages: design bias from limited populations, procedural bias from predetermined experimental paths, and personal bias that may go unnoticed. During dissemination, selective publication and inclusion bias in literature reviews can further skew what gets compared and synthesized. Misrepresentation of data includes misleading interpretation, unfounded extrapolation, and ignoring limitations, often detectable by comparing pre-specified methods and primary analyses in an accurate article versus a distorted one.
What makes selective reporting unethical, and why does it threaten reproducibility?
How do design bias, procedural bias, and personal bias differ in where they enter a study?
What is selective publication, and how can it create publication bias?
How does inclusion bias affect literature reviews and evidence synthesis?
What counts as misrepresentation of data in this framework?
How can readers detect distorted reporting using the protocol-method-results-interpretation structure?
Review Questions
- Which stage of research is most associated with design bias, and what example of dataset limitation is given?
- How do selective publication and inclusion bias each distort what other researchers can learn from the literature?
- What specific elements of an accurate article (methods, primary analysis, limitations) are used as a checklist to spot misrepresentation?
Key Points
1. Selective reporting suppresses negative or undesirable findings, often leading to biased analysis or writing that undermines reproducibility.
2. Outcome reporting bias is treated as a common research ethics problem because incomplete reporting prevents others from verifying results.
3. Bias can originate early (design bias from limited or non-representative populations), during execution (procedural bias from predetermined experimental paths), or from researcher behavior (personal bias).
4. Selective publication can shift evidence into conferences or abstracts instead of journals, creating publication bias through uneven evidence distribution.
5. Inclusion bias in literature reviews can come from choosing comfortable or aligned databases, excluding recent findings, or using outdated/non-existent sources.
6. Misrepresentation of data includes misleading interpretation, unfounded extrapolation beyond supported populations, and ignoring limitations that should constrain conclusions.
7. Comparing pre-specified methods, primary analyses, and limitations between an accurate protocol-based article and a distorted one helps identify misreporting and non-reproducibility risks.