3 Mistakes to Avoid When Presenting Qualitative Research Findings
Based on qualitative researcher Dr Kriukow's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
Qualitative findings often fail not because the data are weak, but because the writing blurs what the evidence actually comes from and overreaches beyond what the study can support. Three recurring problems show up in results chapters: unclear sourcing of claims, jumping to big conclusions from thin evidence, and implying causal relationships that qualitative designs usually can’t establish.
The first mistake is lack of clarity about where statements originate. Readers need to know whether a passage reflects participants’ own beliefs and experiences, prior literature, the researcher’s interpretation, or general “facts.” A common pattern is starting a section with a brief summary of a theme, then smoothly shifting from a specific participant’s account (e.g., “Participant 7 explained…”) into broader claims that sound like general knowledge (e.g., “abuse often happens in the UK” or “local people perceive migrants this way”). When that transition isn’t signposted, the chapter becomes hard to trust: it’s unclear whether the text is still reporting participants’ perspectives or has moved into literature-based or generalized assertions. The fix is to keep results and discussion clearly separated unless the chosen structure intentionally blends them, and to repeatedly cue the reader when the content is still grounded in individual or collective participant viewpoints (e.g., “most participants,” “some participants,” or “Participant X believed…”).
The second mistake is making sweeping claims—sometimes immediately after one quote or a small cluster of opinions. Qualitative research can generate meaningful insights, but it rarely supports definitive “this proves” conclusions. A frequent red flag is wording that turns implications into certainty: sentences like “this provides evidence that we should change policy,” “this shows we must do training,” or “this is evidence of a big problem” can overstate what the data justify. Even when the intended goal is practical change, the language should stay cautious and proposition-based—using phrasing such as “based on this, we may consider…” or “it is arguably…”—and should avoid presenting a single participant’s experience as a universal fact.
The third mistake is implying cause-and-effect. Qualitative studies can explore perceived relationships and offer hypotheses, but they are a risky basis for causal claims. The transcript highlights how easily confounding factors can explain patterns that look causal. An example: if all men in a sample love football and all women hate it, it's tempting to claim gender influences football preferences. Yet the real drivers could be country of origin, age differences, or other contextual variables—factors a qualitative study may not be designed to disentangle. The same caution applies to "test results" or "past experiences" leading to current struggles: qualitative work can suggest plausible mechanisms, but it shouldn't present them as established causal effects.
As a practical safeguard, the transcript recommends member checking. By returning to participants—at different stages and in different forms—to confirm whether interpretations match what they meant, researchers can strengthen confidence in nuanced claims, including perceived causal relationships. Member checking won't justify nationwide policy certainty, but it can help validate whether a proposed interpretation reflects participants' own understanding, allowing the writing to stay grounded while still offering thoughtful, testable propositions.
Cornell Notes
The transcript identifies three common ways qualitative results chapters overreach: (1) statements lose their source, making it unclear whether claims come from participants, literature, or interpretation; (2) writers jump from a small amount of evidence to big, policy-level conclusions; and (3) writers imply causal relationships that qualitative designs generally can’t prove. Clear signposting—especially when moving from participant accounts to broader claims—helps readers track what is grounded in data. When making recommendations, the language should stay cautious and framed as propositions rather than facts. Member checking can also help validate whether interpretations (including perceived cause-and-effect) align with what participants actually meant.
How can a results chapter accidentally mix participant evidence with generalized claims, and how should it be corrected?
Why is it risky to turn a single quote or a handful of opinions into strong conclusions?
What makes causal claims especially problematic in qualitative research?
How can member checking strengthen qualitative claims without overstating them?
What wording choices help keep qualitative recommendations appropriately cautious?
Review Questions
- In a results chapter, what signals would you look for to determine whether a claim is grounded in participant testimony versus literature or interpretation?
- Write two example sentences: one that overstates qualitative evidence into certainty, and one revised version that keeps the same idea but uses proposition-based, cautious language.
- What alternative explanations could undermine a seemingly “obvious” causal pattern in a qualitative sample, and how would you reflect that uncertainty in your writing?
Key Points
1. Keep results grounded in a clearly identified source—participant accounts, literature, or interpretation—so readers never lose track of where claims come from.
2. Avoid blending participant evidence into generalized statements without explicit signposting or clear structural separation from discussion.
3. Don't jump from one quote or a small set of opinions to sweeping, policy-level conclusions presented as facts.
4. Use cautious, hedged wording for recommendations (e.g., "may consider," "arguably," "based on this"), treating them as propositions rather than certainty.
5. Be extremely careful with cause-and-effect language; qualitative studies often can't rule out confounding explanations.
6. Member checking can validate whether interpretations align with participants' intended meanings, including perceived causal relationships, but it won't justify universal claims.