
3 Mistakes to Avoid When Presenting Qualitative Research Findings


Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Keep results grounded in a clearly identified source—participant accounts, literature, or interpretation—so readers never lose track of where claims come from.

Briefing

Qualitative findings often fail not because the data are weak, but because the writing blurs what the evidence actually comes from and overreaches beyond what the study can support. Three recurring problems show up in results chapters: unclear sourcing of claims, jumping to big conclusions from thin evidence, and implying causal relationships that qualitative designs usually can’t establish.

The first mistake is lack of clarity about where statements originate. Readers need to know whether a passage reflects participants’ own beliefs and experiences, prior literature, the researcher’s interpretation, or general “facts.” A common pattern is starting a section with a brief summary of a theme, then smoothly shifting from a specific participant’s account (e.g., “Participant 7 explained…”) into broader claims that sound like general knowledge (e.g., “abuse often happens in the UK” or “local people perceive migrants this way”). When that transition isn’t signposted, the chapter becomes hard to trust: it’s unclear whether the text is still reporting participants’ perspectives or has moved into literature-based or generalized assertions. The fix is to keep results and discussion clearly separated unless the chosen structure intentionally blends them, and to repeatedly cue the reader when the content is still grounded in individual or collective participant viewpoints (e.g., “most participants,” “some participants,” or “Participant X believed…”).

The second mistake is making sweeping claims—sometimes immediately after one quote or a small cluster of opinions. Qualitative research can generate meaningful insights, but it rarely supports definitive “this proves” conclusions. A frequent red flag is wording that turns implications into certainty: sentences like “this provides evidence that we should change policy,” “this shows we must do training,” or “this is evidence of a big problem” can overstate what the data justify. Even when the intended goal is practical change, the language should stay cautious and proposition-based—using phrasing such as “based on this, we may consider…” or “it is arguably…”—and should avoid presenting a single participant’s experience as a universal fact.

The third mistake is implying cause-and-effect. Qualitative studies can explore perceived relationships and offer hypotheses, but they are risky for causal claims. The transcript highlights how easily confounding factors can explain patterns that look causal. An example: if all men in a sample love football and all women hate it, it’s tempting to claim gender influences football preferences. Yet the real drivers could be country of origin, age differences, or other contextual variables—factors a qualitative study may not be designed to disentangle. The same caution applies to “test results” or “past experiences” leading to current struggles: qualitative work can suggest plausible mechanisms, but it shouldn’t present them as established causal effects.

As a practical workaround, the transcript recommends member checking. By returning to participants—at different stages and in different forms—to confirm whether interpretations match what they meant, researchers can strengthen confidence in nuanced claims, including perceived causal relationships. Member checking won’t justify nationwide policy certainty, but it can help validate whether a proposed interpretation reflects participants’ own understanding, allowing the writing to stay grounded while still offering thoughtful, testable propositions.

Cornell Notes

The transcript identifies three common ways qualitative results chapters overreach: (1) statements lose their source, making it unclear whether claims come from participants, literature, or interpretation; (2) writers jump from a small amount of evidence to big, policy-level conclusions; and (3) writers imply causal relationships that qualitative designs generally can’t prove. Clear signposting—especially when moving from participant accounts to broader claims—helps readers track what is grounded in data. When making recommendations, the language should stay cautious and framed as propositions rather than facts. Member checking can also help validate whether interpretations (including perceived cause-and-effect) align with what participants actually meant.

How can a results chapter accidentally mix participant evidence with generalized claims, and how should it be corrected?

A common failure mode is starting a theme with a short overview, then moving from a specific participant’s account (e.g., “Participant 7 explained…”) into statements that sound like general knowledge (e.g., “abuse often happens in the UK” or “local people perceive migrants this way”) without signaling the shift. The correction is to keep results and discussion clearly separated unless the structure intentionally blends them, and to repeatedly cue the reader when the text is still reporting participant perspectives—using wording like “most participants,” “some participants,” or “Participant X believed/said.” If the chapter shifts to literature-based or general claims, it should be explicitly framed as such.

Why is it risky to turn a single quote or a handful of opinions into strong conclusions?

Qualitative data can illuminate patterns and meanings, but a small number of accounts usually can’t support certainty-level claims. The transcript warns against sentences that convert implications into facts, such as “this provides evidence that we should change policy” or “this shows there is a big problem.” Even when the goal is meaningful improvement, the writing should use cautious, proposition-based language—e.g., “based on this, we may consider…” or “it is arguably…”—so the recommendation is not presented as an established, universal truth.

What makes causal claims especially problematic in qualitative research?

Causal claims require ruling out alternative explanations, but qualitative studies often aren’t designed for that. The transcript’s football example illustrates the trap: if all men love football and all women hate it, it’s tempting to claim gender causes preference. Yet other variables—such as country of origin where football is less popular, age differences, or other contextual factors—could explain the pattern. Qualitative work can propose plausible mechanisms or perceived relationships, but it should avoid presenting cause-and-effect as settled fact.

How can member checking strengthen qualitative claims without overstating them?

Member checking involves returning to participants at different stages of the research to confirm whether interpretations match what they meant. The transcript notes that this won’t justify sweeping claims like nationwide policy changes, but it can clarify whether the researcher’s assumptions—such as perceived causal relationships—reflect participants’ own understanding. For instance, asking a participant whether a described factor “played a role” and whether their account supports that interpretation can provide grounded support for a cautious proposition.

What wording choices help keep qualitative recommendations appropriately cautious?

The transcript emphasizes toning down certainty. Instead of “this is evidence that we must change everything,” use language that frames recommendations as options or hypotheses: “based on this, it is arguably…,” “we may consider…,” or “this suggests…” The goal is to distinguish between what participants experienced and what the researcher is proposing as a next step, without presenting the proposal as a proven fact.

Review Questions

  1. In a results chapter, what signals would you look for to determine whether a claim is grounded in participant testimony versus literature or interpretation?
  2. Write two example sentences: one that overstates qualitative evidence into certainty, and one revised version that keeps the same idea but uses proposition-based, cautious language.
  3. What alternative explanations could undermine a seemingly “obvious” causal pattern in a qualitative sample, and how would you reflect that uncertainty in your writing?

Key Points

  1. Keep results grounded in a clearly identified source—participant accounts, literature, or interpretation—so readers never lose track of where claims come from.

  2. Avoid blending participant evidence into generalized statements without explicit signposting or clear structural separation from discussion.

  3. Don’t jump from one quote or a small set of opinions to sweeping, policy-level conclusions presented as facts.

  4. Use cautious, hedged wording for recommendations (e.g., “may consider,” “arguably,” “based on this”), treating them as propositions rather than certainties.

  5. Be extremely careful with cause-and-effect language; qualitative studies often can’t rule out confounding explanations.

  6. Member checking can validate whether interpretations align with participants’ intended meanings, including perceived causal relationships, but it won’t justify universal claims.

Highlights

A major trust problem in qualitative results is unclear sourcing—readers must know when text is still reporting participant views versus shifting to general knowledge or literature.
Big claims often appear right after thin evidence; replacing “this proves” language with proposition-based phrasing keeps conclusions defensible.
Causal language is a high-risk shortcut in qualitative writing; patterns that look causal can be explained by context, sampling differences, or confounds.
Member checking can help confirm whether an interpretation—sometimes even a perceived causal link—matches what participants meant, strengthening cautious claims.
