
Qualitative data analysis - Coding, what to do after coding, how to develop theoretical concepts...


Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Write a clear description for each code, including what it captures and what it excludes in the data.

Briefing

Turning coded qualitative data into theoretical concepts starts with disciplined code understanding—then moves into cautious theorizing, and finally demands evidence-testing. The core move is to treat the coding framework as the foundation for theory building: before any abstract claims, the analyst writes clear descriptions of each code, including what it captures and what it excludes. From there, the analysis deepens by mapping where each code appears, when it shows up, and which participants mention the issue versus those who do not. That participant-level patterning becomes the raw material for early theorizing, including why some participants raise certain issues while others remain silent.
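The video describes this patterning as a manual, interpretive step; purely as an illustration of the bookkeeping involved (all codes and participant IDs below are invented), tracking which participants mention a code and which do not can be sketched like this:

```python
# Hypothetical coded data: each code maps to the set of participants
# whose transcripts were tagged with it. Names are invented examples.
coded_segments = {
    "workload_stress": {"P1", "P2", "P4"},
    "peer_support":    {"P2", "P3"},
}
all_participants = {"P1", "P2", "P3", "P4", "P5"}

def mention_pattern(code):
    """Return (participants who mention the code, participants who do not)."""
    mentions = coded_segments.get(code, set())
    return sorted(mentions), sorted(all_participants - mentions)

mentioners, silent = mention_pattern("workload_stress")
print(mentioners)  # ['P1', 'P2', 'P4']
print(silent)      # ['P3', 'P5']
```

The "silent" group is exactly the raw material the video points to for early theorizing: why did P3 and P5 never raise this issue?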

Once these code descriptions and distributions are in place, the analyst shifts from individual codes to relationships among them. The process involves looking across the dataset as a whole, describing broader coding categories (more abstract groupings that bundle multiple codes), and then asking whether one category seems to influence another. Importantly, this relationship-building remains an early-stage activity: it’s exploratory, iterative, and grounded in returning to the data to check whether the proposed connections hold up.

A pivotal next step is creating models—often basic, even when the coding framework is already detailed. When the analyst feels stuck despite having a thorough coding system, building a simple model can reframe the work. The model functions like a structured hypothesis: categories and codes are arranged on the page, and arrows are used to experiment with possible causal or directional influences between phenomena. This is described as “playing with the codes,” using interpretation and creativity to generate candidate explanations, even when the ideas are not yet backed by strong evidence.
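The model the video describes is drawn on paper, not programmed; as a minimal sketch of the same idea (category names and influence labels are hypothetical), the arrows can be represented as labelled edges between categories:

```python
# A hypothetical arrows-and-boxes model: categories as nodes, each arrow
# a candidate (not yet evidenced) influence between two phenomena.
model = {
    ("workload_stress", "disengagement"): "increases",
    ("peer_support", "disengagement"):    "reduces",
}

def describe(model):
    """Render each proposed arrow as a readable hypothesis to test."""
    return [f"{source} {label} {target}"
            for (source, target), label in model.items()]

for hypothesis in describe(model):
    print(hypothesis)
```

Each rendered line is a structured hypothesis in the video's sense: something to take back to the data, not a finding.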

The model-building phase is explicitly hypothetical, but it is not meant to stay speculative. The analyst then has to challenge the emerging ideas by actively seeking evidence that would dismiss them as well as evidence that would support them. The instruction is to avoid clinging to a theory simply because it feels compelling; instead, the analyst should put equal effort into trying to disprove it. Whether the idea survives or collapses under scrutiny, the payoff is the same: more analysis, deeper familiarity with the dataset, and clearer theoretical concepts that can later be tested against the evidence. In short, theory development is portrayed as a cycle—understand codes, theorize relationships, model possibilities, then rigorously test those possibilities against the data.

Cornell Notes

The path from qualitative coding to theoretical concepts begins with fully understanding the coding framework. The analyst writes code descriptions that explain what each code includes, what happens under it, and where and when it appears, noting which participants mention the issue and which do not. From these patterns, the analyst theorizes possible reasons for differences and then looks for relationships among more abstract coding categories. When stuck, the analyst creates basic models using categories arranged on paper and arrows to test imagined influences between phenomena. Finally, every hypothetical idea must be challenged with evidence—either supported or dismissed—so the analysis deepens and the dataset becomes more familiar.

How does the analyst turn a coding framework into the starting point for theory building?

The analyst begins by writing descriptions of each code—summarizing what the code captures, what it includes, and what it represents in the data. Then the analyst goes deeper by tracking where each code appears and when it appears, including which participants mention the issue described by the code and which participants do not. These participant-level patterns become the basis for early theorizing about why some people raise certain issues while others stay silent.

What does “theorizing” look like after coding, before any strong claims are made?

Theorizing starts as exploratory interpretation. After mapping code occurrences and participant mentions, the analyst considers possible reasons for observed differences (for example, why certain participants did not talk about an issue while others did). The analyst then looks for potential relationships between codes and between broader, more abstract categories that group multiple codes, while repeatedly returning to the data to see whether those relationships plausibly hold.

Why create a model if the coding framework is already detailed?

A detailed coding framework can still leave the analyst unsure what to do next. Creating a basic model helps translate detailed coding into an organized explanation. The analyst arranges categories on the page and experiments with arrows to represent possible influences between categories or phenomena. This model is treated as a first attempt—an interpretive structure that can change as the analysis progresses.

How are hypothetical models supposed to be used during analysis?

Models are intentionally hypothetical and may be based largely on interpretation and creativity rather than strong evidence at first. The purpose is to generate ideas for what to investigate next. The analyst uses the model to propose candidate explanations and then plans to test those explanations against the dataset.

What is the evidence-testing step, and how should it be approached?

After generating ideas from models, the analyst must challenge them with evidence. That means actively trying to dismiss the theory as well as trying to support it. The analyst should not stick with an idea just because it feels good; instead, they should search for evidence that contradicts it. The outcome—support or dismissal—still advances the analysis by increasing familiarity with the data and sharpening theoretical concepts.
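As a purely illustrative sketch of this even-handed testing (the excerpts and stance labels are invented), one way to keep supporting and contradicting evidence on equal footing is to tally both explicitly:

```python
# Hypothetical evidence log: each coded excerpt is labelled as supporting
# or contradicting one candidate hypothesis. All data is invented.
excerpts = [
    {"participant": "P1", "stance": "supports"},
    {"participant": "P3", "stance": "contradicts"},
    {"participant": "P4", "stance": "supports"},
]

def tally(excerpts):
    """Count evidence for and against, giving neither side priority."""
    counts = {"supports": 0, "contradicts": 0}
    for excerpt in excerpts:
        counts[excerpt["stance"]] += 1
    return counts

print(tally(excerpts))  # {'supports': 2, 'contradicts': 1}
```

Whatever the tally shows, the analysis advances: a dismissed hypothesis still deepens familiarity with the dataset.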

Review Questions

  1. What specific information about each code (beyond its label) does the analyst record to support later theorizing?
  2. How does the analyst move from individual codes to broader categories and then to proposed relationships?
  3. Why does the analyst treat model-building as hypothetical, and what must happen before any claim becomes credible?

Key Points

  1. Write clear descriptions for each code, including what it captures and what it excludes in the data.

  2. Track where and when each code appears, and note which participants mention the issue versus those who do not.

  3. Use participant-level patterns to generate early theorizing about why differences occur.

  4. Look across the dataset to propose relationships among broader, more abstract categories, then check those relationships against the data.

  5. When analysis stalls, create a basic model that arranges categories and uses arrows to test imagined influences.

  6. Challenge every emerging theoretical idea by searching for both supporting and disconfirming evidence, without clinging to favored explanations.

Highlights

  • Theory development starts by understanding the coding framework: code descriptions plus where/when codes appear and which participants mention them.
  • Early theorizing is grounded in patterns—especially differences in participant mentions—before any strong causal claims are made.
  • Building a simple model with categories and arrows can unlock progress even when coding is already detailed.
  • Hypothetical ideas must be actively challenged with evidence; support and dismissal both count as progress.

Topics

  • Coding Framework
  • Theoretical Concepts
  • Model Building
  • Evidence Testing
  • Qualitative Analysis