Watch this Before you use AI in your dissertation research / Qualitative data analysis with AI
Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
AI-assisted data analysis tools are increasingly popular for qualitative dissertation work, but they often fail at one non-negotiable requirement: producing a transparent audit trail of how results were reached. That gap matters because qualitative validity depends on showing that themes emerged from a structured, traceable process—especially coding—rather than from intuition or “it feels like” interpretations that can reflect bias.
In academia, the expectations for methodological rigor are particularly high. Students and researchers aiming to publish in academic journals are typically required to demonstrate credibility and trustworthiness through transparency. In practice, that means being able to document each step of analysis so an external reader—supervisor, examiner, or reviewer—can follow how codes were created and how those codes were developed into themes. Without that trace, it becomes harder to argue that findings are credible, and easier for concerns about researcher bias to surface, even when intentions are good.
A detailed coding approach is presented as the backbone of both rigor and validity. Coding provides structure and consistency: if the same person (or another analyst) repeats the process, the resulting codes and themes should be broadly similar because the analysis is built step-by-step rather than improvised. This repeatability is framed as a practical safeguard against bias and a way to strengthen confidence in the final thematic conclusions.
Online AI analysis platforms, by contrast, are described as fundamentally mismatched to audit-trail expectations. Even when they generate accurate or impressive outputs, they generally do not expose the intermediate reasoning or the coding workflow. Users may upload data and receive results, but they cannot inspect, reproduce, or expand the analysis in a way that supports transparency. The result is a “black box” problem: useful outputs without the documentation needed for academic scrutiny or for independent verification.
ChatGPT is singled out as an example where some audit-trail-like control can be created through an interactive workflow—issuing commands step by step and monitoring the back-and-forth. Still, consistency is flagged as a limitation: the same prompts may not reliably yield the same results, which undermines the repeatability that validity demands.
The practical recommendation is to prefer professional qualitative data analysis software that keeps the analyst in control of decisions and preserves a full audit trail. Established tools named include MAXQDA, NVivo, and ATLAS.ti. Which one to choose depends on access: researchers affiliated with an institution should use the platform their institution provides. For independent researchers without institutional access, the guidance favors ATLAS.ti as a currently reliable, professional, and user-friendly option, while acknowledging that the core decision should be driven by the ability to maintain transparency and rigor throughout coding and theme development.
Cornell Notes
Qualitative validity hinges on transparency—especially an audit trail showing how themes were produced from a structured coding process. Many AI data-analysis platforms can generate results, but they usually do not provide access to the intermediate steps (like coding), making it difficult to reproduce or verify findings. Detailed coding is presented as a way to reduce bias and improve consistency, since repeating the process should yield similar codes and themes. ChatGPT can be used more interactively to create some traceability, but consistency remains a challenge. For dissertation-level work, the guidance favors established qualitative analysis software (MAXQDA, NVivo, ATLAS.ti) where the analyst controls the workflow and can document every step.
What is an “audit trail,” and why is it central to qualitative validity?
How does structured coding strengthen validity beyond transparency?
Why do many online AI qualitative analysis platforms fall short for dissertation research?
How can ChatGPT be used to approximate an audit trail, and what remains problematic?
What software choice strategy is recommended for qualitative analysis?
Review Questions
- How would you justify the credibility of your themes to an external reviewer using the concept of audit trail?
- What specific weaknesses arise when an analysis tool provides results without exposing coding steps?
- Why might repeating a structured coding workflow produce more defensible findings than relying on intuition?
Key Points
1. Qualitative validity depends on transparency through an audit trail that documents how themes were derived from coding.
2. Detailed, structured coding reduces bias risk by preventing “jumping to conclusions” from raw data.
3. Structured coding improves consistency, so repeating the analysis should yield similar codes and themes.
4. Many online AI analysis platforms lack audit-trail access, functioning more like black boxes even when outputs look strong.
5. ChatGPT can be used interactively to create some traceability, but consistency is a major limitation.
6. For dissertation research, prefer established qualitative analysis software that keeps the analyst in control of decisions and documentation.
7. Choose software based on access (institutional licenses first); otherwise, ATLAS.ti is recommended as a reliable option.