Write the research Discussion chapter with ChatGPT
Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
A Discussion chapter must interpret the study’s findings, connect them to the literature already reviewed, and stay aligned with the study’s aims and research questions.
Briefing
A practical workflow for drafting a dissertation “Discussion” chapter with ChatGPT centers on one principle: accuracy comes from feeding the model your study’s structure in small, controlled pieces and then validating its output against your literature and your own ideas. The core claim is that ChatGPT can produce a discussion that aligns with the literature and results—without inventing details—if the user supplies the right inputs and follows a repeatable prompt-and-check process.
The Discussion chapter, as framed here, has three non-negotiable jobs. It must be grounded in the study’s findings from the Results chapter, it must connect those findings to the literature already laid out in the Literature Review, and it must remain tied to the study’s aims and research questions. The chapter is likened to “cooking with your ingredients”: Results provide the ingredients, and the Discussion explains what to “cook” with them—why the findings matter, how they fit (or don’t fit) prior work, and what the study contributes.
To keep the model from going off-script, the transcript recommends breaking the source material into parts rather than pasting everything at once. In the example workflow, the user copies key sections such as the study background, research information, and focus, then separates the Results into multiple text files (three in the example) to reduce the chance of errors caused by large, messy inputs. The same approach is presented as a general best practice: smaller, cleaner inputs make it easier to control what the model uses.
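To make the chunked-input idea concrete, the same workflow can be sketched in code. The following Python snippet is only an illustration, not something shown in the video: it assumes the openai package, an API key in the environment, and hypothetical file names (study_background.txt, results_part_1.txt, and so on). The point is simply that the background and each Results part are supplied as separate, clearly labeled inputs, with an explicit instruction not to invent material.

```python
# Illustrative sketch of the "smaller, cleaner inputs" idea.
# Assumes the openai package and hypothetical file names; the transcript's
# actual workflow is manual copy-and-paste into the ChatGPT interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical source files: study background/focus plus the Results
# split into three smaller parts, mirroring the example workflow.
background = open("study_background.txt", encoding="utf-8").read()
result_parts = [
    open(f"results_part_{i}.txt", encoding="utf-8").read() for i in (1, 2, 3)
]

messages = [
    {
        "role": "system",
        "content": (
            "You are helping draft a dissertation Discussion chapter. "
            "Use ONLY the background, literature, and results provided. "
            "Do not invent findings, citations, or contributions."
        ),
    },
    {"role": "user", "content": "Study background and focus:\n" + background},
]

# Feed each Results chunk separately so the model works from small,
# controlled pieces rather than one large, messy paste.
for i, part in enumerate(result_parts, start=1):
    messages.append(
        {"role": "user", "content": f"Results, part {i} of {len(result_parts)}:\n{part}"}
    )

messages.append(
    {
        "role": "user",
        "content": (
            "Draft a Discussion section that interprets these results, links them "
            "to the literature provided, and stays aligned with the research questions. "
            "Emphasize attitudes, beliefs, and drivers related to EMI."
        ),
    }
)

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # draft to compare against your own ideas
```

In practice the same effect is achieved by pasting each block into the ChatGPT interface one at a time, as the transcript demonstrates; the code only makes the sequencing explicit.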
The workflow also emphasizes human control. Instead of outsourcing the entire Discussion immediately, the user is encouraged to sketch rough ideas first and only then ask ChatGPT for a draft to compare against them. That comparison acts as a validation step: ideas that match the user’s expectations can be treated as stronger leads, while claims that sound too confident, especially contribution-to-knowledge statements, should be reviewed carefully and supported with additional literature.
In the demonstrated case, the model’s draft is described as “not bad at all,” with coverage that tracks key themes from the real discussion section, including attitudes, beliefs, and drivers related to English-medium instruction (EMI). The output reportedly includes implications and recommendations, with the implications largely judged plausible, though the user would still add more literature support and adjust sections as needed.
Finally, the transcript flags a downstream issue: AI-written text may be flagged by AI-detection tools. The suggested remedy is not to ignore the risk but to reshape the draft by reframing, rephrasing, and “humanizing” the language, so the final chapter reflects the researcher’s voice and reasoning rather than generic model output.
Cornell Notes
The transcript lays out a controlled method for using ChatGPT to draft a dissertation Discussion chapter that stays faithful to the study’s results and the literature. A strong Discussion must (1) interpret findings from the Results chapter, (2) position those findings within the literature already reviewed, and (3) stay aligned with the study aims and research questions. To reduce errors and hallucinations, the workflow recommends splitting inputs into smaller text blocks (e.g., background plus separate Results sections) and using prompts that explicitly tie the draft to the provided literature and results. The user is also urged to generate rough ideas first, then compare ChatGPT’s draft to those ideas and to the literature—especially scrutinizing claims about contributions to knowledge. Because AI-generated drafts may trigger detection tools, the final step is to rewrite in a more human, researcher-specific style.
- What makes a “good” Discussion chapter, and how does that shape the prompts for ChatGPT?
- Why does splitting the input into smaller pieces matter for accuracy?
- How should a researcher use ChatGPT without losing objectivity?
- What kinds of content in the example output were considered especially aligned with the real discussion?
- What is the recommended approach to recommendations and AI-detection concerns?
Review Questions
- What three requirements should a Discussion chapter meet, and how can those requirements be translated into a prompt structure?
- How does generating rough ideas before using ChatGPT help with validation and maintaining objectivity?
- What steps should be taken if ChatGPT includes contribution-to-knowledge claims that feel too confident or insufficiently supported?
Key Points
1. A Discussion chapter must interpret the study’s findings, connect them to the literature already reviewed, and stay aligned with the study’s aims and research questions.
2. Use a controlled workflow with ChatGPT by providing the study’s background and Results in smaller, separate text blocks to reduce input-related mistakes.
3. Generate rough ideas before using ChatGPT, then compare the model’s draft against your own ideas and the literature to validate accuracy.
4. Treat claims about contributions to existing knowledge as high-risk: verify them and strengthen them with additional literature where needed.
5. Adjust the draft iteratively: add literature support, remove sections that don’t fit, or merge parts to match the structure of your dissertation.
6. If AI detection is a concern, rewrite the draft in a human, researcher-specific voice rather than submitting it unchanged.
7. Use prompts that explicitly tie the discussion to the provided literature and results, including specifying the key topics to emphasize (e.g., attitudes, beliefs, and drivers related to EMI).
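To illustrate the last point, a prompt along the following lines would tie the draft explicitly to the supplied material; the exact wording is illustrative rather than quoted from the transcript.

```text
Using only the background, literature excerpts, and results I have provided above, draft a
Discussion chapter that:
1. Interprets the findings and explains why they matter.
2. Connects each finding to the literature I provided, noting where it agrees or disagrees.
3. Stays aligned with the study's aims and research questions.
Emphasize attitudes, beliefs, and drivers related to EMI.
Do not invent findings, citations, or contribution-to-knowledge claims.
```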