
Write the research Discussion chapter with ChatGPT

5 min read

Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A Discussion chapter must interpret the study’s findings, connect them to the literature already reviewed, and stay aligned with the study’s aims and research questions.

Briefing

A practical workflow for drafting a dissertation “Discussion” chapter with ChatGPT centers on one principle: accuracy comes from feeding the model your study’s structure in small, controlled pieces and then validating its output against your literature and your own ideas. The core claim is that ChatGPT can produce a discussion that aligns with the literature and results—without inventing details—if the user supplies the right inputs and follows a repeatable prompt-and-check process.

The discussion chapter, as framed here, has three non-negotiable jobs. It must be grounded in the study’s findings from the Results chapter, it must connect those findings to the literature already laid out in the Literature Review, and it must remain tied to the study’s aims and research questions. The chapter is likened to “cooking with your ingredients”: Results provide the ingredients, and the Discussion explains what to “cook” with them—why the findings matter, how they fit (or don’t fit) prior work, and what the study contributes.

To keep the model from going off-script, the transcript recommends breaking the source material into parts rather than pasting everything at once. In the example workflow, the user copies key sections such as the study background, research information, and focus, then separates the Results into multiple text files (three in the example) to reduce the chance of errors caused by large, messy inputs. The same approach is presented as a general best practice: smaller, cleaner inputs make it easier to control what the model uses.
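The chunking step above can be sketched in code. This is a minimal illustration, not part of the original workflow: the section marker (`## `), the sample Results text, and the output file names are all assumptions chosen for the example.

```python
# Sketch of the "split the inputs" step: divide a long Results chapter into
# smaller labeled chunks so each can be pasted into ChatGPT separately.
# The "## " heading marker and file naming scheme are illustrative assumptions.

def split_into_sections(text: str, marker: str = "## ") -> list[tuple[str, str]]:
    """Return (heading, body) pairs for each section starting with `marker`."""
    sections: list[tuple[str, str]] = []
    heading, lines = None, []
    for line in text.splitlines():
        if line.startswith(marker):
            if heading is not None:
                sections.append((heading, "\n".join(lines).strip()))
            heading, lines = line[len(marker):].strip(), []
        else:
            lines.append(line)
    if heading is not None:
        sections.append((heading, "\n".join(lines).strip()))
    return sections

# Hypothetical Results text with two themes, mirroring the example study.
results = """## Theme 1: Attitudes and beliefs
Participants reported mixed attitudes toward EMI.

## Theme 2: Drivers of EMI
Institutional policy was the most cited driver.
"""

for i, (heading, body) in enumerate(split_into_sections(results), start=1):
    print(f"results_part_{i}.txt -> {heading} ({len(body)} chars)")
```

Each labeled chunk can then be supplied to the model in its own message, which makes it easier to trace any claim in the draft back to the input that produced it.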

The workflow also emphasizes human control. Instead of outsourcing the entire Discussion immediately, the user is encouraged to generate rough ideas first—then use ChatGPT to generate a draft for comparison. That comparison acts as a validation step: ideas that match the user’s expectations can be treated as stronger leads, while claims that sound too confident—especially contribution-to-knowledge statements—should be reviewed carefully and supported with additional literature.

In the demonstrated case, the model’s draft is described as “not bad at all,” with coverage that tracks key themes from the real discussion section, including attitudes and beliefs and drivers related to English medium instruction (EMI). The output reportedly includes implications and recommendations, with implications largely considered plausible, though the user would likely add more literature support and adjust sections as needed.

Finally, the transcript flags a downstream issue: AI-written text may be detected by AI content tools. The suggested remedy isn’t to ignore the risk, but to reshape the draft—reframing, rephrasing, and “humanizing” the language—so the final chapter reflects the researcher’s voice and reasoning rather than a generic model output.

Cornell Notes

The transcript lays out a controlled method for using ChatGPT to draft a dissertation Discussion chapter that stays faithful to the study’s results and the literature. A strong Discussion must (1) interpret findings from the Results chapter, (2) position those findings within the literature already reviewed, and (3) stay aligned with the study aims and research questions. To reduce errors and hallucinations, the workflow recommends splitting inputs into smaller text blocks (e.g., background plus separate Results sections) and using prompts that explicitly tie the draft to the provided literature and results. The user is also urged to generate rough ideas first, then compare ChatGPT’s draft to those ideas and to the literature—especially scrutinizing claims about contributions to knowledge. Because AI-generated drafts may trigger detection tools, the final step is to rewrite in a more human, researcher-specific style.

What makes a “good” Discussion chapter, and how does that shape the prompts for ChatGPT?

A strong Discussion must interpret the study’s findings (from the Results chapter), connect them to the literature previously outlined in the Literature Review, and remain relevant to the study’s aims and research questions. This structure should be mirrored in prompts: ask for a discussion grounded in the provided results and the specified literature, and request that it address the study’s aims—so the model’s output follows the same logic rather than drifting into generic commentary.
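One way to mirror that three-part structure in a prompt is a simple template with one slot per requirement. This is a sketch under assumptions: the template wording and all placeholder contents (aims, literature entries, results summary) are hypothetical, not taken from the transcript.

```python
# Sketch of a prompt template that mirrors the three Discussion requirements:
# (1) grounded in the Results, (2) connected to the reviewed literature,
# (3) aligned with the study's aims and research questions.
# All placeholder text below is hypothetical and would be replaced with
# material copied from the researcher's own dissertation.

PROMPT_TEMPLATE = """Write a draft Discussion chapter for my study.
Ground every claim in the findings below and do not invent results.

Study aims and research questions:
{aims}

Literature already reviewed (connect the findings only to these sources):
{literature}

Findings from the Results chapter:
{results}
"""

prompt = PROMPT_TEMPLATE.format(
    aims="To explore lecturers' attitudes toward English medium instruction (EMI).",
    literature="[Author A, Year] on EMI policy; [Author B, Year] on drivers of EMI.",
    results="Theme 1: attitudes and beliefs. Theme 2: drivers of EMI adoption.",
)
print(prompt)
```

Keeping the three slots explicit makes it harder for the model to drift into generic commentary, because every section of the prompt corresponds to one of the chapter's jobs.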

Why does splitting the input into smaller pieces matter for accuracy?

Handling large amounts of text at once increases the chance of mistakes. The workflow described uses separate text files—for example, one for background/focus information and multiple files for Results sections—so the model processes cleaner, more targeted inputs. The practical takeaway is that smaller, well-labeled inputs make it easier to control what the model uses and to spot mismatches between the draft and the source material.

How should a researcher use ChatGPT without losing objectivity?

The transcript warns that once ChatGPT produces a draft, it can be hard to stay objective because the researcher may start believing the model’s framing. The recommended safeguard is to generate rough ideas first, then ask ChatGPT for a draft and compare. If ChatGPT’s claims align with the researcher’s own expectations and the literature, they can be treated as stronger; if claims sound overconfident—especially about contributions to knowledge—they should be checked and supported with additional sources.

What kinds of content in the example output were considered especially aligned with the real discussion?

The described draft reportedly matched key themes such as attitudes and beliefs and the drivers central to the study, including English medium instruction (EMI). It also included implications that were viewed as mostly interesting and accurate. The user still flagged that some contribution-to-existing-knowledge claims would need extra caution and more literature support.

What is the recommended approach to recommendations and AI-detection concerns?

Recommendations were described as less preferred because they came out as bullet points, but the fix is straightforward: ask for a different format or rewrite the section in the style the researcher prefers. For AI detection, the transcript advises against submitting the draft as-is; instead, reshape it by reframing, rephrasing, and humanizing the language so the final chapter reflects the researcher’s voice and reasoning.

Review Questions

  1. What three requirements should a Discussion chapter meet, and how can those requirements be translated into a prompt structure?
  2. How does generating rough ideas before using ChatGPT help with validation and maintaining objectivity?
  3. What steps should be taken if ChatGPT includes contribution-to-knowledge claims that feel too confident or insufficiently supported?

Key Points

  1. A Discussion chapter must interpret the study’s findings, connect them to the literature already reviewed, and stay aligned with the study’s aims and research questions.
  2. Use a controlled workflow with ChatGPT by providing the study’s background and Results in smaller, separate text blocks to reduce input-related mistakes.
  3. Generate rough ideas before using ChatGPT, then compare the model’s draft against your own ideas and the literature to validate accuracy.
  4. Treat claims about contributions to existing knowledge as high-risk: verify them and strengthen them with additional literature where needed.
  5. Adjust the draft iteratively—add literature support, remove sections that don’t fit, or merge parts to match the structure of your dissertation.
  6. If AI detection is a concern, rewrite the draft in a human, researcher-specific voice rather than submitting it unchanged.
  7. Use prompts that explicitly tie the discussion to the provided literature and results, including specifying key topics to emphasize (e.g., attitudes, beliefs, drivers related to EMI).

Highlights

Accuracy improves when the study is fed to ChatGPT in smaller pieces—background and separate Results sections—rather than one large paste.
The Discussion must function like “cooking with ingredients”: Results are the ingredients, and the Discussion explains what they mean in relation to prior literature.
ChatGPT can produce a draft that tracks real discussion themes (including attitudes/beliefs and EMI drivers), but contribution-to-knowledge claims still require careful verification.
AI-written drafts may trigger detection tools, so the practical fix is to reframe and rephrase the output into a human voice.
