How to Use ChatGPT for the Discussion Section of a Research Study
Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to their channel.
Briefing
A strong discussion section starts by turning research results into a clear narrative: summarize the main findings, restate the study’s objective and rationale, then interpret each hypothesis in relation to what’s already known and what theory predicts. The core workflow is to begin with a brief recap of the key results and why the study was worth doing, so the rest of the discussion has context. From there, each hypothesis gets its own treatment—describing what the study found, and then comparing and contrasting those findings with prior research to show where the work aligns, diverges, or extends earlier conclusions.
Artificial intelligence can help draft and organize this interpretation, but it must be used cautiously. Asking ChatGPT to compare results with previous studies is acceptable only as a starting point; the references it provides can’t be trusted. The fix is manual verification: search the relevant literature in Google Scholar (or other databases) and confirm that the cited papers actually exist and support the claims being made. Tools such as Elicit can also assist in locating sources, but the responsibility for accuracy remains with the researcher.
The next major ingredient is theory-based interpretation. Instead of stopping at “what happened,” the discussion should explain “why it happened” using an established framework from the literature. For example, when the relationship between corporate social responsibility (CSR) and organizational performance (OP) is the focus, ChatGPT can be prompted to connect the findings to the resource-based view (RBV). In that framing, CSR is treated as a source of valuable, hard-to-imitate resources—such as improved reputation, stronger stakeholder relationships, and other capabilities that are difficult to substitute. The goal is to translate empirical results into theoretically grounded mechanisms.
The same approach applies when results are not significant, which often requires extra care. If prior studies found a significant CSR–OP link but a new study finds insignificance, the discussion should still offer plausible explanations grounded in theory and existing research. Rather than blaming methods or analysis by default, the prompt can steer the model toward substantive reasons—again using RBV or the relevant theory—to explain why the expected relationship might not materialize in the specific context. Even when these reasons sound convincing, they still need literature support, which means returning to Google Scholar to check whether those theoretical explanations are backed by published work.
Finally, the discussion should close with implications. ChatGPT can generate practical implications for significant and insignificant findings, but the output should be adapted to the study’s actual results and context. The emphasis throughout is on using AI as writing support—helping structure arguments, propose interpretations, and draft text—while ensuring every claim is accurate, referenced, and consistent with the researcher’s own reading of the literature.
Cornell Notes
A discussion section should (1) summarize the study’s main findings, (2) briefly restate the objective and rationale, and then (3) interpret each hypothesis by comparing results with prior studies and explaining them through relevant theory. ChatGPT can help draft these interpretations, including theory alignment (e.g., linking CSR–organizational performance to resource-based view by describing CSR as a source of valuable, hard-to-imitate resources like reputation and stakeholder relationships). When results are insignificant, the same theory-driven approach can generate plausible explanations, but they must be supported by literature. Because AI-generated citations may be unreliable, references must be verified in Google Scholar or other databases. The final step is to write implications that fit the study’s actual findings rather than copying AI text verbatim.
- What are the essential “ingredients” of a research discussion section, in order?
- How can ChatGPT be used to compare a study’s results with earlier research without introducing citation errors?
- How should theory be used to interpret findings, especially with a concrete example?
- What changes when a study finds an insignificant relationship that prior literature reported as significant?
- Why is it risky to copy and paste AI-generated text directly into a thesis or paper?
Review Questions
- What sequence of steps should guide writing a discussion section from findings to implications?
- How would you verify and correct references produced by ChatGPT when comparing your results to prior studies?
- If your CSR–organizational performance hypothesis is insignificant, what theory-based explanation strategy could you use and what must you still do to support it?
Key Points
1. Begin the discussion by summarizing main findings and briefly restating the study’s objective and rationale to set context.
2. Treat each hypothesis separately by describing results, then comparing and contrasting them with prior studies.
3. Use theory to explain mechanisms behind findings, not just to restate outcomes (e.g., RBV framing for CSR as valuable, hard-to-imitate resources).
4. Verify any citations generated by AI using Google Scholar or other databases; never rely on AI-provided references unchecked.
5. When results are insignificant, provide substantive, theory-based explanations and support them with literature rather than defaulting to methodological blame.
6. Use AI to draft and structure text, but adapt and rewrite it so it matches the study’s specific context and results.
7. Write implications that reflect the actual significance pattern (significant vs. insignificant) and fit the study’s context.