
How to use #ChatGPT for Discussion Section in a #Research Study

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Begin the discussion by summarizing main findings and briefly restating the study’s objective and rationale to set context.

Briefing

A strong discussion section starts by turning research results into a clear narrative: summarize the main findings, restate the study’s objective and rationale, then interpret each hypothesis in relation to what’s already known and what theory predicts. The core workflow is to begin with a brief recap of the key results and why the study was worth doing, so the rest of the discussion has context. From there, each hypothesis gets its own treatment—describing what the study found, and then comparing and contrasting those findings with prior research to show where the work aligns, diverges, or extends earlier conclusions.

Artificial intelligence can help draft and organize this interpretation, but it must be used cautiously. Asking ChatGPT to compare results with previous studies is acceptable only as a starting point; the references it provides can’t be trusted. The fix is manual verification: search the relevant literature in Google Scholar (or other databases) and confirm that the cited papers actually exist and support the claims being made. Tools such as Elicit can also assist in locating sources, but the responsibility for accuracy remains with the researcher.

The next major ingredient is theory-based interpretation. Instead of stopping at “what happened,” the discussion should explain “why it happened” using an established framework from the literature. For example, when the relationship between corporate social responsibility (CSR) and organizational performance (OP) is the focus, ChatGPT can be prompted to connect the findings to the resource-based view (RBV). In that framing, CSR is treated as a source of valuable, hard-to-imitate resources—such as improved reputation, stronger stakeholder relationships, and other capabilities that are difficult to substitute. The goal is to translate empirical results into theoretically grounded mechanisms.

The same approach applies when results are not significant, which often requires extra care. If prior studies found a significant CSR–OP link but a new study finds no significant relationship, the discussion should still offer plausible explanations grounded in theory and existing research. Rather than blaming methods or analysis by default, the prompt can steer the model toward substantive reasons—again using RBV or the relevant theory—to explain why the expected relationship might not materialize in the specific context. Even when these reasons sound convincing, they still need literature support, which means returning to Google Scholar to confirm that the theoretical explanations appear in published work.

Finally, the discussion should close with implications. ChatGPT can generate practical implications for significant and insignificant findings, but the output should be adapted to the study’s actual results and context. The emphasis throughout is on using AI as writing support—helping structure arguments, propose interpretations, and draft text—while ensuring every claim is accurate, referenced, and consistent with the researcher’s own reading of the literature.

Cornell Notes

A discussion section should (1) summarize the study’s main findings, (2) briefly restate the objective and rationale, and then (3) interpret each hypothesis by comparing results with prior studies and explaining them through relevant theory. ChatGPT can help draft these interpretations, including theory alignment (e.g., linking CSR–organizational performance to resource-based view by describing CSR as a source of valuable, hard-to-imitate resources like reputation and stakeholder relationships). When results are insignificant, the same theory-driven approach can generate plausible explanations, but they must be supported by literature. Because AI-generated citations may be unreliable, references must be verified in Google Scholar or other databases. The final step is to write implications that fit the study’s actual findings rather than copying AI text verbatim.

What are the essential “ingredients” of a research discussion section, in order?

Start by summarizing the main findings and briefly identifying the study’s objective and rationale. Then discuss each hypothesis separately: describe what the study found, compare and contrast those results with previous studies, and interpret the findings using the theory that guided the research. Finally, close with implications tailored to whether results were significant or insignificant.

How can ChatGPT be used to compare a study’s results with earlier research without introducing citation errors?

ChatGPT can be prompted to compare the study’s results with prior studies, but its references can’t be trusted. The researcher should verify every citation by searching for the relevant papers in Google Scholar (or other databases). Tools like Elicit can help locate sources, but the researcher must confirm that the papers exist and support the claims.

How should theory be used to interpret findings—especially with a concrete example?

Theory should explain mechanisms, not just label outcomes. For instance, if CSR significantly affects organizational performance, RBV can be used to argue that CSR provides valuable, hard-to-imitate resources—such as improved reputation and image and better stakeholder relationships—leading to stronger organizational performance. The discussion should connect empirical results to these RBV-based resource mechanisms.

What changes when a study finds an insignificant relationship that prior literature reported as significant?

The discussion should still provide substantive, theory-based reasons for the insignificance. Instead of focusing first on methodological or analytical explanations, prompts can ask for explanations grounded in the same theory (e.g., RBV) for why the expected CSR–OP relationship might not emerge in the study’s context. These reasons still require literature support and must be checked in Google Scholar.

Why is it risky to copy and paste AI-generated text directly into a thesis or paper?

AI output may not match the study’s specific context, results, or framing. The guidance is to use AI to assist writing—drafting structure, generating interpretation ideas, or suggesting implications—then read and revise so the final text fits the researcher’s actual findings and study details.

Review Questions

  1. What sequence of steps should guide writing a discussion section from findings to implications?
  2. How would you verify and correct references produced by ChatGPT when comparing your results to prior studies?
  3. If your CSR–organizational performance hypothesis is insignificant, what theory-based explanation strategy could you use and what must you still do to support it?

Key Points

  1. Begin the discussion by summarizing main findings and briefly restating the study’s objective and rationale to set context.

  2. Treat each hypothesis separately by describing results, then comparing and contrasting them with prior studies.

  3. Use theory to explain mechanisms behind findings, not just to restate outcomes (e.g., RBV framing for CSR as valuable, hard-to-imitate resources).

  4. Verify any citations generated by AI using Google Scholar or other databases; never rely on AI-provided references unchecked.

  5. When results are insignificant, provide substantive, theory-based explanations and support them with literature rather than defaulting to methodological blame.

  6. Use AI to draft and structure text, but adapt and rewrite so it matches the study’s specific context and results.

  7. Write implications that reflect the actual significance pattern (significant vs. insignificant) and fit the study’s context.

Highlights

  • A discussion section should move from results to interpretation: recap findings and rationale, then interpret each hypothesis through comparison with prior work and explanation via theory.
  • ChatGPT can suggest comparisons and theory links, but its references must be verified in Google Scholar because they may be unreliable.
  • RBV can explain CSR–organizational performance by treating CSR as a source of valuable, hard-to-imitate resources such as reputation and stakeholder relationships.
  • Insignificant findings still need theory-driven explanations grounded in literature, especially when earlier studies reported significance.
  • AI should assist writing—not replace it—since copied text may not fit the study’s context and results.

Topics

Mentioned

  • CSR (corporate social responsibility)
  • OP (organizational performance)
  • RBV (resource-based view)