
Research With ChatGPT - How to use #ChatGPT for Writing Research Contributions?

Research With Fawad
5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT can draft research contributions with better structure and English, but it cannot replace understanding what contributions are and how they must fit the paper.

Briefing

ChatGPT can help turn rough research ideas into cleaner, more formally worded “research contributions,” but it cannot replace the researcher’s job of knowing what contributions are, where they belong in a paper, and how they must map to the study’s actual variables and relationships. The core warning is straightforward: if someone doesn’t already understand the structure and purpose of research contributions, an AI tool will only produce polished text that may still miss the required substance.

The practical workflow starts with four inputs. First, provide the study title. Second, paste the “new and original” elements—specifically the relationships tested in the research. Third, supply the theory or theoretical lens used. Fourth, use a prompt that explicitly asks for contributions in multiple categories, including contributions to the literature, contributions to the theory, and contributions to the practical use of the theory in the health sector.
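The four inputs can be assembled mechanically. A minimal sketch in Python—the function name, example title, and relationship strings are illustrative assumptions, not part of the transcript:

```python
def build_contribution_prompt(title, relationships, theory, sector="health"):
    """Combine the four inputs from the workflow (title, tested
    relationships, theoretical lens, and the multi-category request)
    into a single prompt string."""
    rel_lines = "\n".join(f"- {r}" for r in relationships)
    return (
        f"Study title: {title}\n"
        f"New/original relationships tested:\n{rel_lines}\n"
        f"Theoretical lens: {theory}\n\n"
        f"Write the research contributions of this study in three "
        f"categories: contributions to the literature, contributions "
        f"to the theory, and contributions to the practical use of "
        f"the theory in the {sector} sector."
    )

# Hypothetical example inputs, loosely based on the study discussed
prompt = build_contribution_prompt(
    "Knowledge-oriented leadership and team outcomes during COVID-19",
    ["knowledge-oriented leadership -> team outcomes (mediated pathway)"],
    "resource-based view",
)
print(prompt)
```

The point of templating the prompt is that none of the four inputs can be silently dropped—a common cause of the missing-contribution problem discussed below.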

When ChatGPT is given these details, it generates contribution statements that connect the study to existing scholarship and to a specific context. In the example output, the contributions to the literature are framed around the health sector during the COVID-19 pandemic, arguing that the study offers a new understanding of how “knowledge-oriented leadership” affects team outcomes in crisis situations. It also positions the work as extending a theoretical framework—described in the transcript as “resource based” (resource-based view)—from its traditional business domain into health-sector settings.

The transcript then highlights a key pitfall: contributions tied to the study’s original relationships may be missing if the prompt or inputs don’t force the model to address them explicitly. After reviewing the generated text, the user notices that while theory and context-based contributions appear, the “contribution pertinent to the new relationships” is not clearly present. This gap matters because research contributions should directly reflect the mediating mechanisms and outcome pathways the study claims to test.

To fix that, the user asks follow-up questions that target the missing part—requesting contributions “pertaining to the original relationships” and, if needed, contributions for “every single relationship.” The transcript also notes that ChatGPT can rewrite contributions into a paragraph form, which can be useful when phrasing is the main bottleneck.
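The follow-up step can also be made systematic: one general question about the original relationships, plus one question per tested relationship. A minimal sketch; the function name and relationship strings are illustrative assumptions:

```python
def follow_up_prompts(relationships):
    """Generate the targeted follow-up questions: a general request
    for contributions pertaining to the original relationships, then
    one question for every single relationship."""
    prompts = [
        "What are the contributions pertaining to the "
        "original relationships of this study?"
    ]
    prompts += [
        f"What is the contribution of the relationship: {r}?"
        for r in relationships
    ]
    return prompts

# Hypothetical mediated pathway from the example study
qs = follow_up_prompts([
    "knowledge-oriented leadership -> knowledge sharing (mediator)",
    "knowledge sharing -> team outcomes",
])
```

Asking one question per relationship forces the model to address each tested link, including mediating mechanisms, rather than replying with only context or theory-extension claims.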

Overall, the takeaway is not to outsource thinking. ChatGPT is best treated as a writing and structuring assistant: it can formalize language, improve formatting, and produce draft paragraphs, but the researcher must verify that each contribution category is accurate and that the contribution statements align with the study’s specific relationships, mediators, and outcomes.

Cornell Notes

ChatGPT can draft research contributions in a more formal, well-structured style, but it only works as an assistant. The transcript stresses that researchers must already understand what contributions are, where they appear in a paper, and how they should map to the study’s tested relationships. A suggested workflow uses four inputs: the study title, the original/new relationships tested, the theoretical lens, and a prompt requesting contributions to the literature, to the theory, and to practical use in the health sector. The example output includes context-based literature contributions (health sector during COVID-19) and theory extension (resource-based view beyond business). A major caution follows: contributions tied to the original relationships can be missing unless the prompt explicitly demands them, so follow-up questions should request contributions for each relationship.

Why is ChatGPT not a substitute for learning how research contributions are written?

Because contributions must match the paper’s required structure and the study’s actual tested relationships. The transcript warns that if someone doesn’t know what needs to be written, how it should be written, and why it belongs in the paper, AI-generated text may look polished but still fail to deliver the correct substance. In the example, the model produced literature and theory/context contributions, yet the contribution tied to the original relationships was not clearly present—showing how missing alignment can happen even when wording improves.

What four inputs does the transcript recommend before asking ChatGPT for draft contributions?

The workflow uses: (1) the study title; (2) the new/original research relationships—what was tested; (3) the theoretical lens or theory used; and (4) a prompt that asks for contributions to multiple areas. In the example, the prompt explicitly requests contributions to the literature, to the theory, and to the use of the theory in the health sector.

What kinds of contributions did ChatGPT generate in the example output?

The generated contributions included literature contributions grounded in the health sector during the COVID-19 pandemic, describing how knowledge-oriented leadership influences team outcomes in crisis situations. It also framed theory contributions by extending the resource-based view beyond its traditional business domain into a health-sector context. The transcript then notes that these were present, but contributions tied to the original relationships were not clearly articulated.

What problem emerged when the user checked whether contributions matched the original relationships?

The transcript describes a gap: contributions pertinent to the original relationships (the specific pathways involving mediating mechanisms and team outcomes) were missing or not visible. This led to a follow-up request asking where the contributions related to the original relationships were, reinforcing that the prompt must force the model to address those specific linkages.

How can follow-up prompting improve alignment with the study’s relationships?

By explicitly requesting contributions that correspond to the relationships provided. The transcript suggests asking for contributions “pertaining to the original relationships,” and even requesting contributions “for every single relationship.” This directs the model to produce contribution statements for each tested link rather than only offering general context or theory-extension claims.

Review Questions

  1. What information should be included in the prompt to ensure contributions align with the study’s tested relationships rather than only general context?
  2. In the example, what two categories of contributions appeared clearly, and which category was initially missing?
  3. How would you modify a prompt if you needed contributions for each relationship (including mediating mechanisms) instead of a single paragraph?

Key Points

  1. ChatGPT can draft research contributions with better structure and English, but it cannot replace understanding what contributions are and how they must fit the paper.
  2. Before prompting, provide the study title, the original/new relationships tested, and the theoretical lens used.
  3. Use prompts that explicitly request contributions to the literature, to the theory, and to practical use in the relevant sector (e.g., health).
  4. Always verify that contributions map to the study’s specific relationships; context and theory extensions alone may be insufficient.
  5. If relationship-specific contributions are missing, follow up by asking for contributions tied to the original relationships or for each relationship individually.
  6. ChatGPT can rewrite contributions into paragraph form, which helps when phrasing is the main challenge, but accuracy still requires researcher review.

Highlights

ChatGPT can formalize research contributions, but only if the researcher already knows what contributions must contain and how they connect to tested relationships.
In the example, literature and theory/context contributions appeared, yet contributions tied to the original relationships were missing—prompting a targeted follow-up.
Asking for contributions “for every single relationship” is a practical way to force alignment with mediating mechanisms and outcome pathways.