Research With ChatGPT - How to use #ChatGPT for Writing Research Contributions?
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
ChatGPT can draft research contributions with better structure and English, but it cannot replace understanding what contributions are and how they must fit the paper.
Briefing
ChatGPT can help turn rough research ideas into cleaner, more formally worded “research contributions,” but it cannot replace the researcher’s job of knowing what contributions are, where they belong in a paper, and how they must map to the study’s actual variables and relationships. The core warning is straightforward: if someone doesn’t already understand the structure and purpose of research contributions, an AI tool will only produce polished text that may still miss the required substance.
The practical workflow starts with four inputs. First, provide the study title. Second, paste the “new and original” elements, specifically the relationships tested in the research. Third, supply the theoretical lens used. Fourth, use a prompt that explicitly asks for contributions in multiple categories: contributions to the literature, contributions to the theory, and contributions to the practical use of the theory in the health sector.
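The four inputs above can be assembled into a single prompt mechanically. The sketch below is illustrative: the video pastes these pieces into ChatGPT by hand, and the function name, wording, and example values are assumptions, not quotes from the transcript.

```python
def build_contributions_prompt(title, relationships, theory, sector="health"):
    """Combine the four workflow inputs into one prompt string.

    Illustrative only: the video's workflow is manual copy-paste,
    and this wording is a hypothetical template, not the original prompt.
    """
    rel_lines = "\n".join(f"- {r}" for r in relationships)
    return (
        f"Study title: {title}\n\n"
        f"New and original relationships tested:\n{rel_lines}\n\n"
        f"Theoretical lens: {theory}\n\n"
        "Write the research contributions of this study in three categories: "
        "contributions to the literature, contributions to the theory, and "
        f"contributions to the practical use of the theory in the {sector} sector."
    )

# Hypothetical example values based on the study described in the transcript.
prompt = build_contributions_prompt(
    "Knowledge-oriented leadership and team outcomes during COVID-19",
    ["knowledge-oriented leadership -> team outcomes (mediated pathway)"],
    "resource-based view",
)
```

Listing each relationship on its own line makes it harder for the model to gloss over them, which matters for the pitfall discussed next.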
When ChatGPT is given these details, it generates contribution statements that connect the study to existing scholarship and to a specific context. In the example output, the contributions to the literature are framed around the health sector during the COVID-19 pandemic, arguing that the study offers a new understanding of how “knowledge-oriented leadership” affects team outcomes in crisis situations. It also positions the work as extending a theoretical framework—described in the transcript as “resource based” (resource-based view)—from its traditional business domain into health-sector settings.
The transcript then highlights a key pitfall: contributions tied to the study’s original relationships may be missing if the prompt or inputs don’t force the model to address them explicitly. After reviewing the generated text, the user notices that while theory and context-based contributions appear, the “contribution pertinent to the new relationships” is not clearly present. This gap matters because research contributions should directly reflect the mediating mechanisms and outcome pathways the study claims to test.
To fix that, the user asks follow-up questions that target the missing part—requesting contributions “pertaining to the original relationships” and, if needed, contributions for “every single relationship.” The transcript also notes that ChatGPT can rewrite contributions into a paragraph form, which can be useful when phrasing is the main bottleneck.
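A follow-up prompt of the kind described can be sketched the same way. Again, the function name and phrasing are illustrative assumptions; only the quoted requests ("pertaining to the original relationships", "every single relationship", paragraph form) come from the transcript.

```python
def follow_up_for_relationships(relationships):
    """Build a follow-up prompt targeting the missing relationship-specific
    contributions. Hypothetical wording, not the user's exact follow-up."""
    rel_lines = "\n".join(f"- {r}" for r in relationships)
    return (
        "The draft is missing contributions pertaining to the original "
        "relationships. Write one contribution for every single relationship "
        "listed below, then rewrite all contributions in paragraph form:\n"
        + rel_lines
    )

# Hypothetical relationship list for illustration.
follow_up = follow_up_for_relationships(
    ["knowledge-oriented leadership -> mediator -> team outcomes"]
)
```

The point of the follow-up is the same as the manual check: the researcher, not the tool, decides whether each tested relationship has a matching contribution.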
Overall, the takeaway is not to outsource thinking. ChatGPT is best treated as a writing and structuring assistant: it can formalize language, improve formatting, and produce draft paragraphs, but the researcher must verify that each contribution category is accurate and that the contribution statements align with the study’s specific relationships, mediators, and outcomes.
Cornell Notes
ChatGPT can draft research contributions in a more formal, well-structured style, but it only works as an assistant. The transcript stresses that researchers must already understand what contributions are, where they appear in a paper, and how they should map to the study’s tested relationships. A suggested workflow uses four inputs: the study title, the original/new relationships tested, the theoretical lens, and a prompt requesting contributions to the literature, to the theory, and to practical use in the health sector. The example output includes context-based literature contributions (health sector during COVID-19) and theory extension (resource-based view beyond business). A major caution follows: contributions tied to the original relationships can be missing unless the prompt explicitly demands them, so follow-up questions should request contributions for each relationship.
Why is ChatGPT not a substitute for learning how research contributions are written?
What four inputs does the transcript recommend before asking ChatGPT for draft contributions?
What kinds of contributions did ChatGPT generate in the example output?
What problem emerged when the user checked whether contributions matched the original relationships?
How can follow-up prompting improve alignment with the study’s relationships?
Review Questions
- What information should be included in the prompt to ensure contributions align with the study’s tested relationships rather than only general context?
- In the example, what two categories of contributions appeared clearly, and which category was initially missing?
- How would you modify a prompt if you needed contributions for each relationship (including mediating mechanisms) instead of a single paragraph?
Key Points
1. ChatGPT can draft research contributions with better structure and English, but it cannot replace understanding what contributions are and how they must fit the paper.
2. Before prompting, provide the study title, the original/new relationships tested, and the theoretical lens used.
3. Use prompts that explicitly request contributions to the literature, to the theory, and to practical use in the relevant sector (e.g., health).
4. Always verify that contributions map to the study's specific relationships; context and theory extensions alone may be insufficient.
5. If relationship-specific contributions are missing, follow up by asking for contributions tied to the original relationships or for each relationship individually.
6. ChatGPT can rewrite contributions into paragraph form, which helps when phrasing is the main challenge, but accuracy still requires researcher review.