How to Use ChatGPT for Literature Review and Research? With a Word of Caution!
Based on Research With Fawad's video on YouTube. If you find this content useful, support the original creator by watching, liking, and subscribing.
Use ChatGPT as an assistant only after doing foundational reading and learning core research-writing skills.
Briefing
ChatGPT can speed up a literature review by generating definitions, explanations, and candidate references—but it can also derail learning and produce unusable or misfiled content if researchers rely on it before mastering research writing and concepts. The core caution is straightforward: using AI as a shortcut for “what to write, where to write, and how to write” weakens the ability to conduct research and to structure an academic argument.
A practical workflow starts with foundational reading in the target area, then uses ChatGPT as an assistant to extract information. For example, when drafting a literature review on servant leadership, a researcher can ask for multiple definitions and have the model return named scholars and trait-based descriptions. The transcript notes that ChatGPT may list definitions associated with figures such as Larry Spears and Robert Greenleaf, along with other author names that the transcript does not render clearly. But copying definitions without verifying context creates problems: the traits emphasized in an AI-generated definition might not match the traits the researcher plans to measure, creating a conceptualization–measurement mismatch.
To address the reference gap, the workflow includes a second step: asking ChatGPT for citations tied to the definitions it provides. Even then, the transcript warns that references can be wrong, especially in the free version, so researchers should verify every citation before use. The recommended approach is to take each AI-provided reference, search for it on Google Scholar, and confirm the correct source before including anything in a paper.
The same pattern applies to “why it matters” questions. Researchers can ask ChatGPT for reasons servant leadership is relevant for modern leaders, but the output still needs to be placed correctly within the paper’s structure—rather than dumped into the introduction or literature review at random. The transcript repeatedly stresses that without knowing what an introduction, theory section, literature review, contributions, methodology, and analysis are supposed to do, AI-generated text becomes more noise than help.
Finally, the transcript offers a concrete example of using ChatGPT to identify measurement scales. A researcher can request “measurement scales to measure servant leadership” and then use the resulting list as a starting point—searching the scales, checking how each paper conceptualizes servant leadership, and aligning the chosen instrument with the study’s variables.
Overall, the message is not to avoid ChatGPT, but to treat it as a research assistant after building the underlying skills: reading deeply, understanding research structure, validating references, and ensuring conceptual and measurement alignment.
Cornell Notes
ChatGPT can assist with literature reviews by generating candidate definitions, explanations, and even lists of measurement scales. The transcript’s key warning is that AI output becomes unreliable or misplaced when a researcher doesn’t already know how to write and how to structure research sections. A safer workflow is: read the literature first, then use ChatGPT to extract information, request references, and verify those references on Google Scholar—especially because the free version can produce incorrect citations. Finally, researchers must check conceptualization–measurement alignment, since AI may highlight traits that don’t match the variables they plan to measure.
- How should a researcher use ChatGPT for a literature review on a topic like servant leadership without harming the quality of their work?
- Why is conceptualization–measurement alignment a problem when using AI-generated definitions?
- What steps are recommended when ChatGPT provides references for definitions?
- How can ChatGPT help with the “why it matters” portion of a literature review?
- What is the transcript’s approach to using ChatGPT for measurement scales?
Review Questions
- What are the risks of using ChatGPT output without knowing how to structure a literature review and related sections?
- Describe a workflow for verifying AI-provided references and explain why verification is necessary.
- How can a conceptualization–measurement mismatch occur when using AI-generated definitions, and how would you prevent it?
Key Points
1. Use ChatGPT as an assistant only after doing foundational reading and learning core research-writing skills.
2. Ask targeted questions (e.g., definitions, importance, measurement scales) to generate useful starting material.
3. Request citations from ChatGPT, but verify every reference on Google Scholar because citations can be wrong.
4. Check conceptualization–measurement alignment so the traits emphasized in definitions match what the study actually measures.
5. Place AI-derived content into the correct paper sections; don’t paste text into the introduction or literature review without understanding each section’s purpose.
6. Treat measurement-scale lists as leads to be validated by reviewing how each source conceptualizes the construct.