Incorrect References by #ChatGPT! Finding the Right References for Literature
Based on the Research With Fawad video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
ChatGPT can generate credible-sounding arguments while still providing incorrect or unverifiable references, so citations must be checked.
Briefing
ChatGPT can generate literature-review-ready arguments, but it also can supply references that don’t actually exist or don’t match the claim being made. That mismatch becomes a serious problem when researchers copy the citations directly into a thesis or paper without verifying them—especially when the argument is plausible but the source details are wrong.
The session frames the core risk around how academic writing is structured. Different sections demand different kinds of content: introductions typically avoid deep variable-by-variable conceptualization and instead set context, while literature reviews are where detailed relationships among constructs belong. If a researcher doesn’t know what belongs where, they’re more likely to accept AI output at face value and then struggle to integrate it correctly into the right part of the manuscript.
To address the referencing problem, the guidance is practical: treat ChatGPT as a drafting assistant for ideas, not as a citation authority. A recommended workflow starts by generating a response with references, then rewriting the argument in one’s own words and using the citations only after checking that the cited papers truly exist and support the specific sentence.
A worked example centers on “entrepreneurial leadership,” “employee engagement,” “creativity,” and “risk-taking.” When the same prompt is run again, the output may include a full reference list that looks credible, and the argument reads as coherent. The crucial test, however, is whether the cited authors and titles can actually be found in databases such as Google Scholar. In the example, one citation attributed to “Poon Chen and Lynn” could not be located under the expected title/author combination, even though the underlying claim sounded right: employees are more likely to feel motivated and committed when an organization values innovation, risk-taking, and creativity.
When a citation fails verification, the fix is to rebuild the reference using keywords from the problematic sentence. The approach is to extract the key concepts (such as “motivated,” “innovation,” “risk-taking,” and “creativity”) and then search for papers that contain those terms in the text. The session suggests iteratively shortening or rephrasing the sentence to improve searchability, then opening candidate papers to confirm that their wording and logic align with the claim. Another tactic mentioned is building an “all in text” style query in Google Scholar, with exact phrases in inverted commas, to surface documents where the relevant concepts appear in the body of the paper.
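To make that exact-phrase step concrete, here is a minimal Python sketch, not something shown in the session itself: it assembles a Google Scholar search URL from keyword phrases wrapped in inverted commas, using the keywords from the worked example and assuming Google Scholar's standard `scholar?q=` search endpoint.

```python
from urllib.parse import quote_plus

def scholar_query_url(phrases):
    """Build a Google Scholar search URL in which every phrase is wrapped in
    inverted commas, so only documents containing those exact phrases match."""
    quoted = " ".join(f'"{phrase}"' for phrase in phrases)
    return "https://scholar.google.com/scholar?q=" + quote_plus(quoted)

# Keywords extracted from the problematic sentence in the worked example.
keywords = ["motivated", "innovation", "risk-taking", "creativity"]
print(scholar_query_url(keywords))
# If the results are too narrow, drop or rephrase a keyword and search again,
# mirroring the "shorten or rephrase the sentence" step described above.
```

The query only narrows the candidate pool; opening the top results and confirming that their wording and logic actually support the sentence remains a manual step.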
The takeaway is not to abandon AI, but to verify and to understand manuscript structure first. Researchers who learn what belongs in each section and who validate citations through keyword searches and database checks can use ChatGPT’s argument generation while avoiding the citation errors that derail academic writing.
Cornell Notes
ChatGPT can produce strong, research-sounding arguments, but it may attach incorrect or unverifiable references. The remedy is to treat AI output as a starting point: rewrite the argument in one’s own words, then verify every citation in databases like Google Scholar. If a cited paper can’t be found or doesn’t match the claim, extract keywords from the sentence (e.g., “motivated,” “innovation,” “risk-taking,” “creativity”) and search for alternative sources that contain those concepts in the text. This workflow prevents plausible-sounding claims from being supported by nonexistent or mismatched references, and it also depends on knowing what content belongs in each thesis section (e.g., deeper construct relationships in the literature review).
- Why can ChatGPT’s references be a problem even when the argument sounds correct?
- What workflow helps researchers use ChatGPT without blindly trusting its citations?
- How does the session connect referencing errors to knowing where content belongs in a thesis?
- What should a researcher do when a specific ChatGPT citation can’t be found?
- How can Google Scholar keyword searching be used to find matching sources?
- What is the practical difference between using ChatGPT for ideas versus using it for citations?
Review Questions
- When a ChatGPT-provided reference cannot be found in Google Scholar, what step-by-step process should be followed to replace it?
- Which thesis section should contain detailed construct relationships, and why does that matter for integrating AI-generated content?
- How can keyword extraction and in-text phrase searching improve the accuracy of literature references?
Key Points
1. ChatGPT can generate credible-sounding arguments while still providing incorrect or unverifiable references, so citations must be checked.
2. Do not copy-paste ChatGPT output directly into a research paper; rewrite the argument in your own words and then attach verified sources.
3. Understand what belongs in each thesis section (e.g., detailed relationships and construct discussions belong in the literature review, not the introduction).
4. If a cited paper can’t be located (or doesn’t match the claim), replace it by searching with keywords extracted from the sentence.
5. Use Google Scholar to verify both existence and relevance by opening candidate papers and confirming the logic matches the claim.
6. Improve search results by shortening or rephrasing the sentence and using exact-phrase searches (inverted commas) and in-text keyword queries.