
Incorrect References by #ChatGPT! Finding the Right References for Literature

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube.

TL;DR

ChatGPT can generate credible-sounding arguments while still providing incorrect or unverifiable references, so citations must be checked.

Briefing

ChatGPT can generate literature-review-ready arguments, but it can also supply references that don't exist or don't match the claim being made. That mismatch becomes a serious problem when researchers copy the citations directly into a thesis or paper without verifying them, especially when the argument is plausible but the source details are wrong.

The session frames the core risk around how academic writing is structured. Different sections demand different kinds of content: introductions typically set context and avoid deep variable-by-variable conceptualization, while literature reviews are where detailed relationships among constructs belong. A researcher who doesn't know what belongs where is more likely to accept AI output at face value and then struggle to integrate it correctly into the right part of the manuscript.

To address the referencing problem, the guidance is practical: treat ChatGPT as a drafting assistant for ideas, not as a citation authority. A recommended workflow starts by generating a response with references, then rewriting the argument in one’s own words and using the citations only after checking that the cited papers truly exist and support the specific sentence.

A worked example centers on “entrepreneurial leadership,” “employee engagement,” “creativity,” and “risk-taking.” When the prompt is run, the output may include a full reference list that looks credible, and the argument reads as coherent. But the crucial test is whether the referenced authors and titles can be found in databases such as Google Scholar. In the example, one citation attributed to “Poon Chen and Lynn” could not be located with the expected title/author combination, even though the underlying claim sounded right: employees are more likely to feel motivated and committed when an organization values innovation, risk-taking, and creativity.

When a citation fails verification, the fix is to rebuild the reference using keywords from the problematic sentence. The approach is to extract key concepts—such as “motivated,” “innovation,” “risk-taking,” and “creativity”—and then search for papers that contain those terms in the text. The session suggests iteratively shortening or rephrasing the sentence to improve searchability, then opening candidate papers to confirm that the wording and logic align with the claim. Another tactic mentioned is using an “All intext string builder” style query in Google Scholar, including exact phrases in inverted commas, to surface documents where the relevant concepts appear.
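The keyword-extraction step described above can be sketched in code. This is an illustrative approximation, not a tool from the session: the stop-word list and the choice of exact phrase are assumptions made for the example.

```python
# Rebuild a search query from a problematic sentence by extracting key
# concepts and combining them with optional quoted exact phrases.
# The stop-word list below is an illustrative assumption.

STOP_WORDS = {"the", "a", "an", "is", "are", "more", "likely", "to", "and",
              "when", "feel", "employees", "organization", "values"}

def extract_keywords(sentence: str) -> list[str]:
    """Keep lowercase words not in the stop-word list, preserving order."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    seen, keywords = set(), []
    for w in words:
        if w and w not in STOP_WORDS and w not in seen:
            seen.add(w)
            keywords.append(w)
    return keywords

def build_query(keywords: list[str], exact_phrases=()) -> str:
    """Combine loose keywords with quoted exact phrases (inverted commas)."""
    parts = list(keywords) + [f'"{p}"' for p in exact_phrases]
    return " ".join(parts)

sentence = ("Employees are more likely to feel motivated and committed when "
            "an organization values innovation, risk-taking, and creativity.")
kws = extract_keywords(sentence)
print(build_query(kws, exact_phrases=["employee engagement"]))
# → motivated committed innovation risk-taking creativity "employee engagement"
```

Shortening or rephrasing the sentence before extraction, as the session suggests, simply changes which keywords survive the filter.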

The takeaway is not to abandon AI, but to verify and to understand manuscript structure first. Researchers who learn what belongs in each section and who validate citations through keyword searches and database checks can use ChatGPT’s argument generation while avoiding the citation errors that derail academic writing.

Cornell Notes

ChatGPT can produce strong, research-sounding arguments, but it may attach incorrect or unverifiable references. The remedy is to treat AI output as a starting point: rewrite the argument in one’s own words, then verify every citation in databases like Google Scholar. If a cited paper can’t be found or doesn’t match the claim, extract keywords from the sentence (e.g., “motivated,” “innovation,” “risk-taking,” “creativity”) and search for alternative sources that contain those concepts in the text. This workflow prevents plausible-sounding claims from being supported by nonexistent or mismatched references, and it also depends on knowing what content belongs in each thesis section (e.g., deeper construct relationships in the literature review).

Why can ChatGPT’s references be a problem even when the argument sounds correct?

The session highlights a mismatch: the logic of a claim can be sound while the citation details are wrong. In the example, a sentence about employees feeling motivated and committed when organizations value innovation, risk-taking, and creativity was plausible, but the attributed paper (“Poon Chen and Lynn”) could not be found in Google Scholar under the expected title/author combination. That means copying citations without verification can embed nonexistent or irrelevant sources into a thesis.

What workflow helps researchers use ChatGPT without blindly trusting its citations?

Generate an answer with references, then do not copy-paste it directly into the manuscript. Instead, read the argument, summarize it in one’s own words, and verify each cited paper in Google Scholar. If a paper can’t be located or doesn’t support the exact claim, replace it by finding a better match using keywords from the sentence.

How does the session connect referencing errors to knowing where content belongs in a thesis?

It stresses that academic sections have distinct roles. For example, introductions should not include detailed relationship discussions or variable-by-variable conceptualization; those belong in the literature review. If a researcher doesn’t understand this structure, they’re more likely to accept AI output as-is and struggle to integrate it correctly—making citation verification and placement harder.

What should a researcher do when a specific ChatGPT citation can’t be found?

Extract the key concepts from the problematic sentence and search for alternative literature that supports the same idea. The session suggests using keywords like “motivated,” “innovation,” “risk-taking,” and “creativity,” then opening candidate papers to confirm that the sentence’s logic and wording align. It also recommends rephrasing or shortening the sentence to improve search results.
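The “open the candidate paper and confirm alignment” step can be approximated as a simple containment check. This is a sketch under the assumption that the paper's text is already available as a string; it is not a retrieval API from the session.

```python
def supports_claim(paper_text: str, required_terms: list[str]) -> bool:
    """Return True only if every required concept appears in the paper's text.

    A crude first-pass filter: a human still needs to read the paper to
    confirm the logic, not just the vocabulary, matches the claim.
    """
    text = paper_text.lower()
    return all(term.lower() in text for term in required_terms)

# Hypothetical candidate abstract used only for illustration.
candidate = ("We find that organizations valuing innovation and risk-taking "
             "report higher creativity and more motivated employees.")
print(supports_claim(candidate, ["motivated", "innovation",
                                 "risk-taking", "creativity"]))
# → True
```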

How can Google Scholar keyword searching be used to find matching sources?

The session recommends searching for papers where relevant terms appear in the text, including using phrase search with inverted commas for exact phrases. It also mentions an “All intext string builder” approach to combine multiple keywords (e.g., employees + motivation + innovation) so Google Scholar returns documents containing those terms in-context.
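A minimal way to turn such a keyword combination into a Google Scholar search URL is shown below. The `q` parameter is Scholar's standard query parameter; combining loose keywords with a quoted phrase is an assumption about what the session's “string builder” produces.

```python
from urllib.parse import urlencode

def scholar_url(keywords, exact_phrase=None):
    """Build a Google Scholar search URL from loose keywords plus an
    optional exact phrase wrapped in quotes (inverted commas)."""
    terms = list(keywords)
    if exact_phrase:
        terms.append(f'"{exact_phrase}"')
    return "https://scholar.google.com/scholar?" + urlencode({"q": " ".join(terms)})

print(scholar_url(["employees", "motivation", "innovation"], "risk-taking"))
# → https://scholar.google.com/scholar?q=employees+motivation+innovation+%22risk-taking%22
```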

What is the practical difference between using ChatGPT for ideas versus using it for citations?

ChatGPT can help draft coherent arguments, but citations must be validated. The session’s method treats AI as an idea generator: verify existence and relevance of sources, then use those references only after confirming they support the specific claim. This prevents plausible writing from being backed by incorrect bibliographic information.

Review Questions

  1. When a ChatGPT-provided reference cannot be found in Google Scholar, what step-by-step process should be followed to replace it?
  2. Which thesis section should contain detailed construct relationships, and why does that matter for integrating AI-generated content?
  3. How can keyword extraction and in-text phrase searching improve the accuracy of literature references?

Key Points

  1. ChatGPT can generate credible-sounding arguments while still providing incorrect or unverifiable references, so citations must be checked.
  2. Do not copy-paste ChatGPT output directly into a research paper; rewrite the argument in your own words and then attach verified sources.
  3. Understand what belongs in each thesis section (e.g., detailed relationships and construct discussions belong in the literature review, not the introduction).
  4. If a cited paper can’t be located (or doesn’t match the claim), replace it by searching with keywords extracted from the sentence.
  5. Use Google Scholar to verify both existence and relevance by opening candidate papers and confirming the logic matches the claim.
  6. Improve search results by shortening or rephrasing the sentence and using exact-phrase searches (inverted commas) and in-text keyword queries.

Highlights

ChatGPT’s citations can fail even when the underlying argument is logically sound—verification is the difference between usable drafting and academic risk.
A citation attributed to “Poon Chen and Lynn” could not be found in Google Scholar, demonstrating how plausible claims can still be backed by wrong references.
When a reference breaks, the fix is to rebuild it: extract keywords from the claim and search for literature where those concepts appear in-context.
Knowing thesis structure matters: introductions shouldn’t carry the deep variable-relationship work that belongs in the literature review.