
Why ChatGPT Should Not Be Used for Academic Research

Research and Analysis · 4 min read

Based on Research and Analysis's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT can assist with summarizing long articles and drafting writing, but it should not be treated as a trustworthy source generator for academic citations.

Briefing

ChatGPT can speed up parts of academic work—summarizing articles and drafting text—but its outputs are unreliable when they’re used as sources for research claims, especially citations. A key warning emerges from an example on “abductive reasoning” linking Green HRM to organizational identification: ChatGPT can produce a plausible-sounding chain of logic, yet that doesn’t guarantee the underlying evidence exists or is correctly attributed.

In the example, Green HRM is treated as the integration of environmental sustainability principles into human resource management practices, while organizational identification is defined as the degree to which employees identify with and feel committed to their organization. From there, ChatGPT generates a reasoning pathway: Green HRM practices may increase organizational identification because sustainability initiatives can align with employees’ personal values and beliefs, strengthening their sense of connection and commitment. The response also suggests a secondary mechanism—green HR practices can boost employee engagement and motivation, which can further support identification.

The problem appears when the conversation shifts from generating reasoning to supplying academic references. When asked for citations to support the claim, ChatGPT provides specific author-and-year references (including “Khan et al 2016” and “Gang et al 2018”). But a follow-up check using Google Scholar finds that at least one of those citations cannot be located in the scholarly index. That mismatch is treated as a concrete sign that citation lists produced by ChatGPT may be fabricated, inaccurate, or otherwise not verifiable.

The takeaway is practical: even if ChatGPT produces a well-structured argument that can be edited and reused, citations generated by the system should not be accepted at face value. For literature reviews, the safer workflow is to search and verify using established academic databases and the original research articles themselves, rather than relying on ChatGPT’s reference suggestions. The guidance is aimed at keeping researchers “alert” during use—benefiting from drafting and summarization while maintaining strict verification standards for evidence.

Overall, the core finding is a reliability gap: ChatGPT can help with writing and reasoning drafts, but it cannot be trusted as a source generator for academic literature. That distinction matters because literature reviews and evidence-based claims depend on traceable, checkable scholarship; a single incorrect citation can undermine credibility and derail the research process.

Cornell Notes

ChatGPT can be useful for academic tasks like summarizing long articles and drafting text, and it can generate plausible reasoning linking concepts such as Green HRM and organizational identification. However, citations produced by ChatGPT may fail verification. In an example, ChatGPT offered references (e.g., “Khan et al 2016” and “Gang et al 2018”) to support the reasoning, but at least one reference could not be found on Google Scholar. The implication for researchers is clear: treat ChatGPT-generated citations as untrusted until independently checked, and build literature reviews from authentic database searches and original papers.

How does ChatGPT handle the relationship between Green HRM and organizational identification in the example?

It first defines Green HRM as embedding environmental sustainability principles into human resource management practices, and organizational identification as the extent to which employees identify with and feel committed to an organization. It then produces a reasoning pathway: green HR practices may increase organizational identification because sustainability initiatives can align with employees’ personal values and beliefs, strengthening their sense of connection and commitment. It also adds a supporting mechanism—green HR practices can increase employee engagement and motivation, which can further reinforce organizational identification.

Why is the citation step the main risk when using ChatGPT for academic research?

The risk arises when the task shifts from generating logic to claiming evidence. When asked for citations, ChatGPT provides specific author-and-year references to support the reasoning. But those references may not be real or may not be discoverable in scholarly indexes. In the example, a Google Scholar check fails to locate at least one of the cited works, indicating the citation output is not reliably verifiable.

What does the Google Scholar check demonstrate about ChatGPT’s references?

It demonstrates that at least some citations generated by ChatGPT cannot be confirmed through a standard academic search tool. If a reference cannot be found on Google Scholar, researchers cannot reliably trace it back to the original study, which undermines the credibility of the literature review and any claims built on that citation.

What workflow does the transcript recommend for building a literature review?

It recommends using authentic academic databases and directly consulting research articles rather than relying on ChatGPT for literature review citations. The core practice is verification: if ChatGPT provides references, they should be double-checked independently before being used in a literature review.
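As a rough illustration of that verification step, the sketch below splits AI-suggested references into confirmed and unverified groups. The `VERIFIED_INDEX` set is a stand-in for results from a real scholarly search (Google Scholar, Scopus, Crossref, etc.), and which of the example references would actually verify is purely illustrative; this is a workflow sketch, not the method shown in the video.

```python
# Hypothetical sketch: filter AI-suggested citations against a verified index.
# In practice, each reference would be searched in a real scholarly database;
# here a small local set stands in for those search results.

AI_SUGGESTED = ["Khan et al 2016", "Gang et al 2018"]

# Illustrative stand-in for what a genuine database search confirmed.
VERIFIED_INDEX = {"Khan et al 2016"}

def verify_citations(suggested, index):
    """Split suggestions into those found in the index and those that are not."""
    confirmed = [ref for ref in suggested if ref in index]
    unverified = [ref for ref in suggested if ref not in index]
    return confirmed, unverified

confirmed, unverified = verify_citations(AI_SUGGESTED, VERIFIED_INDEX)
print("Safe to cite after checking the paper itself:", confirmed)
print("Re-check independently or drop:", unverified)
```

The point of the design is that nothing in `AI_SUGGESTED` reaches the literature review directly; every reference must pass through an independent lookup first.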

How can researchers still use ChatGPT without adopting its citation errors?

Use it for drafting and conceptual work—such as summarizing long texts or generating an argument structure—then replace any system-supplied citations with verified sources found through database searches. The transcript’s emphasis is on keeping reasoning drafts while treating citations as requiring independent confirmation.

Review Questions

  1. What are the two distinct ways ChatGPT can be helpful in the transcript, and how do their reliability levels differ?
  2. In the Green HRM example, what mechanism does ChatGPT propose for why green HR practices could increase organizational identification?
  3. What verification step does the transcript recommend before using any ChatGPT-provided reference in a literature review?

Key Points

  1. ChatGPT can assist with summarizing long articles and drafting writing, but it should not be treated as a trustworthy source generator for academic citations.
  2. Plausible academic reasoning produced by ChatGPT may be usable as a draft, but evidence claims still require verification.
  3. In the Green HRM and organizational identification example, ChatGPT’s logic is coherent, yet its citation suggestions can fail basic traceability checks.
  4. A Google Scholar search can reveal that ChatGPT-provided references are not discoverable, signaling unreliability.
  5. Literature reviews should be built from authentic database searches and original research articles, not from ChatGPT’s reference lists.
  6. Any citations generated by ChatGPT should be double-checked before inclusion in scholarly work.

Highlights

ChatGPT can generate a convincing reasoning chain linking Green HRM to organizational identification, but that does not guarantee the existence of supporting research.
When asked for citations, ChatGPT may produce references that cannot be found on Google Scholar, raising the risk of fabricated or incorrect sourcing.
The safest approach for literature reviews is to search and verify original studies through established academic databases rather than relying on ChatGPT’s bibliography output.

Mentioned

  • Dr Kamrath