
Don't use AI for research until you've watched this...NEW Rules

Andy Stapleton · 4 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Disclose any generative AI or AI-assisted technology use in the manuscript, typically in acknowledgements/disclosure statements, and include the tool and large language model used (e.g., GPT-4, Claude).

Briefing

Generative AI can be used in academic research—but only under strict, journal-specific rules that prioritize disclosure, originality, and credibility. The most consequential requirement across major publishers: authors must clearly disclose any AI use, typically in an acknowledgement or disclosure statement, and they must specify the tool and how it was used (including the large language model, such as GPT-4 or Claude). This matters because it lets readers and editors trace what was produced with AI assistance versus what reflects the author’s own judgment.

A second major line is about originality. Multiple journals draw a bright boundary between language support and research fabrication: AI should not be used to create or alter images, and it should not be used to fabricate results or to change the substance of findings. In practice, that means AI can help with wording, structure, and readability, but it can’t be used to invent data, manipulate figures, or “improve” conclusions by adding new claims that the underlying research doesn’t support. Even when AI is used for writing, the guidance emphasizes that authors remain responsible for the accuracy of information and proper referencing—especially because large language models can produce fluent but incorrect or made-up details.

The rules also address authorship and accountability. Large language models such as ChatGPT do not qualify as authors because they can't be held accountable in the way a human researcher can. Journals explicitly prohibit listing AI systems as authors or co-authors (for example, "ChatGPT et al." is not acceptable) and require authors to ensure that the work still represents their own ideas. This accountability theme extends to how AI-assisted text is treated: authors must be comfortable that the final claims truly reflect their interpretation of the results.

Finally, peer review is treated as a confidentiality- and expertise-driven process that shouldn’t be outsourced to AI. Reviewers are generally discouraged or prohibited from using AI tools to generate or write review reports, since uploading manuscripts to a model can breach confidentiality and undermine the “peer” aspect of peer review. Some guidance allows limited AI use by editors or reviewers to improve the clarity of written feedback, but only with transparency—declared upon submission of the peer review report—and not as a substitute for evaluating the science.

Taken together, the guidance forms a practical checklist: disclose AI use with specifics, use AI for language and readability rather than research substance, never fabricate or alter research artifacts, keep authorship human, and protect peer review from AI-driven evaluation. The goal isn’t banning AI; it’s preventing it from eroding trust in what research claims and who can be held responsible for those claims.

Cornell Notes

Academic journals increasingly permit generative AI only with guardrails. Authors must disclose AI use (often in acknowledgements/disclosure statements) and describe the tool, the large language model, and how it was used. AI is allowed for language improvements such as readability, spelling, and structuring text, but it must not be used to fabricate or misrepresent research data, alter images, or add unsupported conclusions. Large language models like ChatGPT cannot be listed as authors because they can’t be held accountable. Peer review should not be replaced by AI evaluation; confidentiality and human expertise remain central, with only limited, transparent AI use sometimes allowed for improving feedback writing.

What disclosure do journals typically require when AI is used in manuscript preparation?

Journals commonly require authors to disclose whether generative AI or AI-assisted technology was used, and to do so transparently in a disclosure/acknowledgement section (and sometimes in the methods section depending on the journal). The disclosure should include the type of tool and the specific large language model used—examples mentioned include GPT-4 and Claude—plus how the tool contributed to the manuscript.

Where is the line between acceptable AI assistance and prohibited research misconduct?

The permitted zone is language support: improving readability, academic tone, spelling/grammar, and structuring text so conclusions are communicated clearly. The prohibited zone includes using AI to fabricate results, misrepresent primary research data, alter images, or change the substance of conclusions in ways not supported by the underlying research.

Why can’t a large language model be listed as an author?

Journals treat authorship as an accountability role. Large language models such as ChatGPT do not meet authorship criteria because they can't be questioned or held responsible for the work's content in the way a human author can. As a result, AI systems should not be listed as authors or co-authors.

What does journal guidance say about using AI for peer review?

Reviewers generally should not use AI tools to generate or write their reviews because it can breach confidentiality if manuscript content is uploaded, and it can weaken the "peer" component of peer review. Some guidance allows limited AI use by editors or reviewers to improve the quality of written feedback, but it must be transparent and declared when submitting the peer review report.

How do journals handle the risk that AI can produce plausible but incorrect information?

Guidance emphasizes that authors remain responsible for accuracy and proper referencing. Because large language models can generate fluent text that may be wrong or made up, authors must verify claims and ensure supporting references are correct—treating AI output like a draft that still requires human fact-checking.

Review Questions

  1. What specific details should be included in an AI disclosure (tool type and large language model, and how it was used)?
  2. List three actions that journals generally prohibit AI from doing in research manuscripts.
  3. Why is peer review treated differently from manuscript writing when it comes to AI use?

Key Points

  1. Disclose any generative AI or AI-assisted technology use in the manuscript, typically in acknowledgements/disclosure statements, and include the tool and large language model used (e.g., GPT-4, Claude).

  2. Use AI for language and readability improvements, not for altering research substance or adding unsupported claims.

  3. Do not use AI to fabricate results, misrepresent primary research data, or create/alter images.

  4. Keep authorship human: large language models like ChatGPT cannot be listed as authors or co-authors.

  5. Verify AI-generated information and references yourself, since authors remain responsible for accuracy and citation correctness.

  6. Avoid using AI to write or generate peer review reports; protect confidentiality and preserve human expertise in evaluating science.

  7. If AI is used in peer review feedback writing under limited allowances, it must be transparent and declared when submitting the peer review report.

Highlights

Disclosure is not optional: journals expect authors to state whether AI was used and to specify the tool and large language model (such as GPT-4 or Claude).
AI can help polish language, but it must not fabricate results, alter images, or reshape conclusions beyond the underlying research.
Large language models like ChatGPT cannot be authors because they can’t be held accountable in the way human researchers can.
Peer review should not be outsourced to AI; confidentiality and the “peer” element remain central, with only limited, transparent AI use sometimes allowed for feedback writing.
