Don't use AI for research until you've watched this...NEW Rules
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Generative AI can be used in academic research—but only under strict, journal-specific rules that prioritize disclosure, originality, and credibility. The most consequential requirement across major publishers: authors must clearly disclose any AI use, typically in an acknowledgement or disclosure statement, and they must specify the tool and how it was used (including the large language model, such as GPT-4 or Claude). This matters because it lets readers and editors trace what was produced with AI assistance versus what reflects the author’s own judgment.
A second major theme is originality. Multiple journals draw a bright line between language support and research fabrication: AI should not be used to create or alter images, and it should not be used to fabricate results or to change the substance of findings. In practice, that means AI can help with wording, structure, and readability, but it can’t be used to invent data, manipulate figures, or “improve” conclusions by adding new claims that the underlying research doesn’t support. Even when AI is used for writing, the guidance emphasizes that authors remain responsible for the accuracy of information and proper referencing, especially because large language models can produce fluent but incorrect or made-up details.
The rules also address authorship and accountability. Large language models such as ChatGPT do not qualify as authors because they can’t be held accountable in the way a human researcher can. Journals explicitly prohibit listing AI systems as authors or co-authors (for example, “ChatGPT et al.” is not acceptable) and require authors to ensure that the work still represents their own ideas. This accountability theme extends to how AI-assisted text is treated: authors must be comfortable that the final claims truly reflect their interpretation of the results.
Finally, peer review is treated as a confidentiality- and expertise-driven process that shouldn’t be outsourced to AI. Reviewers are generally discouraged or prohibited from using AI tools to generate or write review reports, since uploading manuscripts to a model can breach confidentiality and undermine the “peer” aspect of peer review. Some guidance allows limited AI use by editors or reviewers to improve the clarity of written feedback, but only with transparency—declared upon submission of the peer review report—and not as a substitute for evaluating the science.
Taken together, the guidance forms a practical checklist: disclose AI use with specifics, use AI for language and readability rather than research substance, never fabricate or alter research artifacts, keep authorship human, and protect peer review from AI-driven evaluation. The goal isn’t banning AI; it’s preventing it from eroding trust in what research claims and who can be held responsible for those claims.
Cornell Notes
Academic journals increasingly permit generative AI only with guardrails. Authors must disclose AI use (often in acknowledgements/disclosure statements) and describe the tool, the large language model, and how it was used. AI is allowed for language improvements such as readability, spelling, and structuring text, but it must not be used to fabricate or misrepresent research data, alter images, or add unsupported conclusions. Large language models like ChatGPT cannot be listed as authors because they can’t be held accountable. Peer review should not be replaced by AI evaluation; confidentiality and human expertise remain central, with only limited, transparent AI use sometimes allowed for improving feedback writing.
- What disclosure do journals typically require when AI is used in manuscript preparation?
- Where is the line between acceptable AI assistance and prohibited research misconduct?
- Why can’t a large language model be listed as an author?
- What does journal guidance say about using AI for peer review?
- How do journals handle the risk that AI can produce plausible but incorrect information?
Review Questions
- What specific details should be included in an AI disclosure (tool type and large language model, and how it was used)?
- List three actions that journals generally prohibit AI from doing in research manuscripts.
- Why is peer review treated differently from manuscript writing when it comes to AI use?
Key Points
1. Disclose any generative AI or AI-assisted technology use in the manuscript, typically in acknowledgements/disclosure statements, and include the tool and large language model used (e.g., GPT-4, Claude).
2. Use AI for language and readability improvements, not for altering research substance or adding unsupported claims.
3. Do not use AI to fabricate results, misrepresent primary research data, or create/alter images.
4. Keep authorship human: large language models like ChatGPT cannot be listed as authors or co-authors.
5. Verify AI-generated information and references yourself, since authors remain responsible for accuracy and citation correctness.
6. Avoid using AI to write or generate peer review reports; protect confidentiality and preserve human expertise in evaluating science.
7. If AI is used in peer review feedback writing under limited allowances, it must be transparent and declared to the handling editor.