
AI Publishing Policies for Authors || AI Tools Policies of Journals and Publishers || Hindi

eSupport for Research · 5 min read

Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

AI tools are generally permitted only for limited, declared purposes such as improving language, readability, and proofreading—not for replacing authorship or generating research claims without accountability.

Briefing

AI tools are increasingly showing up in manuscript writing, review workflows, and editorial handling—but journal and publisher rules are tightening around disclosure, authorship, and confidentiality. The central takeaway is straightforward: generative AI can be used in limited, permitted ways (often for language improvement), yet it must be transparently declared, never used to fabricate or “generate” review work, and never treated as an author. That matters because peer review integrity and research accountability depend on knowing what was produced by humans versus automated systems, and on preventing confidential or unpublished information from leaking into third-party AI platforms.

A practical way to read the policies is by role. For editors and reviewers, the transcript emphasizes a common pattern: there are typically no blanket "AI is allowed" rules, and reviewers are generally prohibited from using AI tools to evaluate manuscripts or to generate review reports. The concern is twofold: AI can reproduce or invent content, and, more importantly, reviewers may upload manuscripts or draft assessments into AI systems that could expose unpublished work or sensitive information. For authors, the rules are more permissive but still conditional: generative AI may be used only to improve language, readability, and proofreading before submission, not to replace authorship responsibilities or to generate content without disclosure.

Disclosure is the thread that ties the policies together. Authors are expected to declare AI tool usage in the manuscript (and often also in cover letters), including which tool was used (the transcript gives ChatGPT as an example), where it was used, and why it was used. The guidance also stresses that AI-generated content must be acknowledged as such and that authors must remain accountable for accuracy and validity. In other words, AI can assist with wording, but it cannot be used to bypass responsibility for claims, methods, or results.

The transcript also flags confidentiality and data handling. Unpublished or confidential manuscript information should not be uploaded to AI platforms unless the platform’s data protections are clearly appropriate. Even when a tool is widely used, the policy logic remains: if the AI system learns from or stores inputs, that can create ethical and legal risk—especially for work that could be patentable, copyrighted, or otherwise sensitive.

Several publisher-specific examples are mentioned to show how these principles appear across different outlets, including Elsevier, Taylor & Francis, Oxford University Press, and Springer Nature. The transcript notes that many journals align with broader frameworks such as the COPE (Committee on Publication Ethics) guidance and that additional publisher policies may adapt over time. It also highlights a key authorship boundary: AI tools should not be listed as authors, and any AI-assisted work must be properly acknowledged.

Finally, the transcript frames this as an ongoing compliance task. Policies change as detection tools, AI capabilities, and ethics expectations evolve. The recommended approach is to keep updating one’s knowledge, follow each journal’s AI disclosure requirements, and treat peer review integrity as non-negotiable—especially by avoiding AI-generated review reports and by protecting unpublished information.

Cornell Notes

Journals and publishers are tightening rules for using AI tools in writing and peer review. The core requirement is transparency: authors must disclose any generative AI use (including which tool, where it was used, and why), typically in the manuscript and sometimes in the cover letter. AI use is generally allowed only for limited purposes such as improving language, readability, and proofreading, not for generating content or replacing authorship, and not for fabricating review reports. Reviewers are commonly expected not to use AI to evaluate manuscripts or generate review assessments, and confidential or unpublished information should not be uploaded to AI platforms without appropriate safeguards. These policies often align with COPE-style research integrity and publishing ethics guidance and can vary by publisher, so authors must check each journal's requirements.

Why do policies insist that AI tools must not be listed as authors?

The transcript draws a clear boundary between assistance and responsibility. AI tools can help with language or other permitted support, but they cannot be held accountable for the research’s accuracy, validity, or ethical compliance. As a result, AI is not eligible for authorship, and any AI-assisted content must be acknowledged in the manuscript so human authors remain accountable for what appears in the paper.

What does “disclose AI use” mean in practice for authors?

Disclosure is not just a yes/no statement. The transcript emphasizes declaring which generative AI tool was used (ChatGPT is given as an example), where it was used in the writing process, and why it was used. It also notes that disclosure may be required in both the manuscript and the cover letter during submission, and that authors must keep the declaration aligned with COPE-style expectations for transparency and accountability. In practice, such a declaration often reads something like: "During the preparation of this work, the authors used ChatGPT to improve language and readability; the authors reviewed and edited the output and take full responsibility for the content of the publication."

How do the rules treat AI use by reviewers and editors compared with authors?

Authors face conditional permission, often limited to language improvement, paired with mandatory disclosure. Reviewers are typically barred from using AI to evaluate manuscripts or to generate review reports. The transcript highlights that reviewers should avoid uploading manuscripts or draft assessments into AI systems, because doing so can leak unpublished information and undermine the integrity of the peer review process.

What confidentiality risks come up when using AI tools with manuscript files?

The transcript warns against uploading unpublished or confidential manuscript information to AI platforms, especially if the platform stores inputs or uses them in ways that could expose sensitive work. It notes that unpublished content could include material that is patentable, copyrighted, or otherwise not meant for public access, so authors and reviewers should avoid unsafe data handling.

What kinds of AI assistance are usually considered acceptable under these policies?

The transcript describes acceptable use as generative AI used to improve language, readability, and proofreading before submission. The key limitation is that AI assistance should enhance writing quality rather than replace the author’s responsibility for the research content. Authors must still ensure the final manuscript is accurate and valid and must declare the tool usage.

How should researchers keep up with changing AI policies across journals and publishers?

The transcript frames AI policy compliance as an ongoing process. Publisher rules can evolve as new guidance appears (including COPE-aligned updates) and as AI capabilities and ethics expectations change. The recommended approach is to check each journal’s current AI policy, update practices when new requirements arrive, and maintain a record of AI tool usage for disclosure.

Review Questions

  1. What specific information must be disclosed when using generative AI in manuscript preparation?
  2. Why are reviewers generally discouraged from using AI to generate review reports?
  3. How do confidentiality and data handling concerns influence whether authors or reviewers should upload manuscript content to AI platforms?

Key Points

  1. AI tools are generally permitted only for limited, declared purposes such as improving language, readability, and proofreading, not for replacing authorship or generating research claims without accountability.

  2. Authors must disclose generative AI use, including which tool was used, where it was used, and why, typically in the manuscript and sometimes in the cover letter.

  3. AI tools should never be listed as authors; human authors remain responsible for accuracy, validity, and ethical compliance.

  4. Reviewers are commonly prohibited from using AI to evaluate manuscripts or generate review reports, and they should avoid uploading manuscripts into AI systems that could expose unpublished work.

  5. Unpublished or confidential manuscript information should not be uploaded to AI platforms unless data handling safeguards are clearly appropriate.

  6. Publisher policies vary but often align with COPE-style research integrity and publishing ethics guidance, so each journal's current rules must be checked.

  7. AI policy requirements are evolving, so researchers should periodically update their practices as new guidance and publisher rules appear.

Highlights

  • Generative AI can be used for language improvement, but it must be disclosed and cannot replace human responsibility for research accuracy.
  • Reviewers are typically expected not to use AI to generate review reports, and not to upload manuscripts into AI systems that could compromise confidentiality.
  • AI tools are not eligible for authorship; acknowledgment and disclosure keep accountability with human authors.
  • Confidential and unpublished work (including potentially patentable or copyrighted material) should not be fed into AI platforms without clear safeguards.

Topics

Mentioned

  • COPE