
AI Guidelines for Academic Research | Best Practices and Ethical Considerations | Ali MK Hindi

SciSpace · 6 min read

Based on SciSpace's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI can speed up research tasks like summarizing literature and refining research questions, but it must operate under continuous human oversight.

Briefing

AI use in academic research is acceptable—and often useful—when it functions as an assistant under strict human oversight, with transparent disclosure and careful verification of outputs. The central message is that AI can speed up literature scanning, help refine research questions, and support writing workflows, but it cannot be treated as an intellectual author or a substitute for expert judgment. That distinction matters because common failure modes—hallucinated or incorrect citations, copied AI-generated text, and overreliance that erodes researchers’ own voice—can trigger ethical violations, journal rejection, and institutional problems.

The session begins by mapping the anxieties researchers bring to AI: plagiarism concerns, the “border” between assistance and rewriting, hallucinations (including fake references), citation errors, and reliability of AI-based literature searches. Participants also raise fears about detection by plagiarism tools or AI detectors, plus practical issues like incorrect links or AI claiming access to paywalled sources it can’t actually retrieve. A recurring theme is accuracy—especially around references and whether AI outputs can be trusted enough to use.

From there, the talk lays out what AI is genuinely good at. It can summarize existing research, explain concepts and methods outside a researcher’s immediate expertise, and help generate or sharpen research questions. It can also assist with writing tasks such as improving phrasing, restructuring awkward sentences, and brainstorming counterarguments—particularly helpful for non-native English writers. But the guidance is consistent: AI should help with “help write” tasks, not do the writing. Human oversight is non-negotiable, including fact-checking, verifying citations, and ensuring the research still reflects the researcher’s own intellectual work.

The ethical line is framed around authorship and accountability. Major guidance converges on the principle that humans must remain the author and intellectual contributor: authorship requires substantial contributions to research design or conceptualization, drafting and critical revision, and approval of the final work. Since AI cannot intend, judge, or be held accountable, it cannot be an author. The talk also highlights three guideline frameworks: UNESCO’s broad principles (human rights, dignity, fairness, transparency, and oversight), IBM’s AI ethics framework (augmenting human intelligence and protecting privacy/data ownership), and COPE’s AI ethics guidelines for academic publishing. COPE’s emphasis is practical: disclose AI use in materials and methods, verify AI outputs, and ensure peer review remains human-led.

Unethical use is illustrated with concrete examples: submitting AI-generated text as original writing, having AI write entire thesis sections without intellectual input, treating AI summaries as a complete literature review without verification, and feeding confidential qualitative data without anonymization. Ethical alternatives include using AI for outlines, sentence-level editing, brainstorming methodologies, and summarizing papers only as a starting point—then integrating verified insights into one’s own work.

Finally, the session demonstrates SciSpace as a workflow tool. It shows a Google Scholar integration for fast “quick and dirty” scoping, including summary tables and follow-up question suggestions. It also highlights a deeper review mode that refines the research focus through clarification prompts and produces more comprehensive synthesis, while still requiring verification. A key differentiator claimed for SciSpace’s PDF chat feature is that it links claims back to specific parts of the paper, making cross-checking easier than with generic LLM outputs.

The talk closes with policy alignment advice: there is no universal rule on how much AI is allowed, so researchers must check university and journal requirements, disclose AI use, and avoid relying on AI detectors as a guarantee. The takeaway is a simple operating rule—AI is a tool, not an author; the researcher commands the process, critiques the output, and preserves their own academic voice through transparency and verification.

Cornell Notes

The session argues that AI can be used in academic research without compromising integrity when it stays in an assistant role: humans must remain the intellectual drivers, verify outputs, and disclose AI use. AI is positioned as effective for summarizing existing literature, explaining unfamiliar concepts, refining research questions, and supporting writing through editing, paraphrasing, and brainstorming. Ethical boundaries are tied to authorship and accountability—AI cannot be an author because it cannot intend, judge, or be held responsible. COPE-style guidance emphasizes disclosure in materials and methods and full human responsibility for errors like hallucinated references. The practical message is to use AI to speed up workflow, but never to replace expert critical appraisal or the researcher’s own voice.

What are the most common ethical and integrity risks researchers associate with AI in academic work?

The session highlights recurring pitfalls: plagiarism or “rewriting” that crosses into submitting AI-generated text as original work; hallucinations, especially fake or incorrect citations; incorrect links or reference details produced by generic LLMs; and reliability problems in AI-assisted literature searches. It also flags “no disclosure” as a major risk—using AI heavily without transparency can become a compliance issue even if the work is otherwise competent.

Why does the talk insist that AI cannot be an author, and what does authorship require instead?

Authorship is framed as an accountability mechanism. To be an author, a human must make substantial contributions to research design or conceptualization, draft and critically revise the work, and approve the final version (including protocols). AI cannot meet these criteria because it cannot conceive with intent, perform genuine intellectual revision or judgment, or be held accountable for the final output. That’s why AI can assist but cannot be listed as an author.

What kinds of tasks are presented as appropriate uses of AI in research and writing?

AI is presented as useful for: summarizing existing research; explaining concepts and methods outside one’s expertise; helping generate or refine research questions (with the emphasis that it should “help,” not replace starting from the researcher’s own idea); and writing support such as improving fluency, paraphrasing, restructuring wordy sentences, and brainstorming counterarguments. It’s also used for brainstorming methodologies and for scoping literature quickly—provided the researcher verifies everything before using it in a final manuscript.

How does the session define unethical versus ethical literature review behavior?

Unethical behavior is treating AI like a complete literature review engine—asking it to find papers and then accepting summaries without verification, or copying AI-generated review text as if it were the researcher’s own work. Ethical behavior is using AI to locate papers and summarize them as a starting point, then reading and validating key claims, methods, and citations. The researcher still owns the synthesis and must ensure accuracy.

What role does disclosure play, and how should researchers decide what to disclose?

Disclosure is treated as non-negotiable. The talk emphasizes that COPE-style guidance expects AI use to be declared in materials and methods, and that researchers remain accountable for AI outputs even when errors occur. Because policies vary, researchers must check both their university’s rules and the target journal’s AI policy—some require full declaration, some ban AI for certain tasks, and some allow limited use with transparency.

What practical workflow does the SciSpace demo illustrate for literature searching and reading?

The demo shows a Google Scholar integration that quickly screens search results and produces summary tables and follow-up questions. It then demonstrates a “deep review” mode that refines the research focus through clarification prompts and generates a more comprehensive synthesis (including thematic breakdowns). It also highlights PDF chat for summarizing and analyzing papers while pointing back to where information appears in the document, supporting cross-checking. Even with these tools, the session stresses verification and human judgment before incorporating outputs into research.

Review Questions

  1. List three ways AI can assist academic work that the session treats as legitimate, and explain what human oversight still must do.
  2. According to the session’s authorship framework, what specific human actions are required for authorship, and why does that exclude AI?
  3. Describe two examples of unethical AI use in writing or literature review, and contrast each with an ethical alternative.

Key Points

  1. AI can speed up research tasks like summarizing literature and refining research questions, but it must operate under continuous human oversight.
  2. Hallucinated or incorrect citations and fake references are major integrity risks; researchers must verify every citation and link before submission.
  3. AI cannot be an author because it cannot intend, draft or critically revise with accountability, or be held responsible for errors; humans must remain the intellectual contributors.
  4. Ethical use requires transparency: disclose AI use in materials and methods (or acknowledgements) and follow both university and journal AI policies.
  5. Unethical behavior includes submitting AI-generated text as original writing, letting AI write entire thesis sections, and treating AI summaries as a complete literature review without verification.
  6. AI detectors are unreliable for guaranteeing compliance; the safer approach is disclosure plus careful adherence to institutional and journal rules.
  7. SciSpace is presented as a workflow tool for scoping and synthesis (including Google Scholar integration and PDF chat), but outputs still require fact-checking and reading the underlying papers.

Highlights

  • The ethical boundary is accountability: humans must remain the author and intellectual contributor, while AI can only assist.
  • COPE-style guidance centers on disclosure in materials and methods and full human responsibility for AI errors, including hallucinated references.
  • SciSpace’s PDF chat is valued because it ties summaries back to specific parts of the paper, making verification easier than with generic LLM outputs.
  • The session treats “quick and dirty” literature scoping as acceptable, but copying AI-generated summaries into a final literature review is plagiarism.
  • There is no universal AI policy; researchers must check their university and target journal, and disclosure is the safest compliance strategy.

Topics

  • AI in Academic Research
  • Ethical Authorship
  • Hallucinated Citations
  • Literature Review Workflows
  • SciSpace Demo
