AI Guidelines for Academic Research | Best Practices and Ethical Considerations | Ali MK Hindi
Based on SciSpace's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
AI can speed up research tasks like summarizing literature and refining research questions, but it must operate under continuous human oversight.
Briefing
AI use in academic research is acceptable—and often useful—when it functions as an assistant under strict human oversight, with transparent disclosure and careful verification of outputs. The central message is that AI can speed up literature scanning, help refine research questions, and support writing workflows, but it cannot be treated as an intellectual author or a substitute for expert judgment. That distinction matters because common failure modes—hallucinated or incorrect citations, copied AI-generated text, and overreliance that erodes researchers’ own voice—can trigger ethical violations, journal rejection, and institutional problems.
The session begins by mapping the anxieties researchers bring to AI: plagiarism concerns, the boundary between assistance and rewriting, hallucinations (including fake references), citation errors, and the reliability of AI-based literature searches. Participants also raise fears about detection by plagiarism tools or AI detectors, plus practical issues like incorrect links or AI claiming access to paywalled sources it cannot actually retrieve. A recurring theme is accuracy, especially around references and whether AI outputs can be trusted enough to use.
From there, the talk lays out what AI is genuinely good at. It can summarize existing research, explain concepts and methods outside a researcher's immediate expertise, and help generate or sharpen research questions. It can also assist with writing tasks such as improving phrasing, restructuring awkward sentences, and brainstorming counterarguments, which is particularly helpful for non-native English writers. But the guidance is consistent: AI should assist with the writing, not do the writing. Human oversight is non-negotiable, including fact-checking, verifying citations, and ensuring the research still reflects the researcher's own intellectual work.
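Citation verification is one oversight step that can be partially scripted. The snippet below is a minimal illustrative sketch, not a workflow from the talk: it spot-checks whether an AI-suggested DOI is registered with the public CrossRef REST API and whether the recorded title roughly matches the title the AI attached to it. The sample entry and the 0.8 similarity threshold are assumptions for demonstration, and a failed check simply means the reference needs manual review.

```python
# Minimal sketch (assumption, not from the talk): spot-check AI-suggested
# references against the public CrossRef REST API (api.crossref.org).
import difflib

import requests


def doi_matches_title(doi: str, claimed_title: str, threshold: float = 0.8) -> bool:
    """Return True if the DOI is registered with CrossRef and its recorded
    title roughly matches the title the AI attached to the citation."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:  # unregistered DOI: possibly hallucinated
        return False
    titles = resp.json()["message"].get("title") or [""]
    similarity = difflib.SequenceMatcher(
        None, claimed_title.lower(), titles[0].lower()
    ).ratio()
    return similarity >= threshold


# Example AI-suggested citation to verify before submission.
suggested = {"10.1038/s41586-020-2649-2": "Array programming with NumPy"}
for doi, title in suggested.items():
    print(doi, "looks consistent" if doi_matches_title(doi, title) else "CHECK MANUALLY")
```

A title match here is only a first filter; the talk's standard remains reading and verifying the actual source.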
The ethical line is framed around authorship and accountability. Major guidance converges on the principle that humans must remain the authors and intellectual contributors: authorship requires substantial contributions to research design or conceptualization, drafting and critical revision, and approval of the final work. Since AI cannot intend, judge, or be held accountable, it cannot be an author. The talk also highlights three guideline frameworks: UNESCO's broad principles (human rights, dignity, fairness, transparency, and oversight), IBM's AI ethics framework (augmenting human intelligence and protecting privacy and data ownership), and COPE's AI ethics guidelines for academic publishing. COPE's emphasis is practical: disclose AI use in materials and methods, verify AI outputs, and ensure peer review remains human-led.
Unethical use is illustrated with concrete examples: submitting AI-generated text as original writing, having AI write entire thesis sections without intellectual input, treating AI summaries as a complete literature review without verification, and feeding confidential qualitative data without anonymization. Ethical alternatives include using AI for outlines, sentence-level editing, brainstorming methodologies, and summarizing papers only as a starting point—then integrating verified insights into one’s own work.
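On the anonymization point, a simple pre-processing pass can strip obvious identifiers before any excerpt reaches an AI tool. The sketch below is a hedged illustration rather than a procedure from the session: the participant-code mapping, the regex patterns, and the sample transcript line are all assumptions, and pattern-based redaction is only a first pass, not a guarantee of de-identification.

```python
# Minimal sketch (assumption, not from the session): pre-anonymize
# qualitative excerpts before sharing them with any AI tool.
import re


def anonymize(text: str, participants: dict[str, str]) -> str:
    """Replace known participant names with codes, then redact common
    identifiers (emails and phone-like numbers)."""
    for name, code in participants.items():
        text = re.sub(re.escape(name), code, text, flags=re.IGNORECASE)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    return text


excerpt = "Jane Doe (jane.doe@uni.edu, +1 555 010 7788) said the sessions helped."
print(anonymize(excerpt, {"Jane Doe": "P01"}))
# -> "P01 ([EMAIL], [PHONE]) said the sessions helped."
```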
Finally, the session demonstrates SciSpace as a workflow tool. It shows a Google Scholar integration for fast, "quick and dirty" scoping, including summary tables and follow-up question suggestions. It also highlights a deeper review mode that refines the research focus through clarification prompts and produces a more comprehensive synthesis, while still requiring verification. A key differentiator claimed for SciSpace's PDF chat feature is that it links claims back to specific parts of the paper, making cross-checking easier than with generic LLM outputs.
The talk closes with policy alignment advice: there is no universal rule on how much AI is allowed, so researchers must check university and journal requirements, disclose AI use, and avoid relying on AI detectors as a guarantee. The takeaway is a simple operating rule—AI is a tool, not an author; the researcher commands the process, critiques the output, and preserves their own academic voice through transparency and verification.
Cornell Notes
The session argues that AI can be used in academic research without compromising integrity when it stays in an assistant role: humans must remain the intellectual drivers, verify outputs, and disclose AI use. AI is positioned as effective for summarizing existing literature, explaining unfamiliar concepts, refining research questions, and supporting writing through editing, paraphrasing, and brainstorming. Ethical boundaries are tied to authorship and accountability—AI cannot be an author because it cannot intend, judge, or be held responsible. COPE-style guidance emphasizes disclosure in materials and methods and full human responsibility for errors like hallucinated references. The practical message is to use AI to speed up workflow, but never to replace expert critical appraisal or the researcher’s own voice.
What are the most common ethical and integrity risks researchers associate with AI in academic work?
Why does the talk insist that AI cannot be an author, and what does authorship require instead?
What kinds of tasks are presented as appropriate uses of AI in research and writing?
How does the session define unethical versus ethical literature review behavior?
What role does disclosure play, and how should researchers decide what to disclose?
What practical workflow does the SciSpace demo illustrate for literature searching and reading?
Review Questions
- List three ways AI can assist academic work that the session treats as legitimate, and explain what human oversight still must do.
- According to the session’s authorship framework, what specific human actions are required for authorship, and why does that exclude AI?
- Describe two examples of unethical AI use in writing or literature review, and contrast each with an ethical alternative.
Key Points
1. AI can speed up research tasks like summarizing literature and refining research questions, but it must operate under continuous human oversight.
2. Hallucinated or incorrect citations and fake references are major integrity risks; researchers must verify every citation and link before submission.
3. AI cannot be an author because it cannot intend, cannot draft or critically revise with accountability, and cannot be held responsible for errors; humans must remain the intellectual contributors.
4. Ethical use requires transparency: disclose AI use in materials and methods (or acknowledgements) and follow both university and journal AI policies.
5. Unethical behavior includes submitting AI-generated text as original writing, letting AI write entire thesis sections, and treating AI summaries as a complete literature review without verification.
6. AI detectors are unreliable as a guarantee of compliance; the safer approach is disclosure plus careful adherence to institutional and journal rules.
7. SciSpace is presented as a workflow tool for scoping and synthesis (including Google Scholar integration and PDF chat), but its outputs still require fact-checking and reading the underlying papers.