What do universities say about ethical AI use by students?
Based on a YouTube video by qualitative researcher Dr Kriukow. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Ethical AI use keeps the student responsible for original intellectual contribution; AI should support, not replace, the student’s work.
Briefing
Universities increasingly treat AI as unavoidable in student work—but they draw a hard line between using AI to support learning and using it to produce “original content” that should come from the student. The practical takeaway is straightforward: ethical AI use means the student remains the intellectual driver, while AI functions as a research assistant, writing coach, or analysis accelerator rather than an invisible replacement.
A major theme across the guidelines Dr Kriukow reviewed is the distinction between AI-generated output and AI-supported work. Relying on AI to write an entire chapter or dissertation is framed as unethical because it substitutes for the student’s intellectual contribution. The same logic applies to data analysis: simply feeding data into ChatGPT and returning results is treated as producing content without the researcher’s involvement. By contrast, using AI to support analysis—when the student directs the process with specific prompts, checks outputs, and stays accountable—can be acceptable. The emphasis is on control, not convenience.
The transcript also maps where AI use tends to be most defensible during the dissertation lifecycle. In the literature review stage, AI can help with brainstorming research gaps, summarizing existing studies, and sorting articles by themes—especially tools like SciSpace that help triage large volumes of papers so students can decide what to read in depth. In writing, AI should not be used as a ghostwriter. Instead, it can generate structure ideas, suggest bullet points, provide feedback on clarity and grammar, and help with rephrasing or transitions—so long as the student reviews and keeps the work aligned with their own argument.
Data analysis is treated as the most sensitive area. The core concern is the lack of an audit trail—evidence of how decisions were made and how the student arrived at results. Full automation is discouraged, but cautious, incremental use is presented as workable: generating initial codes, refining themes, organizing codes into categories, and stress-testing logic in ways similar to intercoder reliability. Even then, the student must be able to demonstrate participation in the process and provide documentation.
Finally, the transcript argues that students should not hide AI use. Some students avoid disclosure out of fear that any mention will trigger suspicion or penalties, including cases where AI detectors falsely flag human-written work. The countermeasure recommended in the university guidance is transparency: reference the tools used, link to them when appropriate, describe the role AI played, and document the workflow. That includes saving prompts and screenshots, showing how AI outputs were incorporated or adapted, and consulting assignment-specific rules from supervisors or lecturers.
The overall message is less about banning AI than about protecting academic integrity and skill development. Students are urged to ask a simple question: does AI use harm learning and professional growth, or does it support the student’s own thinking? Universities highlighted in the review include Newcastle University, Oxford, Stanford University, MacE University, and London School of Hygiene and Tropical Medicine, all of which—despite different emphases—converge on the same principle: AI should assist, not replace, the student’s original work and accountability.
Cornell Notes
Universities increasingly accept that AI will be used in dissertations, but they require students to keep responsibility for original work. Ethical use means AI supports learning and research tasks—like brainstorming, summarizing literature, improving writing clarity, or helping generate and refine coding—without replacing the student’s intellectual contribution. Full outsourcing (e.g., having AI write chapters or return analysis results) is treated as unethical because it removes the student’s role. The strongest recurring requirement is transparency and documentation: students should be able to provide an audit trail, including prompts/screenshots and a clear description of how AI affected outputs. This matters because it protects academic integrity and reduces the risk of accusations based on misunderstandings or detector errors.
- What counts as “original content,” and why is that line so important in dissertation ethics?
- How can AI be used during the literature review without crossing ethical boundaries?
- What’s the recommended approach to AI in writing and proofreading?
- Why is data analysis treated differently from writing, and what does “audit trail” mean here?
- What does transparency look like when students worry about AI detectors or institutional suspicion?
- Which self-check questions help decide whether AI use is ethical?
Review Questions
- How does the transcript distinguish between AI support and AI replacement in both writing and data analysis?
- What specific documentation practices are suggested to create an audit trail for AI-assisted work?
- Why does the transcript argue that transparency can reduce risk even when AI detectors sometimes misclassify human writing?
Key Points
1. Ethical AI use keeps the student responsible for original intellectual contribution; AI should support, not replace, the student’s work.
2. Outsourcing dissertation writing or returning AI-generated analysis results without active involvement is treated as unethical.
3. AI can be used in literature reviews to brainstorm gaps, summarize studies, and sort articles by themes, including via tools like SciSpace.
4. In writing, AI can help with structure ideas, feedback, and rephrasing, but students should avoid letting AI draft the core content.
5. Data analysis requires an audit trail; students should use AI incrementally (e.g., initial codes, theme refinement) while monitoring and documenting decisions.
6. Transparency is the recommended safeguard: disclose tools and describe AI’s role, and save prompts/screenshots to show accountability.
7. Students should judge AI use by whether it supports learning and skill development rather than merely producing a submission quickly.