
What do universities say about ethical AI use by students?


Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Ethical AI use keeps the student responsible for original intellectual contribution; AI should support, not replace, the student’s work.

Briefing

Universities increasingly treat AI as unavoidable in student work—but they draw a hard line between using AI to support learning and using it to produce “original content” that should come from the student. The practical takeaway is straightforward: ethical AI use means the student remains the intellectual driver, while AI functions as a research assistant, writing coach, or analysis accelerator rather than an invisible replacement.

A major theme across the guidelines Dr Kriukow reviewed is the distinction between AI-generated output and AI-supported work. Relying on AI to write an entire chapter or dissertation is framed as unethical because it substitutes for the student’s intellectual contribution. The same logic applies to data analysis: simply feeding data into ChatGPT and returning results is treated as producing content without the researcher’s involvement. By contrast, using AI to support analysis—when the student directs the process with specific prompts, checks outputs, and stays accountable—can be acceptable. The emphasis is on control, not convenience.

The transcript also maps where AI use tends to be most defensible during the dissertation lifecycle. In the literature review stage, AI can help with brainstorming research gaps, summarizing existing studies, and sorting articles by themes—especially tools like SciSpace that help triage large volumes of papers so students can decide what to read in depth. In writing, AI should not be used as a ghostwriter. Instead, it can generate structure ideas, suggest bullet points, provide feedback on clarity and grammar, and help with rephrasing or transitions—so long as the student reviews and keeps the work aligned with their own argument.

Data analysis is treated as the most sensitive area. The core concern is the lack of an audit trail—evidence of how decisions were made and how the student arrived at results. Full automation is discouraged, but cautious, incremental use is presented as workable: generating initial codes, refining themes, organizing codes into categories, and stress-testing logic in ways similar to intercoder reliability. Even then, the student must be able to demonstrate participation in the process and provide documentation.

Finally, the transcript argues that students should not hide AI use. Some students avoid disclosure out of fear that any mention will trigger suspicion or penalties, including cases where AI detectors falsely flag human-written work. The countermeasure drawn from the university guidance is transparency: reference the tools used, link to them when appropriate, describe the role AI played, and document the workflow. That includes saving prompts and screenshots, showing how AI outputs were incorporated or adapted, and consulting assignment-specific rules from supervisors or lecturers.

The overall message is less about banning AI than about protecting academic integrity and skill development. Students are urged to ask a simple question: does AI use harm learning and professional growth, or does it support the student’s own thinking? Universities highlighted in the review include Newcastle University, Oxford, Stanford University, MacE University, and London School of Hygiene and Tropical Medicine, all of which—despite different emphases—converge on the same principle: AI should assist, not replace, the student’s original work and accountability.

Cornell Notes

Universities increasingly accept that AI will be used in dissertations, but they require students to keep responsibility for original work. Ethical use means AI supports learning and research tasks—like brainstorming, summarizing literature, improving writing clarity, or helping generate and refine coding—without replacing the student’s intellectual contribution. Full outsourcing (e.g., having AI write chapters or return analysis results) is treated as unethical because it removes the student’s role. The strongest recurring requirement is transparency and documentation: students should be able to provide an audit trail, including prompts/screenshots and a clear description of how AI affected outputs. This matters because it protects academic integrity and reduces the risk of accusations based on misunderstandings or detector errors.

What counts as “original content,” and why is that line so important in dissertation ethics?

"Original content" refers to the work that should come from the student's own intellect. Content produced without the student's genuine involvement, such as an entire chapter or dissertation written by AI, is treated as unethical because it replaces the student's intellectual contribution. The transcript extends the same idea to analysis: feeding data into AI and simply returning its results is also treated as producing content without the researcher's active role. Ethical use keeps the student accountable for the reasoning and decisions behind the final work.

How can AI be used during the literature review without crossing ethical boundaries?

AI can support literature review by helping students explore and organize what exists: brainstorming research gaps, identifying trends, summarizing current research, and sorting articles by themes. Tools like SciSpace are cited as examples that can summarize and help triage papers so students can decide which articles to read themselves. The ethical boundary is that AI helps with exploration and filtering, not that it replaces the student’s engagement with the literature.

What’s the recommended approach to AI in writing and proofreading?

AI should not be used as a ghostwriter. Instead, it can generate ideas for structure, suggest bullet points, provide feedback on clarity, grammar, and style, and help with rephrasing and transitions. The transcript warns that over-editing can trigger AI-detector patterns, citing an example where proofreading support led to a 100% AI-generated flag on text the author had written themselves. The practical rule is to use AI for support while keeping the student's voice, structure, and authorship intact.

Why is data analysis treated differently from writing, and what does “audit trail” mean here?

Data analysis is treated as higher risk because it requires demonstrable validity and traceable decision-making. The transcript highlights the lack of an audit trail as the main problem with fully automated AI analysis. Ethical use allows limited support—like generating initial codes, refining themes, organizing codes into categories, and stress-testing logic—but the student must monitor the process and provide evidence of how they arrived at results. Documentation (prompts, screenshots, workflow) is presented as essential.
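As one way to maintain that kind of audit trail, the sketch below shows a minimal prompt log a student could keep while using AI for coding support. It is an illustration, not something prescribed in the video: the helper function, file name, and record fields are assumptions, and the AI interaction itself is left to whatever tool the student actually uses.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file name; any location agreed with a supervisor would do.
AUDIT_LOG = Path("ai_audit_log.jsonl")

def log_ai_step(tool: str, purpose: str, prompt: str, response: str, decision: str) -> None:
    """Append one AI interaction to a JSONL audit log.

    Each record captures what was asked, what the tool returned, and what the
    student decided to do with it, which is the evidence universities ask for.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,            # e.g. "generate initial codes"
        "prompt": prompt,
        "response": response,
        "student_decision": decision,  # how the output was kept, adapted, or rejected
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: recording one step of AI-supported initial coding.
log_ai_step(
    tool="ChatGPT",
    purpose="generate initial codes for interview 03",
    prompt="Suggest candidate codes for this excerpt about remote-work isolation...",
    response="Possible codes: 'loss of informal contact', 'blurred work-home boundary', ...",
    decision="Kept 'blurred work-home boundary'; merged the other suggestion into an existing code.",
)
```

Screenshots of the same interactions can sit alongside the log; the point is simply that every AI-influenced decision remains traceable.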

What does transparency look like when students worry about AI detectors or institutional suspicion?

Transparency is recommended as the best practice: students should disclose AI use in a way that shows seriousness and accountability. That includes referencing the tools used, providing links, explaining the role AI played, and describing impacts on the final output. The transcript also suggests documenting the workflow (screenshots, prompts) to demonstrate understanding. This approach is meant to prevent later accusations and to counter the risk of false positives from AI detectors.
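Building on the hypothetical log above, a short script like the following could turn the saved records into a plain-language disclosure listing each tool and the role it played. The format is an assumption; the actual wording should follow whatever the supervisor or assignment brief requires.

```python
import json
from collections import defaultdict
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # same hypothetical log file as above

def disclosure_statement() -> str:
    """Summarize the audit log as a disclosure paragraph: each tool and the purposes it served."""
    purposes_by_tool: dict[str, set[str]] = defaultdict(set)
    with AUDIT_LOG.open(encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            purposes_by_tool[record["tool"]].add(record["purpose"])

    lines = ["AI use in this work:"]
    for tool, purposes in sorted(purposes_by_tool.items()):
        lines.append(f"- {tool}: used for {', '.join(sorted(purposes))}. "
                     "All outputs were reviewed and adapted by the author.")
    return "\n".join(lines)

print(disclosure_statement())
```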

Which self-check questions help decide whether AI use is ethical?

The transcript emphasizes a learning-focused test: ask whether AI use harms actual learning and skill development. It criticizes the idea of outsourcing an entire dissertation quickly (e.g., bragging about writing it in 20–30 minutes) because it undermines the purpose of paying for education—professional growth and competence. Ethical use is framed as support for learning, not a shortcut to finished text.

Review Questions

  1. How does the transcript distinguish between AI support and AI replacement in both writing and data analysis?
  2. What specific documentation practices are suggested to create an audit trail for AI-assisted work?
  3. Why does the transcript argue that transparency can reduce risk even when AI detectors sometimes misclassify human writing?

Key Points

  1. Ethical AI use keeps the student responsible for original intellectual contribution; AI should support, not replace, the student’s work.

  2. Outsourcing dissertation writing or returning AI-generated analysis results without active involvement is treated as unethical.

  3. AI can be used in literature reviews to brainstorm gaps, summarize studies, and sort articles by themes, including via tools like SciSpace.

  4. In writing, AI can help with structure ideas, feedback, and rephrasing, but students should avoid letting AI draft the core content.

  5. Data analysis requires an audit trail; students should use AI incrementally (e.g., initial codes, theme refinement) while monitoring and documenting decisions.

  6. Transparency is the recommended safeguard: disclose tools and describe AI’s role, and save prompts/screenshots to show accountability.

  7. Students should judge AI use by whether it supports learning and skill development rather than merely producing a submission quickly.

Highlights

Universities converge on a single ethical line: AI may assist learning and workflow, but it must not generate the student’s original dissertation content.
The biggest technical risk in AI-assisted research is the missing audit trail—students need evidence of how results were reached.
Transparency is framed as protective: disclosing tool use and documenting prompts/screenshots can reduce the chance of later accusations.
AI can support coding and theme refinement, but students must remain the decision-maker and provide traceable documentation.

Mentioned