Is it a good idea to rewrite using ChatGPT? An experiment with Turnitin's new AI detection tool
Based on Research and Analysis's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Using ChatGPT to rewrite an assignment in a more "academic" style can trigger AI-detection flags in Turnitin, even when the original text was written by the student. In a controlled test, the creator started with a self-written article that showed 0% AI detection in Turnitin (with an overall similarity report around 12%). After pasting that same text into ChatGPT with instructions to improve grammar and structure and rewrite it in a more academic way, the revised text was resubmitted to Turnitin and returned an AI-detection score of about 5%, with portions of the rewrite highlighted as "AI written."
A second test separated “rewriting” from “editing.” This time, the original text was not fully rewritten into a new academic version. Instead, ChatGPT was used only to improve grammar, structure, and punctuation—essentially acting as a polishing tool rather than a rewriter. When that edited version was submitted to Turnitin, the AI detection remained at 0%. The contrast between the two runs suggests that the detection risk rises when ChatGPT produces a substantially rephrased, stylistically transformed output, even if the underlying content is the student’s own.
The practical takeaway is less about banning AI tools and more about controlling how they’re used. Light-touch editing—fixing grammar, punctuation, and sentence structure—appears to be safer in this experiment than asking for a full rewrite into a different academic voice. The results also imply that Turnitin’s AI-detection system may respond to patterns associated with generated text, which become more likely when the model is instructed to rewrite extensively.
Overall, the experiment frames a clear decision rule for students: if ChatGPT is used to rewrite the text into a new form, AI detection may increase; if it’s used only for mechanical improvements (grammar, punctuation, and minor structure), AI detection can stay at zero. The message is cautionary—students should be deliberate about prompts and avoid broad “rewrite” instructions when submitting work that must pass AI-detection checks.
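The decision rule above can be sketched as a small illustrative function. This is only a summary of the experiment's two data points, not anything Turnitin exposes; the function name, the scope labels, and the risk categories are all hypothetical:

```python
# Illustrative sketch of the decision rule from the experiment.
# "full_rewrite" and "light_edit" are hypothetical labels for the two
# prompt scopes tested; the risk levels reflect only the two observed runs.

def expected_detection_risk(prompt_scope: str) -> str:
    """Map how ChatGPT was used to the Turnitin outcome seen in the test."""
    if prompt_scope == "full_rewrite":   # "rewrite in a more academic style"
        return "elevated"                # ~5% AI detection in the experiment
    if prompt_scope == "light_edit":     # grammar, punctuation, minor structure
        return "none_observed"           # 0% AI detection in the experiment
    return "unknown"                     # not covered by this experiment

print(expected_detection_risk("full_rewrite"))  # elevated
print(expected_detection_risk("light_edit"))    # none_observed
```

The point of the sketch is that the variable is the prompt's scope, not whether ChatGPT was involved at all.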
Cornell Notes
The test compared two ways of using ChatGPT on a student-written assignment and then checking the result with Turnitin’s AI detection. The original text showed 0% AI detection. After ChatGPT rewrote the text into a more academic style, Turnitin flagged parts of the submission as AI written and the AI detection rose to about 5%. A separate run used ChatGPT only for grammar, structure, and punctuation edits without rewriting the text’s overall form, and Turnitin again reported 0% AI detection. The key implication is that extensive rewriting prompts increase AI-detection risk, while light editing appears safer.
What happened when the original student-written text was rewritten into a more academic style using ChatGPT?
How did the results differ when ChatGPT was used only for grammar, structure, and punctuation edits?
What does the comparison suggest about what triggers AI detection in Turnitin?
Why does the similarity percentage matter less than the AI detection score in this experiment?
What prompt behavior should a student avoid if they want to minimize AI detection risk based on these results?
Review Questions
- In the experiment, what specific change in ChatGPT instructions led to AI detection rising from 0% to about 5%?
- What editing scope kept AI detection at 0% in the second submission, and how was it different from the first approach?
- How should a student decide between “rewriting” and “editing” when using ChatGPT for an assignment that must pass AI detection?
Key Points
1. A self-written assignment initially showed 0% AI detection in Turnitin before any ChatGPT assistance.
2. Rewriting the student's text into a more academic style using ChatGPT increased Turnitin's AI detection to about 5%.
3. Turnitin highlighted portions of the rewritten text as "AI written," indicating the detection system flagged generated-like patterns.
4. Using ChatGPT only to improve grammar, structure, and punctuation—without a full rewrite—kept Turnitin AI detection at 0%.
5. The experiment suggests AI detection risk rises with extensive rephrasing and stylistic transformation rather than minor editing.
6. Students should use ChatGPT as a polishing tool (grammar/punctuation) rather than a full rewrite engine when AI detection is a concern.