
Is it a good idea to rewrite using ChatGPT? Experiment with Turnitin's new AI detection tool

Research and Analysis · 4 min read

Based on Research and Analysis's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

A self-written assignment showed 0% AI detection in Turnitin. Asking ChatGPT for a full academic-style rewrite raised detection to about 5%, while grammar-and-punctuation-only edits kept it at 0%.

Briefing

Using ChatGPT to rewrite an assignment in a more “academic” style can trigger AI-detection flags in Turnitin, even when the original text was written by the student. In a controlled test, the creator started with a self-written article that showed 0% AI detection in Turnitin (with an overall similarity report around 12%). After pasting that same text into ChatGPT with instructions to improve grammar, structure, and rewrite it in a more academic way, the revised submission was resubmitted to Turnitin and returned an AI detection score of about 5%, with portions of the revised text highlighted as “AI written.”

A second test separated “rewriting” from “editing.” This time, the original text was not fully rewritten into a new academic version. Instead, ChatGPT was used only to improve grammar, structure, and punctuation—essentially acting as a polishing tool rather than a rewriter. When that edited version was submitted to Turnitin, the AI detection remained at 0%. The contrast between the two runs suggests that the detection risk rises when ChatGPT produces a substantially rephrased, stylistically transformed output, even if the underlying content is the student’s own.

The practical takeaway is less about banning AI tools and more about controlling how they’re used. Light-touch editing—fixing grammar, punctuation, and sentence structure—appears to be safer in this experiment than asking for a full rewrite into a different academic voice. The results also imply that Turnitin’s AI-detection system may respond to patterns associated with generated text, which become more likely when the model is instructed to rewrite extensively.

Overall, the experiment frames a clear decision rule for students: if ChatGPT is used to rewrite the text into a new form, AI detection may increase; if it’s used only for mechanical improvements (grammar, punctuation, and minor structure), AI detection can stay at zero. The message is cautionary—students should be deliberate about prompts and avoid broad “rewrite” instructions when submitting work that must pass AI-detection checks.

Cornell Notes

The test compared two ways of using ChatGPT on a student-written assignment and then checking the result with Turnitin’s AI detection. The original text showed 0% AI detection. After ChatGPT rewrote the text into a more academic style, Turnitin flagged parts of the submission as AI written and the AI detection rose to about 5%. A separate run used ChatGPT only for grammar, structure, and punctuation edits without rewriting the text’s overall form, and Turnitin again reported 0% AI detection. The key implication is that extensive rewriting prompts increase AI-detection risk, while light editing appears safer.

What happened when the original student-written text was rewritten into a more academic style using ChatGPT?

The original submission showed 0% AI detection in Turnitin. After the text was pasted into ChatGPT with instructions to improve grammar and structure and rewrite it in a more academic way, the revised version was resubmitted. Turnitin then reported AI detection of about 5%, and highlighted sections of the revised text as “AI written.”

How did the results differ when ChatGPT was used only for grammar, structure, and punctuation edits?

In the second experiment, ChatGPT was used to improve grammar, structure, and punctuation without performing a full rewrite into a new academic voice. After replacing only the introduction with this edited version and submitting again, Turnitin’s AI detection remained at 0%.

What does the comparison suggest about what triggers AI detection in Turnitin?

The contrast between the two runs suggests that AI detection increases when ChatGPT output involves substantial rephrasing and stylistic transformation (a “rewrite” into an academic style). When the model is used for light-touch editing—fixing grammar, punctuation, and minor structure—the output may retain enough of the original phrasing patterns to avoid AI flags in this test.

Why does the similarity percentage matter less than the AI detection score in this experiment?

The first submission had a similarity report around 12% but 0% AI detection. The later rewritten submission showed AI detection around 5% even though the similarity report wasn’t the focus. The experiment’s decision point is the AI detection percentage and highlighted “AI written” text, not similarity alone.

What prompt behavior should a student avoid if they want to minimize AI detection risk based on these results?

Avoid prompts that ask for a full rewrite “in a more Academic Way,” because that instruction produced a version flagged by Turnitin as AI written. Instead, use prompts that request targeted editing—grammar, punctuation, and structure—without asking for a wholesale rewrite.

Review Questions

  1. In the experiment, what specific change in ChatGPT instructions led to AI detection rising from 0% to about 5%?
  2. What editing scope kept AI detection at 0% in the second submission, and how was it different from the first approach?
  3. How should a student decide between “rewriting” and “editing” when using ChatGPT for an assignment that must pass AI detection?

Key Points

  1. A self-written assignment initially showed 0% AI detection in Turnitin before any ChatGPT assistance.

  2. Rewriting the student's text into a more academic style using ChatGPT increased Turnitin's AI detection to about 5%.

  3. Turnitin highlighted portions of the rewritten text as "AI written," indicating the detection system flagged generated-like patterns.

  4. Using ChatGPT only to improve grammar, structure, and punctuation — without a full rewrite — kept Turnitin AI detection at 0%.

  5. The experiment suggests AI detection risk rises with extensive rephrasing and stylistic transformation rather than minor editing.

  6. Students should use ChatGPT as a polishing tool (grammar/punctuation) rather than a full rewrite engine when AI detection is a concern.

Highlights

Turnitin reported 0% AI detection for the original student-written text.
After ChatGPT rewrote the text into a more academic style, Turnitin AI detection rose to about 5%.
When ChatGPT was limited to grammar, structure, and punctuation edits, Turnitin AI detection stayed at 0%.
The biggest difference between the two outcomes was the extent of rewriting versus targeted editing.
