25 Tips to Humanise AI-written text and avoid AI detection

5 min read

Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Aim for coherence, but edit to reduce statistical uniformity—vary patterns so the next sentence isn’t easy to predict.

Briefing

AI detectors tend to flag writing that is statistically uniform—texts with predictable sentence patterns, repeated phrasing, and consistent grammar choices. The core fix is manual editing aimed at making the text less predictable: vary structure, vary wording, and remove the “overly polished” academic habits that AI commonly produces. The goal isn’t to make the writing chaotic or incorrect; it’s to break the statistical regularities that detectors look for, including how well the next sentence can be predicted from what came before.

A major emphasis falls on sentence and paragraph structure. Mixing sentence lengths is the first lever: combine shorter sentences into longer, clause-heavy ones, or split long sentences into shorter “punchy” ones—so the rhythm doesn’t repeat. Sentence starters also matter; beginning every sentence the same way is a common tell. Repetition is treated as a broader problem too, including stock research phrasing such as “research shows,” “the theory aligns,” or “something aligns with a theory,” which appear frequently in AI-generated academic prose.

Clause construction and overall layout are framed as another detection risk. AI often sticks to a consistent pattern of simple, compound, and complex clauses, plus a consistent voice (active vs. passive) and tense. The guidance is to alternate these choices while staying coherent—using a mix of active and passive where appropriate, varying tenses that fit the context, and changing paragraph length so not every paragraph is “perfect” in the same way. Visual inspection is also highlighted: within seconds, uniform paragraph and sentence structures can signal AI writing.

The list then targets the language habits that make AI output feel inflated or templated. Avoid “triplets” (lists in threes), reduce redundancy, and cut repeated paraphrases where one clear statement will do. AI is described as overexplaining—rephrasing the same idea across adjacent sentences and adding conclusions that restate what was already said. The advice is to streamline: remove unnecessary concluding summaries at the end of paragraphs, sections, and documents when they merely repeat earlier points.

Word choice and connectors are treated as practical levers. AI tends to overuse sophisticated, complex phrasing and certain connectors like “and,” “in addition,” and “as well,” so the editing process should swap in varied linking words and restructure transitions without making them awkward. The guidance also warns against inflated verbs and AI-favorite terms that feel too “perfect” or overly formal; when possible, replace them with more direct, everyday academic wording. Clichés, phrasal verbs, and unnecessary metaphors are discouraged because they can look like templated stylistic flourishes rather than earned academic language.

Finally, the editing strategy includes reordering logic and sentence focus—using alternative constructions that preserve meaning but change the pattern (e.g., switching from “Many scholars believe…” to “According to some scholars…” or flipping who/what leads the sentence). Light, controlled subjectivity is suggested as an additional human signal in academic contexts—such as noting that an issue is “alarming”—while keeping the overall tone appropriate. The transcript closes by stressing that not all detectors behave identically, so the aim is not to “game” grammar but to reduce repeated terminology and over-delivery through simplification and concision.

Cornell Notes

AI detection is portrayed as a pattern-recognition problem: detectors flag writing that is too uniform and statistically predictable. The editing approach focuses on breaking those regularities by varying sentence length, sentence starters, paragraph length, clause structure, and voice/tense choices while keeping meaning intact. It also targets common AI habits—redundant paraphrasing, overexplaining, repetitive conclusions, triplets, inflated verbs, and overused connectors like “and” or “in addition.” Reordering sentence logic (changing focus or structure without changing meaning) and swapping terminology for more concise, direct phrasing are presented as practical ways to make text harder to predict. The overall aim is coherent writing that doesn’t follow the same template from sentence to sentence.

What does “less predictable” mean in the context of AI detection, and how does editing address it?

The transcript frames detection as statistical: if a system can predict what comes next based on what came before, the text is likely AI-written. Editing therefore targets uniform patterns—repeated sentence rhythms, consistent grammar choices, and templated phrasing—so the next sentence isn’t as easy to forecast. Practically, that means varying sentence length, paragraph length, clause complexity, and connectors, while removing redundancy and repeated structures.
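As an illustration (not from the transcript), one crude proxy for the “statistical uniformity” being described is the spread of sentence lengths: a text whose sentences are all roughly the same length has a flatter rhythm than one that mixes short and long sentences. The sketch below uses this assumption — sentence-length standard deviation as a stand-in for rhythm variation — and is only a rough heuristic, not how any real detector works.

```python
import statistics

def sentence_lengths(text):
    """Split text into rough sentences (on ., !, ?) and count words in each."""
    for mark in ("!", "?"):
        text = text.replace(mark, ".")
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [len(s.split()) for s in sentences]

def rhythm_variation(text):
    """Standard deviation of sentence lengths; lower = more uniform rhythm."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The model works well. The data looks clean. The plan seems fine."
varied = ("It works. After several rounds of careful editing and "
          "restructuring, the draft finally read the way we wanted. Good.")
# The uniform text scores lower than the varied one.
```

On this toy measure, editing a paragraph so that a long, clause-heavy sentence sits next to a short punchy one raises the score — which is the structural effect the transcript’s “mix sentence lengths” advice is aiming at.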

Which structural changes are recommended to avoid a uniform “AI rhythm”?

Key structural levers include mixing sentence lengths (combining short sentences into longer, clause-heavy ones; splitting long sentences into shorter punchy ones), changing sentence starters so they don’t repeat, and varying paragraph length rather than making every paragraph “perfect.” The guidance also calls for varying clause types and sentence logic so consecutive sentences don’t keep using the same pattern.

Why are repetition and stock academic phrasing singled out as detection risks?

Repetition is described as a major signal: AI often reuses favorite structures, words, and connectors, and detectors can latch onto consistent term and phrase recurrence. The transcript gives examples of common AI-like research phrasing such as “research shows,” “the theory aligns,” and “something aligns with a theory.” The fix is to avoid repeated templates by using more direct wording and changing how ideas are expressed.

How should redundancy, paraphrasing, and conclusions be handled?

The transcript emphasizes that AI tends to overexplain by rephrasing the same point within one or two sentences and by adding conclusions that restate earlier content. The recommended edit is to be concise: remove unnecessary paraphrases, cut repeated explanations, and avoid concluding summaries at the end of paragraphs, sections, or documents when they merely repeat what was already said.

What language-level tactics are suggested to reduce the “overly academic” feel?

Suggested tactics include favoring active voice (while still allowing variation), reducing inflated verbs and AI-favorite terms, and swapping connectors (e.g., not relying heavily on “and,” “in addition,” or “as well”). It also recommends occasionally using simpler expressions and avoiding stylistic clutter like clichés, phrasal verbs, and unnecessary metaphors, which can look templated rather than purposeful.

How does reordering logic help while preserving meaning?

The transcript proposes changing sentence structure and focus without changing the underlying meaning. Examples include switching from “Many scholars believe…” to “According to some scholars…,” or starting with what happened versus who did it or what led to an action. The goal is to prevent a consistent pattern across adjacent sentences—AI is described as often sticking to one preferred structure repeatedly.

Review Questions

  1. Which specific sentence-level edits (length, starters, connectors, voice/tense) would you apply first to make a paragraph less statistically uniform?
  2. How would you identify and remove AI-like redundancy, especially repeated paraphrases and “restating” conclusions at the end of sections?
  3. Give two examples of how you could reorder logic or sentence focus while keeping the meaning unchanged.

Key Points

  1. Aim for coherence, but edit to reduce statistical uniformity—vary patterns so the next sentence isn’t easy to predict.
  2. Mix sentence lengths and sentence starters; avoid repeating the same rhythm and openings across sentences.
  3. Change paragraph length and vary clause structure, voice, and tense where context allows to prevent a consistent template feel.
  4. Cut redundancy: remove unnecessary paraphrasing and overexplaining, and avoid conclusions that simply restate earlier points.
  5. Reduce repetition of stock academic phrases and consistently used terminology by swapping in more direct, concise wording.
  6. Limit AI-like list patterns such as triplets and avoid templated phrasing that appears across many sentences.
  7. Reorder sentence logic and focus (e.g., “according to…” vs. “many scholars believe…”) to preserve meaning while changing structure.

Highlights

AI detection is framed as prediction: if a system can forecast what comes next from the existing pattern, it’s more likely to flag the text.
Uniform paragraph and sentence structure can be spotted quickly—even by visual inspection—so varying paragraph length and sentence rhythm is a primary tactic.
Overexplaining shows up as repeated paraphrases and “conclusion” sentences that restate what was already said; removing that redundancy is a major fix.
Switching sentence focus and logic (without changing meaning) helps break the consistent structures AI tends to reuse.
Conciseness is treated as a core defense: AI is described as over-delivering with inflated verbs, repeated connectors, and overly sophisticated phrasing.