
Learn How to Spot AI Writing in ANY TEXT!

6 min read

Based on qualitative researcher Dr Kriukow's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Treat “stock language” as a first-pass signal: generic, template-like phrasing that repeats across unrelated topics is a common AI tell.

Briefing

AI-written text can be spotted quickly by looking for a cluster of repeatable writing “tells” that show up across social media, ads, and even academic-style prose. The core idea is that many AI systems generate language by recombining pre-existing patterns—so the result often reads like generic, modular content with built-in drama rather than a naturally flowing argument or narrative.

A broad umbrella for those tells is “stock language”: pre-fabricated phrasing that sounds familiar because it’s used over and over. Instead of developing ideas through careful transitions, AI-fueled writing leans on dramatic, clickbait-like templates such as “He didn’t just do X, he did Y,” “Nobody expected what happened next,” and “It wasn’t just X, it was Y.” It also gravitates toward high-stakes metaphors and scale-setting lines—examples mentioned include “the fabric of space,” plus exaggerated imagery like “even in death” and “his echo is airborne.” These choices create a sense of artificial intensity: the language signals emotion and importance, but the structure often doesn’t earn it.

The most distinctive structural pattern is “modular sentences.” These are sentences and even paragraph blocks that can be rearranged like puzzle pieces without changing the overall meaning. The writing lacks narrative flow: sentences rarely depend on earlier sentences, and connectors such as “which,” “while,” or “because” are used sparingly. The effect is disjointed—something feels off even when the text is technically readable. The transcript gives simple examples where lines like “It helps power a better future,” “No emissions, no fuel, just light,” and “A train in Germany now runs entirely on solar energy” can be shuffled while preserving the message.

Another major tell is the overuse of “narrative hooks,” a rhythm that goes from a dramatic setup to a twist and then a final reflection. The pattern is described as hook → twist → stakes → reflection, often triggered by quote-like openings or reaction phrases. Closely related is “suspense building,” using familiar engagement phrases such as “No one expected what happened next” and “The whole internet was watching. That’s when everything changed.”

AI writing also leans on “capitalizing dramatic concepts,” treating capitalization as a volume knob for importance—terms like “the witness,” “corporate memory,” or other big ideas are highlighted in ways that feel more promotional than analytical.

Finally, the transcript highlights two high-confidence structures: “minimalist triplet phrasing” and “reversal structures.” Triplets appear as rhythmic lists of three short elements, sometimes separated by full stops, and they show up even in contexts where a human would vary the cadence. Reversal structures flip from what something isn’t to what it is—often without the usual human caution or nuance—using constructions like “They don’t just carry blood; they become part of you.” These devices frequently overlap in the same post, which is why the “once you see it, you can’t unsee it” effect is emphasized.

The practical takeaway is that spotting AI writing isn’t about one single marker (like em dashes). It’s about recognizing a repeatable toolkit: generic stock phrasing, modular disconnection, clickbait-style hooks and suspense, dramatic capitalization, triplet rhythm, and reversal flips—patterns that can appear in everything from property ads to scientific-sounding claims and social comments.
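This "toolkit" framing can be approximated mechanically. As a rough illustration (the phrase list, regexes, and function name below are assumptions for demonstration, not from the transcript), a small scanner might count how many of the described tells fire on a piece of text:

```python
import re

# Hypothetical inventory of "stock language" engagement phrases drawn from
# the examples above; a real detector would need a much larger, validated list.
STOCK_PHRASES = [
    "nobody expected what happened next",
    "no one expected what happened next",
    "that's when everything changed",
    "the whole internet was watching",
]

# "It wasn't just X, it was Y" / "they don't just X" reversal templates.
REVERSAL_RE = re.compile(r"\b(wasn't|isn't|don't|doesn't|didn't)\s+just\b",
                         re.IGNORECASE)

# Three short fragments separated by commas, e.g. "No emissions, no fuel,
# just light." A crude proxy for minimalist triplet phrasing.
TRIPLET_RE = re.compile(r"\b\w+[^,.;]{0,25}?,\s*\w+[^,.;]{0,25}?,\s*(just\s+)?\w+")

def tell_count(text: str) -> int:
    """Count how many of the article's 'tells' fire on a piece of text."""
    lowered = text.lower()
    score = sum(phrase in lowered for phrase in STOCK_PHRASES)
    score += len(REVERSAL_RE.findall(text))
    score += len(TRIPLET_RE.findall(text))
    return score
```

A higher count suggests more of the cluster is present; no single match proves anything on its own, which mirrors the article's point that clusters, not isolated features, are what matter.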

Cornell Notes

The transcript lays out a set of repeatable writing patterns that often signal AI-generated text. A key umbrella term is “stock language,” meaning generic, pre-fabricated phrasing that shows up across topics and platforms. It then narrows to structural tells: “modular sentences” that can be rearranged without changing meaning, “narrative hooks” that follow a hook→twist→stakes→reflection rhythm, and “suspense building” using familiar clickbait phrases. Two high-confidence markers are “minimalist triplet phrasing” (rhythmic lists of three) and “reversal structures” (flipping from what something isn’t to what it is). Recognizing these clusters matters because they can appear even in academic-style writing, not just entertainment posts.

What does “stock language” mean, and why does it matter for detecting AI writing?

“Stock language” refers to pre-fabricated, generic phrasing that gets reused across many contexts. The transcript links it to how language models train on common templates, so the output overuses familiar constructions. Examples include dramatic contrast templates like “He didn’t just do X, he did Y,” “Nobody expected what happened next,” and “It wasn’t just X, it was Y.” It also points to recurring metaphorical or scale-setting lines (e.g., “the fabric of space,” and other exaggerated imagery) that can feel emotionally intense but structurally unearned.

How do “modular sentences” create the feeling that something is “off”?

Modular sentences are written in blocks that can be moved around like puzzle pieces without changing the overall meaning. The transcript emphasizes that these sentences often don’t rely on previous sentences to build an argument, so there’s little narrative flow. Connectors are used sparingly (few “which,” “while,” “because” links), making the text read like a list of concepts rather than a developing line of reasoning. The provided solar-energy example shows how lines such as “It helps power a better future,” “No emissions, no fuel, just light,” and “A train in Germany now runs entirely on solar energy” can be reordered while preserving the message.
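One hedged sketch of how this modularity tell could be measured (the metric and names are illustrative assumptions, not from the transcript): compute the fraction of sentences in a paragraph that contain one of the connectors the transcript names. A low fraction suggests the sentences stand alone and could be reordered.

```python
import re

def connector_density(paragraph: str) -> float:
    """Fraction of sentences containing a linking connector.

    Low density suggests 'modular' sentences that do not depend on
    earlier sentences and could be shuffled without changing meaning.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", paragraph) if s.strip()]
    if not sentences:
        return 0.0
    linked = sum(
        any(c in re.findall(r"[a-z']+", s.lower())
            for c in ("which", "while", "because"))
        for s in sentences
    )
    return linked / len(sentences)

modular = ("A train in Germany now runs entirely on solar energy. "
           "No emissions, no fuel, just light. "
           "It helps power a better future.")
print(connector_density(modular))  # 0.0: no sentence leans on the one before it
```

The transcript's solar-energy example scores zero because none of its sentences link back to a previous one; prose that develops an argument through "which"/"while"/"because" clauses would score higher.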

What is the “narrative hook” pattern, and what phrases typically signal it?

A narrative hook is described as a structured sequence: a dramatic setup, an unexpected twist, stakes that heighten attention, and a final reflection. It often begins with a quote or reaction that feels sensational, then pivots to a shocking turn. The transcript cites common engagement phrasing such as “No one expected what happened next,” “You want to know what happened next, but then she did something no one saw coming,” and “The whole internet was watching. That’s when everything changed.”

Why does capitalization show up as a detection clue?

The transcript argues that AI writing often uses capitalization to mark drama and importance, treating uppercase as emphasis. Instead of subtle, context-driven emphasis, it highlights “dramatic concepts” in ways that feel promotional. Examples mentioned include terms like “the witness,” “corporate memory,” and other big-idea phrases used in the screenshots.

What are “minimalist triplet phrasing” and “reversal structures,” and why are they high-confidence markers?

Minimalist triplet phrasing is a rhythmic listing of three short elements, sometimes separated by full stops, used to intensify clarity or emotional impact. The transcript notes that AI often lists in threes even in academic-like contexts where humans might vary cadence. Reversal structures flip from what something isn’t to what it is—often without the nuance humans typically use—using constructions like “They don’t just carry blood; they become part of you.” The transcript claims that when these appear, confidence in AI authorship rises sharply (around 90%).
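A minimal sketch of how these two high-confidence markers might be matched mechanically (the regexes and the three-word cutoff are illustrative assumptions, not anything the transcript specifies):

```python
import re

# Reversal template: "They don't just X; they Y" (a contrast flip across
# a semicolon, as in the carry-blood example).
REVERSAL = re.compile(
    r"\b\w+ (?:don't|doesn't|didn't|isn't|wasn't) just [^;]+;",
    re.IGNORECASE,
)

# Staccato triplet: three consecutive "sentences" of at most three words
# each, e.g. "No emissions. No fuel. Just light."
TRIPLET = re.compile(r"(?:\b\w+(?: \w+){0,2}[.!]\s*){3}")

def high_confidence_markers(text: str) -> bool:
    """Return True if either marker pattern appears in the text."""
    return bool(REVERSAL.search(text) or TRIPLET.search(text))
```

Either pattern firing would, per the transcript's claim, sharply raise confidence in AI authorship, though both can of course occur in deliberate human rhetoric as well.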

How do these patterns tend to appear together in real posts?

The transcript stresses that AI-written posts rarely rely on just one tell. Instead, multiple structures overlap in the same text—triplets plus reversals, hooks plus suspense, and modular paragraph blocks that can be shuffled. This stacking effect is why the detection becomes easier once someone recognizes the cluster rather than hunting for a single punctuation feature.

Review Questions

  1. Which specific features distinguish “modular sentences” from normal paragraph flow, and how would you test a paragraph for modularity?
  2. Give one example of a “narrative hook” sequence (setup → twist → stakes → reflection) and identify the kinds of phrases that trigger it.
  3. Why might “minimalist triplet phrasing” and “reversal structures” be more reliable than single stylistic markers like punctuation?

Key Points

  1. Treat “stock language” as a first-pass signal: generic, template-like phrasing that repeats across unrelated topics is a common AI tell.

  2. Look for “modular sentences” where sentence or paragraph blocks can be rearranged without changing meaning, indicating weak narrative flow.

  3. Check for clickbait-style “narrative hooks” that follow a hook→twist→stakes→reflection rhythm rather than building ideas through transitions.

  4. Watch for “suspense building” and engagement phrases that heighten drama using familiar, high-attention templates.

  5. Use capitalization of “dramatic concepts” as a supporting clue, especially when emphasis feels promotional rather than analytical.

  6. Prioritize two high-confidence markers: “minimalist triplet phrasing” (rhythmic lists of three) and “reversal structures” (flipping from what something isn’t to what it is).

  7. Expect multiple tells to stack in the same text; detection improves when you identify clusters, not one isolated feature.

Highlights

  • AI detection in the transcript hinges on clusters of repeatable structures—especially stock language, modular sentence blocks, and dramatic template rhythms.
  • “Modular sentences” are described as rearrangeable puzzle pieces: sentences rarely depend on earlier ones, so arguments feel disjointed.
  • Two standout markers are “minimalist triplet phrasing” and “reversal structures,” which frequently appear together and raise confidence sharply.
  • Capitalization is treated as an emphasis mechanism AI often overuses to signal drama rather than nuance.

Topics

  • AI Writing Detection
  • Stock Language
  • Modular Sentences
  • Narrative Hooks
  • Reversal Structures
