
The Internet Will End Soon…

Pursuit of Wonder
6 min read

Based on Pursuit of Wonder's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

The internet is increasingly dominated by "spam-like media": bot traffic, AI-generated content, and posts engineered to game engagement algorithms. As platforms rank content by metrics like watch time rather than follow relationships, genuine creator-audience connection erodes, and generative AI threatens to accelerate the trend.

Briefing

A growing mix of fake traffic, algorithm-driven feeds, and “spam-like” content is reshaping the internet into something closer to a Monty Python café where every dish is Spam—only now the menu is personalized, automated, and optimized for engagement. The core warning is that today’s online environment increasingly rewards repetition, low-friction consumption, and content engineered to satisfy ranking systems, not human curiosity. That shift matters because it affects what people see, what creators make, and whether online spaces still support genuine connection rather than noise.

The discussion traces the word "spam" from comedy to internet vernacular. A Monty Python sketch from 1970 ("Spam") features Vikings chanting "Spam" until conversation collapses, and the word later becomes shorthand for excessive, irrelevant, unwanted, insincere, or repeated communication. In the 1980s, early online communities used "Spam" and related quotes as a gatekeeping tactic: typing the word repeatedly to push unwanted users' or competitors' messages off-screen. Around the same era, "spam" also came to mean repeated, excessive posting on file- and message-sharing networks.

From there, the argument widens into a systems problem. Internet growth dramatically expanded access and creativity—from tens of thousands of hosts in the early 1980s to hundreds of thousands by the late 1980s, the creation of the World Wide Web in 1989, and a rapid rise in websites and household computer ownership by the mid-1990s. By 2024, smartphone ownership is estimated at about 70% of the global population, with billions of websites and an average daily online time of roughly 6 hours and 35 minutes. With that scale came new distribution channels for harmful and low-quality content.

“Spam-like media” is defined broadly: unwanted emails, but also digital content that exploits platform algorithms, automation, and excessive posting—often aimed at views and engagement rather than value. The claim is that spam isn’t a side issue anymore; it’s becoming the default flavor of the feed.

A key supporting thread is the “dead internet theory,” which suggests most online activity is bot-driven or AI-generated and curated by algorithms, implying the internet “died” around 2016. The transcript tempers the conspiracy angle with data: Imperva reported in 2016 that bots accounted for over half of online activity, with about 30% of visits attributed to “bad bots” and roughly 20% to “good bots.” Even if personal feeds are curated, fake traffic and spam-like content are portrayed as unavoidable at scale.

The most concrete mechanism comes from Jack Conte’s keynote at South by Southwest (“Death of the Follower and the Future of Creativity on the Web”). Conte argues that major platforms shifted from “follow”-based distribution to algorithmic ranking based on engagement metrics like watch time. That change, he says, breaks the creator-audience connection: creators must compete for algorithm favor rather than make what they want for a clear community. Since platforms primarily monetize through advertising, the endgame becomes profitability, and the content that thrives is often repetitive, copycat, or optimized for attention.

Generative AI accelerates the problem by enabling automated messaging, content creation, and even commerce actions with less human involvement. The forecast is not that AI is inherently bad, but that without careful regulation and better technical safeguards, digital diets will become harder to navigate—more disconnected, more synthetic, and more spam-saturated. The closing emphasis is practical: creators and consumers still have agency, and the goal is sustaining genuine human creativity and connection in whatever “new internet” emerges.

Cornell Notes

Spam began as a Monty Python joke and became internet shorthand for excessive, irrelevant, unwanted, insincere, or repeated communication. As the internet scaled—from early host growth and the Web’s creation to today’s smartphone-dominated usage—spam-like media expanded from nuisance to a structural feature of online feeds. The transcript links this to two forces: fake traffic (bots and AI) and algorithmic distribution that ranks content by engagement rather than by human follow relationships. Jack Conte argues that when platforms prioritize watch time and ad revenue, creators increasingly tailor output to ranking systems, producing repetition and “dumbing down.” Generative AI may intensify these trends, making healthier digital diets and better governance more urgent.

How did “Spam” move from comedy to a core internet concept?

A Monty Python sketch (“Spam,” aired in 1970) depicts a café where nearly every dish contains Spam, and Vikings chant “Spam” until normal conversation collapses. The transcript connects that cultural moment to the later use of “spam” online: in the 1980s, early chatrooms and message boards used “Spam” (and sketch quotes) as a way to disrupt unwanted users or competitors. Around the same period, “spam” also became the term for repeated, excessive posting on file- and message-sharing networks.

What does “spam-like media” mean in this argument?

It includes more than unwanted emails. It also covers digital content that exploits platform algorithms, automation, and/or excessive posting—often to generate views and engagement rather than provide value. The emphasis is that spam-like content can be engineered to game ranking systems, not just to annoy readers.

What evidence is offered for the scale of fake traffic online?

The transcript references the "dead internet theory" but treats it as partly questionable. It then cites Imperva's findings (reported in 2016): bots accounted for over half of all online activity; about 30% of website visits were likely from "bad bots" designed for harmful actions like theft and hacking; and roughly 20% were "good bots" that help monitor and maintain parts of the internet. The argument is that these proportions have likely increased since then, even if any individual's feed appears curated.

Why does algorithmic ranking change creator behavior, according to Jack Conte?

Conte argues that platforms like YouTube, Facebook, Instagram, and TikTok shifted from "follow"-based distribution to algorithmic ranking based on engagement metrics such as watch time. With the follow relationship less central, creators can't rely on a stable audience seeing their posts. Instead, they must produce content that satisfies ranking criteria they don't control—shifting creative decisions from "what lights me up" to "what the algorithm will favor," which can reduce creative freedom and encourage repetitive, optimized output.
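The shift Conte describes can be illustrated with a toy sketch (a hypothetical model for this summary, not any platform's actual algorithm): a follow-based feed shows a viewer whatever the creators they follow post, while an engagement-based feed scores every candidate post on metrics like watch time and lets posts from strangers outrank posts from followed creators.

```python
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    watch_time_sec: float   # average watch time per impression
    likes_per_view: float
    followed_by_user: bool  # does this viewer follow the creator?

def follow_feed(posts):
    # Old model: the viewer sees posts from creators they follow.
    return [p for p in posts if p.followed_by_user]

def engagement_feed(posts):
    # New model: every post competes on engagement signals,
    # regardless of follow status. Weights are invented for illustration.
    score = lambda p: 0.8 * p.watch_time_sec + 100 * p.likes_per_view
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("niche_creator", watch_time_sec=20, likes_per_view=0.02, followed_by_user=True),
    Post("viral_copycat", watch_time_sec=45, likes_per_view=0.10, followed_by_user=False),
]

print([p.creator for p in follow_feed(posts)])      # only the followed creator
print([p.creator for p in engagement_feed(posts)])  # high-engagement post ranks first
```

In this sketch the followed creator's post still exists, but the engagement feed surfaces the unfollowed, higher-watch-time post first, which is the incentive shift Conte points to.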

How does generative AI intensify the spam and disconnection problem?

Generative AI can automate more of what used to require humans: messaging in chats, generating digital art and video, managing profiles and stores, and even executing purchases and economic decisions. The transcript’s concern is that humans become less involved, while synthetic content and automated engagement grow—making it harder to distinguish genuine interaction from algorithmic or AI-driven activity.

What solutions are proposed, and what limits are acknowledged?

The transcript calls for reasonable regulations around AI and internet media and for new technologies that mitigate existing and upcoming problems. It also notes a historical lag: safeguards designed to restrain bad actors often arrive after the harmful techniques. That leaves responsibility with creators, consumers, and platform founders to shape what gets rewarded and what gets ignored.

Review Questions

  1. What mechanisms connect ad-driven algorithmic feeds to the rise of spam-like content and repetitive creator output?
  2. How do bots and AI-generated activity affect the reliability of online engagement, even when a user’s personal feed seems curated?
  3. Why does the transcript treat "follow"-based architecture as important for creativity and creator-audience connection, and what happens when it weakens?

Key Points

  1. “Spam-like media” is defined as unwanted or low-value communication, including content that games algorithms through automation and excessive posting for engagement.
  2. The term “spam” traces back to a Monty Python sketch, then became a practical label for disruptive posting and unwanted messages in early online communities.
  3. Internet scale and smartphone ubiquity increased both opportunity and exposure, making spam and harmful content harder to avoid.
  4. Fake traffic is supported by bot-heavy measurements; Imperva reported in 2016 that bots drove over half of online activity, with a large share tied to harmful “bad bots.”
  5. Algorithmic ranking based on engagement metrics can weaken the creator-audience “follow” relationship and push creators toward content optimized for ranking rather than intent.
  6. Generative AI can automate messaging, content creation, and even commerce actions, likely accelerating synthetic engagement and spam-like dynamics.
  7. Regulation and technical safeguards are needed, but they often lag behind new abuse methods—so creators and consumers retain meaningful responsibility.

Highlights

Spam started as a comedy gag about a café where every dish is Spam, then became a durable term for unwanted, repeated, or irrelevant online communication.
Imperva’s 2016 measurements found bots behind over half of online activity, with substantial portions attributed to harmful “bad bots.”
Jack Conte argues that algorithmic feeds break the follow-based architecture that supports human creativity and direct creator-audience connection.
Generative AI lowers the cost of producing synthetic content and automated interactions, raising the odds of disconnection and spam-like media.
The proposed path forward combines regulation, better safeguards, and user/creator agency to keep digital spaces human-centered.

Topics

  • Spam Origins
  • Algorithmic Feeds
  • Dead Internet Theory
  • Bot Traffic
  • Generative AI