The Internet Will End Soon…
Based on Pursuit of Wonder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A growing mix of fake traffic, algorithm-driven feeds, and “spam-like” content is reshaping the internet into something closer to a Monty Python café where every dish is Spam—only now the menu is personalized, automated, and optimized for engagement. The core warning is that today’s online environment increasingly rewards repetition, low-friction consumption, and content engineered to satisfy ranking systems, not human curiosity. That shift matters because it affects what people see, what creators make, and whether online spaces still support genuine connection rather than noise.
The discussion traces the word “spam” from comedy to internet vernacular. The 1970 Monty Python sketch “Spam” features Vikings chanting “Spam” until conversation collapses; the word later became shorthand for excessive, irrelevant, unwanted, insincere, or repeated communication. In the 1980s, early online communities used “spam” and related quotes as a gatekeeping tactic, typing the word repeatedly to scroll unwanted users’ or competitors’ messages off-screen. Around the same era, “spam” also came to mean repeated posting on file- and message-sharing networks.
From there, the argument widens into a systems problem. Internet growth dramatically expanded access and creativity: from tens of thousands of hosts in the early 1980s to hundreds of thousands by the late 1980s, the creation of the World Wide Web in 1989, and a rapid rise in websites and household computer ownership by the mid-1990s. By 2024, smartphone ownership is estimated at about 70% of the global population, with billions of websites and average daily online time of roughly 6 hours and 35 minutes. With that scale came new distribution channels for harmful and low-quality content.
“Spam-like media” is defined broadly: unwanted emails, but also digital content that exploits platform algorithms, automation, and excessive posting—often aimed at views and engagement rather than value. The claim is that spam isn’t a side issue anymore; it’s becoming the default flavor of the feed.
A key supporting thread is the “dead internet theory,” which suggests most online activity is bot-driven or AI-generated and curated by algorithms, implying the internet “died” around 2016. The transcript tempers the conspiracy angle with data: Imperva reported in 2016 that bots accounted for over half of online activity, with about 30% of visits attributed to “bad bots” and roughly 20% to “good bots.” Even if personal feeds are curated, fake traffic and spam-like content are portrayed as unavoidable at scale.
The most concrete mechanism comes from Jack Conte’s keynote at South by Southwest (“Death of the Follower and the Future of Creativity on the Web”). Conte argues that major platforms shifted from “follow”-based distribution to algorithmic ranking based on engagement metrics like watch time. That change, he says, breaks the creator-audience connection: creators must compete for algorithm favor rather than make what they want for a clear community. Since platforms primarily monetize through advertising, the endgame becomes profitability, and the content that thrives is often repetitive, copycat, or optimized for attention.
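Conte’s contrast between follow-based distribution and engagement ranking can be sketched in a few lines. This is an illustrative toy model, not any platform’s actual algorithm: the creator names, the posts, and the use of watch time as the sole ranking signal are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    watch_time: float  # average seconds watched; a stand-in engagement metric

# Hypothetical data: the user follows one creator; another is unfollowed but "viral".
posts = [
    Post("followed_creator", watch_time=12.0),
    Post("viral_creator", watch_time=45.0),
    Post("followed_creator", watch_time=8.0),
]
follows = {"followed_creator"}

def follow_feed(posts, follows):
    """Follow-based distribution: only posts from creators the user chose to follow."""
    return [p for p in posts if p.creator in follows]

def ranked_feed(posts):
    """Engagement ranking: all posts compete, ordered by the engagement metric."""
    return sorted(posts, key=lambda p: p.watch_time, reverse=True)

print([p.creator for p in follow_feed(posts, follows)])  # only the followed creator
print([p.creator for p in ranked_feed(posts)])           # viral_creator jumps to the top
```

In the follow-based model the audience relationship decides distribution; in the ranked model a single metric does, which is the shift Conte says pushes creators to optimize for the metric rather than for their community.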
Generative AI accelerates the problem by enabling automated messaging, content creation, and even commerce actions with less human involvement. The forecast is not that AI is inherently bad, but that without careful regulation and better technical safeguards, digital diets will become harder to navigate—more disconnected, more synthetic, and more spam-saturated. The closing emphasis is practical: creators and consumers still have agency, and the goal is sustaining genuine human creativity and connection in whatever “new internet” emerges.
Cornell Notes
Spam began as a Monty Python joke and became internet shorthand for excessive, irrelevant, unwanted, insincere, or repeated communication. As the internet scaled—from early host growth and the Web’s creation to today’s smartphone-dominated usage—spam-like media expanded from nuisance to a structural feature of online feeds. The transcript links this to two forces: fake traffic (bots and AI) and algorithmic distribution that ranks content by engagement rather than by human follow relationships. Jack Conte argues that when platforms prioritize watch time and ad revenue, creators increasingly tailor output to ranking systems, producing repetition and “dumbing down.” Generative AI may intensify these trends, making healthier digital diets and better governance more urgent.
- How did “spam” move from comedy to a core internet concept?
- What does “spam-like media” mean in this argument?
- What evidence is offered for the scale of fake traffic online?
- Why does algorithmic ranking change creator behavior, according to Jack Conte?
- How does generative AI intensify the spam and disconnection problem?
- What solutions are proposed, and what limits are acknowledged?
Review Questions
- What mechanisms connect ad-driven algorithmic feeds to the rise of spam-like content and repetitive creator output?
- How do bots and AI-generated activity affect the reliability of online engagement, even when a user’s personal feed seems curated?
- Why does the transcript treat “follow” architecture as important for creativity and organization, and what happens when it weakens?
Key Points
1. “Spam-like media” is defined as unwanted or low-value communication, including content that games algorithms through automation and excessive posting for engagement.
2. The term “spam” traces back to a Monty Python sketch, then became a practical label for disruptive posting and unwanted messages in early online communities.
3. Internet scale and smartphone ubiquity increased both opportunity and exposure, making spam and harmful content harder to avoid.
4. Fake traffic is supported by bot-heavy measurements; Imperva reported in 2016 that bots drove over half of online activity, with a large share tied to harmful “bad bots.”
5. Algorithmic ranking based on engagement metrics can weaken the creator-audience “follow” relationship and push creators toward content optimized for ranking rather than intent.
6. Generative AI can automate messaging, content creation, and even commerce actions, likely accelerating synthetic engagement and spam-like dynamics.
7. Regulation and technical safeguards are needed, but they often lag behind new abuse methods, so creators and consumers retain meaningful responsibility.