
AI Backlash is getting real!

Sabine Hossenfelder · 5 min read

Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI resistance is expanding beyond job fears into a broader rejection of AI deployments that users can’t easily opt out of.

Briefing

AI resistance is no longer limited to complaints about “slop” or job anxiety; it’s hardening into a broader rejection of how artificial intelligence is being deployed—especially when companies roll it out by default rather than offering users real choice. The backlash shows up across platforms and industries, from social media to search, from creative work to infrastructure.

On social media, Meta’s moves triggered immediate pushback. In January, the company removed dozens of AI-generated Instagram and Facebook profiles—fake personas such as “Liv” and “Carter the dating coach”—after real users reportedly disliked them. In March, Meta introduced AI search features, but many users tried to suppress the “Meta AI” option using browser extensions, while Reddit discussions amplified the resistance. A similar pattern played out earlier with Google’s AI search, which later shifted into a separate tab—an implicit concession that users wanted control over whether AI appears in their experience.

The friction isn’t confined to consumer-facing features. In software communities, some groups have moved from criticism to policy. Open-source coding ecosystems including QEMU and libvirt have banned AI-generated contributions, and Cloud Hypervisor has adopted rules forbidding code generated by large language models—forcing contributors to rely on older methods such as searching Stack Overflow for solutions.

Even when AI output is good, people still resist. Digital art and other creative uses have sparked “No-AI” requirements at events like FanX and GalaxyCon, and some music venues have banned AI-generated artwork even for flyers. Contracts are tightening too: the transcript describes signing an agreement requiring confirmation that an essay wasn’t written with AI, with a wry expectation that future demands could escalate into proof of “suffering.” Financial pressure is part of the concern. The International Confederation of Societies of Authors and Composers estimates that over the next five years, creators in music and audio/video content could lose roughly 21% to 24% of revenue due to Generative AI’s growing market share—an estimate the speaker suspects may be low as creators adapt by seeking other income streams, potentially further increasing GenAI’s share.

Resistance also extends to physical infrastructure. In the United States, multiple towns and counties across states including Michigan, Illinois, and New York have refused AI data centers, and similar opposition is reported in Spain, the Netherlands, Ireland, and France. Land and water use sit at the center of those objections.

Yet the story has a twist: many people don’t uniformly reject AI. In health-related contexts, users reportedly prefer AI chatbots for discussing embarrassing topics, while they prefer shouting at humans when angry. Retail surveys suggest consumers trust AI more than humans for outfit selection. A healthcare study cited in the transcript found patients rated AI-written physician replies as more empathetic than human-written ones—suggesting that “the robots just care more.”

The emerging pattern points to a likely split as AI grows more capable: one camp embraces it, while another builds “human-only” products and spaces. The common thread across the backlash is less about whether AI works and more about whether people can opt out when it’s pushed into their lives by default—an issue that may become even more divisive as systems approach human-level intelligence.

Cornell Notes

The transcript argues that AI backlash is expanding beyond “slop” and job fears into a wider rejection of how AI is being rolled out—often without meaningful user choice. Examples include Meta removing AI-generated social profiles, users trying to hide “Meta AI,” and open-source projects like QEMU and libvirt banning AI-generated code contributions. Creative industries are also responding with “No-AI” event rules and contracts requiring proof that AI wasn’t used, alongside estimates that Generative AI could cut creator revenue by roughly 21–24% over five years. Still, the resistance isn’t universal: people sometimes prefer AI for health conversations, retail outfit advice, and even perceived empathy in physician replies. The likely future is two camps—AI adopters and “human-only” markets—driven by the desire for control and trust.

Why did Meta’s AI-related changes trigger backlash, and what does that reveal about user preferences?

Meta’s January removal of dozens of AI-generated Instagram/Facebook profiles (fake personas like “Liv” and “Carter the dating coach”) followed reports that real users seriously disliked them. In March, when Meta rolled out AI search features, many users didn’t want the AI option in their experience; some used browser extensions to hide “Meta AI,” and Reddit discussions amplified the pushback. The broader takeaway is that resistance often targets forced integration—people want AI as an option, not an automatic default.

How does the resistance show up in software and open-source communities?

Some coding communities have moved from criticism to explicit contribution rules. QEMU and libvirt have banned AI-generated contributions, and Cloud Hypervisor has adopted policies that forbid contributions generated by large language models. The practical effect is that contributors must use “old-fashioned” workflows—such as searching Stack Overflow answers—rather than submitting AI-produced code.

What role does AI quality play in the backlash, especially in art and creative work?

The transcript argues that people resist even when AI works well. Digital art and other outputs can still trigger “No-AI” requirements at events like FanX or GalaxyCon, and some music venues ban AI-generated artwork even for flyers. Contracts can also require creators to confirm they didn’t use AI to write essays, reflecting a growing norm that authorship and process matter as much as final output.

What economic and legal pressures are cited as drivers of creator resistance?

The International Confederation of Societies of Authors and Composers estimates that within five years, creators of music and audio/video content could lose about 21% to 24% of revenue due to Generative AI's increasing market share. The transcript suggests this may be an underestimate because creators may shift to other income strategies, potentially accelerating GenAI's share further. The described contracts requiring confirmation of non-AI authorship show how legal mechanisms are tightening alongside economic fears.

Why do some people prefer AI in certain situations, even while others reject it?

The transcript highlights a context-dependent split. For health-related topics that feel embarrassing, people reportedly prefer AI chatbots. When angry, they prefer confronting a human. In retail, surveys suggest consumers trust AI more than humans to pick good outfits. A healthcare study cited in the transcript found patients rated AI-written physician replies as more empathetic than human-written ones—framing AI as “caring more” in ways that matter to users.

What does the transcript predict about the future of AI adoption?

As AI approaches human-level intelligence, the transcript predicts two extreme camps: tech enthusiasts who embrace it, and a rejecting camp that builds businesses around "human-only" spaces and products. The underlying driver is divisiveness, especially around whether AI is optional and trustworthy, rather than a simple belief that AI is always bad.

Review Questions

  1. Which examples suggest the backlash is mainly about lack of user control rather than AI performance?
  2. How do bans on AI-generated code contributions differ from “No-AI” rules in creative events?
  3. What evidence in the transcript supports the claim that people sometimes prefer AI over humans, and in what contexts?

Key Points

  1. AI resistance is expanding beyond job fears into a broader rejection of AI deployments that users can’t easily opt out of.
  2. Meta’s removal of AI-generated social profiles and the subsequent user attempts to hide “Meta AI” illustrate a demand for choice.
  3. Some open-source projects (QEMU, libvirt) and Cloud Hypervisor have moved to bans on AI-generated contributions, pushing developers toward traditional workflows.
  4. Creative industries are responding with “No-AI” event requirements and contracts requiring creators to confirm they didn’t use AI.
  5. Generative AI’s growing market share is linked to projected revenue losses for creators, with estimates around 21–24% over five years.
  6. Opposition to AI isn’t only digital: multiple regions in Europe and parts of the U.S. have refused AI data centers, often citing land and water concerns.
  7. Public attitudes toward AI are mixed: people may prefer AI for health-related conversations, retail outfit advice, and perceived empathy in physician replies.

Highlights

Meta removed dozens of AI-generated Instagram and Facebook profiles after real users reportedly hated them—an early sign that fake personas can backfire.
Open-source communities like QEMU and libvirt have banned AI-generated contributions, while Cloud Hypervisor forbids large-model-generated code submissions.
Even strong AI output doesn’t guarantee acceptance: “No-AI” rules at events like FanX and GalaxyCon and bans on AI artwork for flyers show process and authorship concerns.
A healthcare study cited in the transcript found patients rated AI-written physician replies as more empathetic than human-written ones, complicating the idea that people simply reject AI.
The transcript predicts a two-camp future: AI adopters versus “human-only” businesses and spaces as systems approach human-level intelligence.

Topics

  • AI Backlash
  • User Choice
  • Open-Source Policies
  • Generative AI Revenue
  • AI Data Centers
