AI Backlash is getting real!
Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to her channel.
AI resistance is expanding beyond job fears into a broader rejection of AI deployments that users can’t easily opt out of.
Briefing
AI resistance is no longer limited to complaints about “slop” or job anxiety; it’s hardening into a broader rejection of how artificial intelligence is being deployed—especially when companies roll it out by default rather than offering users real choice. The backlash shows up across platforms and industries, from social media to search, from creative work to infrastructure.
On social media, Meta’s moves triggered immediate pushback. In January, the company removed dozens of AI-generated Instagram and Facebook profiles—fake personas such as “Liv” and “Carter the dating coach”—after real users reportedly disliked them. In March, Meta introduced AI search features, but many users tried to suppress the “Meta AI” option using browser extensions, while Reddit discussions amplified the resistance. A similar pattern played out earlier with Google’s AI search, which later shifted into a separate tab—an implicit concession that users wanted control over whether AI appears in their experience.
The friction isn’t confined to consumer-facing features. In software communities, some groups have moved from criticism to policy. Open-source projects including QEMU and libvirt have banned AI-generated contributions, and Cloud Hypervisor has adopted rules forbidding code generated by large language models—leaving contributors to fall back on traditional workflows, such as searching Stack Overflow for solutions.
Even when AI output is good, people still resist. Digital art and other creative uses have sparked “No-AI” requirements at events like FanX and GalaxyCon, and some music venues have banned AI-generated artwork even for flyers. Contracts are tightening too: the transcript describes signing an agreement that required confirming an essay wasn’t written with AI, with the wry prediction that future contracts might demand proof of “suffering.”

Financial pressure is part of the concern. The International Confederation of Societies of Authors and Composers estimates that creators in music and audio/video content could lose roughly 21% to 24% of revenue over the next five years as Generative AI’s market share grows. The speaker suspects this estimate may be low: if shrinking income pushes creators to seek other revenue streams and produce less, GenAI’s share could grow even faster.
Resistance also extends to physical infrastructure. In the United States, multiple towns and counties across states including Michigan, Illinois, and New York have refused AI data centers, and similar opposition is reported in Spain, the Netherlands, Ireland, and France. Land and water use sit at the center of those objections.
Yet the story has a twist: many people don’t uniformly reject AI. In health-related contexts, users reportedly prefer AI chatbots for discussing embarrassing topics, while they prefer shouting at humans when angry. Retail surveys suggest consumers trust AI more than humans for outfit selection. A healthcare study cited in the transcript found patients rated AI-written physician replies as more empathetic than human-written ones—suggesting that “the robots just care more.”
The emerging pattern points to a likely split as AI grows more capable: one camp embraces it, while another builds “human-only” products and spaces. The common thread across the backlash is less about whether AI works and more about whether people can opt out when it’s pushed into their lives by default—an issue that may become even more divisive as systems approach human-level intelligence.
Cornell Notes
The transcript argues that AI backlash is expanding beyond “slop” and job fears into a wider rejection of how AI is being rolled out—often without meaningful user choice. Examples include Meta removing AI-generated social profiles, users trying to hide “Meta AI,” and open-source projects like QEMU and libvirt banning AI-generated code contributions. Creative industries are also responding with “No-AI” event rules and contracts requiring proof that AI wasn’t used, alongside estimates that Generative AI could cut creator revenue by roughly 21–24% over five years. Still, the resistance isn’t universal: people sometimes prefer AI for health conversations, retail outfit advice, and even perceived empathy in physician replies. The likely future is two camps—AI adopters and “human-only” markets—driven by the desire for control and trust.
- Why did Meta’s AI-related changes trigger backlash, and what does that reveal about user preferences?
- How does the resistance show up in software and open-source communities?
- What role does AI quality play in the backlash, especially in art and creative work?
- What economic and legal pressures are cited as drivers of creator resistance?
- Why do some people prefer AI in certain situations, even while others reject it?
- What does the transcript predict about the future of AI adoption?
Review Questions
- Which examples suggest the backlash is mainly about lack of user control rather than AI performance?
- How do bans on AI-generated code contributions differ from “No-AI” rules in creative events?
- What evidence in the transcript supports the claim that people sometimes prefer AI over humans, and in what contexts?
Key Points
1. AI resistance is expanding beyond job fears into a broader rejection of AI deployments that users can’t easily opt out of.
2. Meta’s removal of AI-generated social profiles and the subsequent user attempts to hide “Meta AI” illustrate a demand for choice.
3. Some open-source projects (QEMU, libvirt) and Cloud Hypervisor have moved to bans on AI-generated contributions, pushing developers toward traditional workflows.
4. Creative industries are responding with “No-AI” event requirements and contracts requiring creators to confirm they didn’t use AI.
5. Generative AI’s growing market share is linked to projected revenue losses for creators, with estimates around 21–24% over five years.
6. Opposition to AI isn’t only digital: multiple regions in Europe and parts of the U.S. have refused AI data centers, often citing land and water concerns.
7. Public attitudes toward AI are mixed: people may prefer AI for health-related conversations, retail outfit advice, and perceived empathy in physician replies.