The first casualties of AI
Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
AI’s first casualties are already showing up across education, media, and legal services, while the biggest long-term threat may be to search-driven advertising and the jobs tied to it. Chegg’s business took a sharp hit after it acknowledged that ChatGPT is eroding demand for tutoring and homework help, with the transcript citing that roughly 89% of students are using ChatGPT (rendered as “Chachi PT” in the auto-generated transcript). Chegg’s response, building “CheggMate” on the OpenAI API, signals a shift from resisting AI to trying to monetize it, even as the platform’s core value proposition (helping students cheat) gets undercut by AI that can generate answers on demand.
The ripple effects extend beyond homework. BuzzFeed and Vice Media are described as going out of business in part because large language models can now produce fast, convincing content at scale, including sensational listicles. Legal services face a more direct disruption: the transcript claims that once a ChatGPT retrieval plugin goes live, companies could connect their own data and generate legal documents on the fly, making paid template-filling services like LegalZoom less attractive. That frames a broader pattern: when AI can draft, summarize, and personalize outputs using user-provided context, entire categories of “human-in-the-loop” services become optional.
The transcript then pivots to the corporate stakes for the companies most exposed to AI-driven behavior change. Alphabet/Google is flagged as particularly vulnerable if ChatGPT continues growing at the pace described as “the fastest growing app of all time,” because Google search traffic—and therefore search ads—could decline. Search ads are presented as a major revenue engine, accounting for more than half of Alphabet’s revenue. Google’s counter-move is implied to be imminent, with Google I/O “exactly one week away,” suggesting new product announcements could aim to defend search and ad dominance.
Risk and governance enter the story through two high-profile signals. Deep learning pioneer Geoffrey Hinton quit Google, and the transcript ties his resignation to a desire to speak freely about AI dangers, specifically concerns that political actors like Vladimir Putin and Ron DeSantis could use AI for election influence or even for autonomous “killing machines.” The transcript also notes Russia’s new model, GigaChat, and expresses skepticism that Russia will pause research even if the system proves biased.
Finally, the labor market and military applications are treated as the “hard edges” of AI adoption. Palantir is cited for demonstrating AI-assisted targeting workflows that generate multiple courses of action for commanders. IBM is described as cutting 7,800 jobs over five years, mostly in back-office functions like HR—reinforcing the idea that AI replacement often starts with paperwork and support roles. Goldman Sachs is quoted predicting up to 300 million jobs could be lost, though the transcript adds a counterweight: programming work is said to be relatively safer for now, and AI is expected to create new roles, including ones that didn’t exist in 1940. The closing note argues that much of the “emergent” AI fear may be overstated, while hype and regulation could be benefiting insiders until everyday users can train capable models themselves.
Cornell Notes
The transcript portrays AI’s “first casualties” as real businesses losing customers and relevance, especially in tutoring, media, and legal templates, because large language models can generate answers and documents quickly and cheaply. Chegg’s decline is linked to student adoption of ChatGPT, and Chegg’s pivot to a product built on the OpenAI API (“CheggMate”) illustrates how quickly resistance turns into adaptation. Alphabet/Google is framed as carrying the biggest strategic risk if AI reduces reliance on traditional search, threatening search-ad revenue. At the same time, AI’s dangers are highlighted through Geoffrey Hinton’s resignation and concerns about political misuse and autonomous weapons. Job displacement is emphasized via IBM layoffs and Goldman Sachs’ estimate of large-scale employment losses, tempered by claims that new occupations will emerge.
Why does Chegg’s situation matter beyond one company?
What categories of services are most exposed to AI-generated text?
How could AI disrupt Google’s core business model?
What does Geoffrey Hinton’s resignation add to the story?
Where does the transcript place AI’s near-term impact on jobs?
How does the transcript connect AI to military and targeting systems?
Review Questions
- Which business models in education, media, and legal services become less valuable when AI can generate tailored text instantly?
- What revenue risk does the transcript associate with Alphabet/Google, and why would it follow from changes in user behavior?
- How does the transcript reconcile job losses (IBM, Goldman Sachs) with the claim that new jobs will be created?
Key Points
1. Chegg’s decline is tied to high student adoption of ChatGPT, undermining demand for traditional tutoring and homework-help services.
2. Chegg’s “CheggMate” plan using the OpenAI API signals a rapid industry pivot from resisting AI to building AI-assisted products.
3. Media outlets and legal-template businesses face direct pressure because large language models can produce publishable content and draft documents quickly.
4. Alphabet/Google is portrayed as especially exposed if AI reduces reliance on Google Search, threatening the search ads that account for more than half of its revenue.
5. Geoffrey Hinton’s resignation is used as a credibility signal for AI risk concerns, including political misuse and autonomous weapon scenarios.
6. AI adoption is linked to job displacement starting with back-office functions, illustrated by IBM’s planned 7,800 job cuts.
7. The transcript balances displacement claims with predictions of new job creation, while also questioning whether some AI fears are exaggerated.