Gen AI gone wild... how artificial intelligence keeps failing us
Based on Fireship's video on YouTube. If you enjoy this content, support the original creator by watching, liking, and subscribing.
Briefing
The most urgent theme running through these examples is that today's "AI progress" often fails in ways that are unsafe, financially unsustainable, or built on questionable incentives, raising the question of whether large language models are delivering real intelligence or just marketing momentum.
Stability AI sits near the top of that concern. Despite building widely used open image models such as Stable Diffusion, the company has struggled to raise additional funding at a reported $4 billion valuation. Its leadership has also signaled a shift in direction, with the founder and CEO planning to step down and arguing that centralized AI won't be beaten by more centralized AI. The underlying problem is straightforward: training large foundation models demands massive cash burn, and without the steady revenue streams its bigger competitors enjoy, Stability AI may not be able to fund the next leap. The stakes are existential: if the company can't monetize or secure funding quickly, the open-model ecosystem it helped popularize could stall.
The failures then move from business risk to everyday harm. A personal anecdote about Google's Gemini illustrates how AI can produce dangerously wrong advice: the model suggested adding "non-toxic glue" to pizza sauce to increase tackiness, a recommendation the narrator treats as a health red flag. The same segment warns against asking AI about depression or homicidal feelings, implying that the system's responses can be unreliable on emotional or high-risk topics.
Privacy and consent issues also surface sharply in the discussion of Meta. Training competitive models requires large datasets, and Meta's approach is framed as leveraging Facebook and Instagram user data by default for AI purposes. Opting out is described as intentionally cumbersome, requiring a form with a written explanation and a one-time password; in the narrator's view, that friction makes it hard for users to meaningfully refuse.
Two consumer products, the Humane AI Pin and the Rabbit R1, are treated as emblematic of "AI hardware" hype that doesn't justify its cost. The Humane AI Pin, backed by years of development and a reported $230 million in stealth funding, is portrayed as struggling to find buyers, while the Rabbit R1 is criticized as redundant because its functionality could be replicated by a phone app.
Finally, the segment points to OpenAI's GPT-5 announcement as a credibility problem. The news that OpenAI has only just begun training its next frontier model is framed as disappointing because it suggests AGI hasn't been achieved internally, as some observers expected after the company's leadership turmoil. The discussion ties that to claims that Sam Altman misled the board and now leads a new nine-person Safety Committee, raising suspicion that "AI safety" rhetoric may be used to sustain hype, delay accountability, and keep regulatory pressure favorable.
Across all five cases, the throughline is skepticism: large language models may be useful tools, but the broader ecosystem of funding, safety, privacy, and consumer value often looks misaligned with the promise of genuine intelligence.
Cornell Notes
The transcript argues that recent AI developments repeatedly fail on practical grounds: unsafe or nonsensical outputs, privacy-by-default data practices, and products that don't deliver unique value. Stability AI is presented as financially fragile despite building major open models like Stable Diffusion, because training frontier models demands huge cash burn. Google's Gemini is used as an example of AI giving harmful-sounding advice, while Meta is criticized for making it difficult to opt out of AI data collection. Consumer "AI devices" like the Humane AI Pin and the Rabbit R1 are portrayed as redundant or overpriced. The segment ends by questioning OpenAI's GPT-5 messaging and whether AGI timelines and "safety" narratives are being used to maintain hype and influence regulation.
- Why is Stability AI portrayed as a high-risk player despite its open-model success?
- What example is used to illustrate that AI can give dangerous or absurd advice?
- How does the transcript characterize Meta's approach to user data for AI training?
- What's the critique of the Humane AI Pin and the Rabbit R1?
- Why does the transcript question OpenAI's GPT-5 announcement and AGI progress?
Review Questions
- Which specific financial constraint is highlighted as the biggest threat to Stability AI’s survival?
- What mechanisms does the transcript claim make Meta’s opt-out process harder than it should be?
- How does the transcript connect leadership changes at OpenAI to skepticism about AGI timelines and safety narratives?
Key Points
1. Stability AI's open-model reputation doesn't eliminate the risk that frontier-model training requires massive cash burn without a quick revenue engine.
2. OpenAI's GPT-5 messaging is treated as a credibility signal that AGI progress may be slower than public timelines implied.
3. Google's Gemini is used as an example of AI producing recommendations that could be unsafe or nonsensical in real-world contexts.
4. Meta's data-for-AI approach is criticized as opt-out-by-friction, relying on default collection plus a multi-step opt-out process.
5. The Humane AI Pin and the Rabbit R1 are framed as cases where "AI hardware" fails to justify its cost or uniqueness compared with existing phone-based alternatives.
6. The transcript links AI safety rhetoric to incentives that may sustain hype and influence regulation, including concerns about regulatory capture.