
Gen AI gone wild... how artificial intelligence keeps failing us

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Stability AI’s open-model reputation doesn’t shield it from the core risk: frontier-model training demands massive cash burn, and the company lacks a quick revenue engine to fund it.

Briefing

The most urgent theme running through these examples is that today’s “AI progress” often fails in ways that are either unsafe, financially unsustainable, or built on questionable incentives—raising the question of whether large language models are delivering real intelligence or just marketing momentum.

Stability AI sits near the top of that concern. Despite building widely used open image models such as Stable Diffusion, the company has struggled to raise additional funding at a reported $4 billion valuation. Its leadership has also signaled a shift in direction, with the founder and CEO planning to step down and arguing that centralized AI won’t be beaten by more centralized AI. The underlying problem is straightforward: training large foundation models requires massive cash burn, and unlike companies with steady revenue streams, Stability AI may not be able to fund the next leap without a rapid path to higher revenue. The stakes are existential—if it can’t monetize or secure funding quickly, the open-model ecosystem it helped popularize could stall.

The failures then move from business risk to everyday harm. A personal anecdote about Google’s Gemini illustrates how AI can produce dangerously wrong advice: the model suggested adding “non-toxic glue” to pizza sauce to increase tackiness, a recommendation the narrator treats as a health red flag. The same segment warns against asking AI about depression or homicidal feelings, implying that the system’s responses can be unreliable when emotional or high-risk topics are involved.

Privacy and consent issues also surface sharply in the discussion of Meta. Training competitive models requires large datasets, and Meta’s approach is framed as leveraging Facebook and Instagram user data by default for AI purposes. Opting out is described as intentionally cumbersome—requiring a form with a written explanation and a one-time password—creating friction that, in the narrator’s view, makes it harder for users to meaningfully refuse.

Two consumer products—the Humane AI Pin and the Rabbit R1—are treated as emblematic of “AI hardware” hype that doesn’t justify its cost. The Humane AI Pin, backed by years of development and a reported $230 million in stealth funding, is portrayed as struggling to find buyers, while the Rabbit R1 is criticized as redundant because similar functionality can be replicated by a phone app.

Finally, the segment points to OpenAI’s GPT-5 announcement as a credibility problem. Training a “New Frontier Model” is framed as disappointing because it suggests AGI hasn’t been achieved internally, as some observers expected after the company’s leadership turmoil. The discussion ties that to a former board member’s claim that Sam Altman misled the board, and to his role leading OpenAI’s new 9-person Safety Committee—raising suspicion that “AI safety” rhetoric may be used to sustain hype, delay accountability, and keep regulatory pressure favorable.

Across all five cases, the throughline is skepticism: large language models may be useful tools, but the broader ecosystem—funding, safety, privacy, and consumer value—often looks misaligned with the promise of genuine intelligence.

Cornell Notes

The transcript argues that recent AI developments repeatedly fail on practical grounds: unsafe or nonsensical outputs, privacy-by-default data practices, and products that don’t deliver unique value. Stability AI is presented as financially fragile despite building major open models like Stable Diffusion, because training frontier models demands huge cash burn. Google’s Gemini is used as an example of AI giving harmful advice, while Meta is criticized for making AI data collection opt-out difficult. Consumer “AI devices” like the Humane AI Pin and Rabbit R1 are portrayed as redundant or overpriced. The segment ends by questioning OpenAI’s GPT-5 messaging and whether AGI timelines and “safety” narratives are being used to maintain hype and influence regulation.

Why is Stability AI portrayed as a high-risk player despite its open-model success?

Stability AI is described as more open than competitors and credited with strong open image models like Stable Diffusion, yet it reportedly failed to raise additional money at a $4 billion valuation. The founder and CEO plans to step down after arguing that centralized AI won’t be beaten by more centralized AI. The transcript emphasizes the economics: training large foundation models requires billions in cash, and without a fast revenue path, the company could fail even if its models are popular.

What example is used to illustrate that AI can give dangerous or absurd advice?

A personal anecdote claims Google’s Gemini recommended adding an eighth of a cup of “non-toxic glue” to pizza sauce to make it tackier. The narrator treats this as a health hazard and warns against trusting AI on sensitive topics like depression or homicidal feelings. The point is that model output can be wrong in ways that matter to real-world safety.

How does the transcript characterize Meta’s approach to user data for AI training?

Meta is framed as wanting to use Facebook and Instagram user data to train models like Llama. The transcript claims Meta collects data for AI by default and makes opting out intentionally difficult: users must fill out a form with a written explanation, then request a one-time password before submitting. The criticism is that this friction, shaped by legal risk management, reduces the likelihood of a meaningful opt-out.

What’s the critique of the Humane AI Pin and Rabbit R1?

The Humane AI Pin is described as a long-developed device (six years in stealth) that raised $230 million, yet the company is portrayed as struggling to find a buyer at a reported $1 billion target. The Rabbit R1 is criticized as essentially duplicating what a phone app can do, making it feel like a “useless product.” The broader claim is that “AI hardware” hype can outpace real differentiation.

Why does the transcript question OpenAI’s GPT-5 announcement and AGI progress?

GPT-5 is described as a “New Frontier Model” meant to bring capabilities toward AGI, but the transcript reads that as implying AGI hasn’t been achieved internally. It connects this to leadership upheaval: Sam Altman’s firing and later return, plus claims from a former board member that Altman misled the board. The transcript also notes Altman’s role heading OpenAI’s new 9-person Safety Committee, suggesting “safety” messaging could be tied to maintaining hype and enabling regulatory capture.

Review Questions

  1. Which specific financial constraint is highlighted as the biggest threat to Stability AI’s survival?
  2. What mechanisms does the transcript claim make Meta’s opt-out process harder than it should be?
  3. How does the transcript connect leadership changes at OpenAI to skepticism about AGI timelines and safety narratives?

Key Points

  1. Stability AI’s open-model reputation doesn’t shield it from the core risk: frontier-model training demands massive cash burn without a quick revenue engine to fund it.

  2. OpenAI’s GPT-5 messaging is treated as a credibility signal that AGI progress may be slower than public timelines implied.

  3. Google’s Gemini is used as an example of AI producing recommendations that could be unsafe or nonsensical in real-world contexts.

  4. Meta’s data-for-AI approach is criticized as opt-out-by-friction, relying on default collection plus a multi-step opt-out process.

  5. The Humane AI Pin and Rabbit R1 are framed as cases where “AI hardware” fails to justify its cost or uniqueness compared with existing phone-based alternatives.

  6. The transcript links AI safety rhetoric to incentives that may sustain hype and influence regulation, including concerns about regulatory capture.

Highlights

Stability AI is portrayed as financially vulnerable: open image models like Stable Diffusion aren’t enough when training costs demand billions and fundraising stalls.
A Gemini anecdote claims the system suggested adding “non-toxic glue” to pizza sauce—an example used to argue AI outputs can be hazardous.
Meta’s opt-out process is described as intentionally cumbersome: a form with written explanation plus a one-time password before submission.
The Humane AI Pin and Rabbit R1 are treated as “AI device” hype with limited real differentiation from phone apps.
GPT-5 is framed as disappointing because it implies AGI still isn’t achieved internally, while leadership and safety-committee moves raise questions about messaging incentives.
