The Rabbit Is A Scam
Based on ThePrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Rabbit R1’s LAM (Large Action Model) was marketed as a foundational “words to action” system, but critics claim its observed behavior is brittle and often fails in real integrations like DoorDash.
Briefing
Rabbit R1’s “LAM” (Large Action Model) pitch—an on-device system that can turn requests into real actions across apps—has come under intense scrutiny after repeated attempts to replicate its advertised behavior failed, and after code-level claims suggested the core automation is largely off-the-shelf tooling rather than a new, foundational model.
The central promise was that the LAM could “bring AI from words to action,” handling tasks like organizing daily routines, messaging friends, restocking groceries, and even navigating websites to complete purchases or bookings. In practice, the device’s integrations were described as brittle and often nonfunctional. When the system tried to place orders through DoorDash, it returned “under maintenance,” and other demos were portrayed as relying on rigid, pre-scripted steps that break when interfaces change. The transcript repeatedly returns to a key mismatch: large language models are good at generating text, but they struggle with precise, step-by-step actions—especially when apps redesign screens, introduce pop-ups, or vary flows across users.
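The failure mode described above can be sketched in a few lines. This is a hypothetical illustration, not Rabbit's actual code: a scripted flow that walks a fixed list of selectors succeeds on the exact layout it was recorded against and aborts on any variation.

```python
# Hypothetical sketch of a rigid, pre-scripted ordering flow.
# The page is modeled as a dict of "selector -> element"; a real script
# would drive a browser, but the failure mode is the same: every step
# assumes the exact layout the script was written against.

def scripted_order(page):
    """Walk a fixed sequence of selectors; abort on the first miss."""
    steps = ["#search-box", "#item-card-0", "#add-to-cart", "#checkout"]
    for selector in steps:
        if selector not in page:
            return f"failed at {selector}"  # no recovery, no reasoning
    return "order placed"

# The layout the script was recorded against:
v1 = {"#search-box": "input", "#item-card-0": "card",
      "#add-to-cart": "button", "#checkout": "button"}

# The same site after a redesign renames one element id:
v2 = {"#search-box": "input", "#item-card-0": "card",
      "#add-to-basket": "button", "#checkout": "button"}

print(scripted_order(v1))  # order placed
print(scripted_order(v2))  # failed at #add-to-cart
```

A single renamed id, an extra pop-up step, or an A/B-tested layout has the same effect: the fixed sequence no longer matches the page, and the flow dies with no fallback.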
A major allegation is that the LAM isn’t a new AI model at all, but a wrapper around existing systems—particularly ChatGPT—paired with hardcoded automation scripts. The transcript claims the “action” layer is implemented using Playwright (a web automation framework) to simulate clicks and navigation, while the language layer handles the prompt-to-text side. That distinction matters because a script that works on one version of a site can fail when the UI shifts, when A/B tests change layouts, or when captchas appear. The transcript also argues that the device’s behavior can’t be reliably verified as “intelligent” web control, because the observed actions look like deterministic automation rather than a model that truly understands the interface.
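The deterministic-versus-adaptive distinction can be made concrete with a hypothetical contrast (illustrative names only; neither function is from Rabbit's codebase): a recorded script depends on exact element ids, while even a minimally adaptive agent would locate elements by their visible meaning.

```python
# Hypothetical contrast between the two behaviors the transcript
# distinguishes: deterministic replay vs. matching on visible meaning.

def replay_by_id(page, recorded_id):
    """Deterministic replay: works only if the recorded id still exists."""
    return page.get(recorded_id)

def find_by_label(page, wanted):
    """Minimal adaptation: match on visible text, surviving an id rename."""
    for element_id, label in page.items():
        if label.lower() == wanted.lower():
            return element_id
    return None

# After an A/B test renames the button id but keeps its label:
redesigned = {"#btn-basket": "Add to cart", "#checkout": "Checkout"}

print(replay_by_id(redesigned, "#btn-cart"))     # None -> the script breaks
print(find_by_label(redesigned, "add to cart"))  # #btn-basket
```

The transcript's argument is that the device's observed failures match the first pattern: breaking exactly where a recorded script would break, rather than degrading the way a model that understands the page would.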
Beyond functionality, the transcript raises concerns about transparency and marketing accuracy. It describes instructions allegedly embedded in the system prompt that prevent the device from stating it uses OpenAI’s models, and it claims the LAM’s “faster than ChatGPT” messaging is misleading because much of the experience is still based on ChatGPT, with other services used for search. It also alleges that “LAM” is treated as a marketing term, with an anonymous employee reportedly saying the advertised LAM capability doesn’t exist as described.
Security and privacy concerns are another thread. The transcript claims the system tracks precise geographic location, that data-handling practices are questionable, and that parts of the backend are fragile enough to expose user conversations if compromised. It also alleges that the cloud environment used to run tasks can be accessed in ways that allow unrelated software (like Doom) to run, and it cites code flags related to captchas—suggesting the system may pause for human solving or rely on manual workarounds during demos.
Taken together, the transcript frames Rabbit R1 as an overpromised consumer automation device: a $200 product justified by the supposed “large action model” breakthrough, but allegedly built on a combination of existing LLMs and Playwright scripts that struggle with real-world website variability, captchas, and integration maintenance. The dispute ends with a company response emphasizing patents, microservices-based segregation, and a focus on customer data protection—while critics argue the gap between what was sold and what was delivered remains unresolved.
Cornell Notes
Rabbit R1’s “LAM” is marketed as a foundational AI that can convert requests into actions across apps and websites. Critics say the promised capability doesn’t match observed behavior: integrations fail, and website control appears to rely on hardcoded Playwright-style automation scripts rather than a truly adaptive “large action model.” The transcript also alleges the LAM is largely a wrapper around existing systems (including ChatGPT) plus automation, with messaging that may obscure what models are actually used. Security and privacy concerns are raised as well, including claims about location tracking, backend fragility, and how captchas are handled. The practical takeaway is that brittle automation and unclear model transparency can undermine trust even when the interface feels “AI-powered.”
What’s the key gap between the LAM’s marketing and what critics claim it can actually do?
Why do captchas and UI changes matter so much for “AI that controls websites”?
What does the transcript claim about the LAM’s relationship to ChatGPT and other tools?
How does the transcript connect the device’s “web control” to Playwright?
What privacy and security concerns are raised beyond “it doesn’t work as advertised”?
What’s the broader lesson the transcript draws about AI hype and product claims?
Review Questions
- What specific technical distinction does the transcript emphasize between language generation and reliable action execution on websites?
- How would a step-by-step automation approach fail under A/B testing or UI redesign, and what role do captchas play in that failure mode?
- What transparency issues does the transcript allege about how the LAM uses existing models, and why would those omissions matter to consumers?
Key Points
1. Rabbit R1’s LAM was marketed as a foundational “words to action” system, but critics claim observed behavior looks brittle and often fails in real integrations like DoorDash.
2. The transcript repeatedly argues that reliable website control is implemented via hardcoded Playwright-style automation scripts rather than a truly adaptive large action model.
3. UI changes, A/B tests, pop-ups, and captchas are presented as core reasons rigid automation breaks, undermining the “AI that understands the page” claim.
4. Multiple allegations focus on transparency: the LAM is claimed to be a wrapper around existing LLMs (including ChatGPT) plus automation, with messaging that may obscure what models are actually used.
5. Security and privacy concerns are raised, including claims about location tracking, backend fragility, and how captchas are handled (including possible human-in-the-loop workarounds).
6. The dispute centers on a perceived gap between what was sold (a $200 device justified by the LAM’s promised breakthrough) and what critics say was delivered (off-the-shelf components and scripts).
7. Rabbit’s response emphasizes patents, microservices segregation, and data protection, while critics argue the core capability claims remain unsupported.