Anthropic just released the real Claude Bot...
Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Anthropic’s “Computer Use” release pushes Claude from chat into direct computer control: with a single prompt, Claude can open apps, schedule tasks, draft reports, and interact with web and desktop workflows—reportedly even when the user isn’t at the machine, by prompting from a phone. The practical implication is stark: once an AI can operate a computer autonomously, it can compress many day-to-day office actions into a single instruction, turning routine work into something closer to “delegation” than “assistance.”
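The Mac app described in the video is a packaged product, but the capability mirrors Anthropic's documented computer-use beta API, in which the model emits screenshot/click/type actions that a caller-controlled harness must execute. A minimal sketch of that bootstrap call, using the 2024-10-22 beta tool types (the prompt and display size here are illustrative, not from the video):

```python
# Minimal sketch of Anthropic's computer-use beta API (tool types as documented
# for the 2024-10-22 beta). This shows only how an agent loop is bootstrapped;
# the Mac app in the video is a packaged product built on top of this idea.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        # Lets the model propose screenshot/click/type actions for a display.
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        },
        # Shell access: each tool_use block the model returns must be executed
        # by *your* harness and fed back as a tool_result message.
        {"type": "bash_20241022", "name": "bash"},
    ],
    messages=[{"role": "user", "content": "Open my calendar and list today's meetings."}],
    betas=["computer-use-2024-10-22"],
)

# The model replies with tool_use blocks describing actions; the caller is
# responsible for executing them and looping until the task completes.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```

The architectural point is that the model only proposes actions; the harness that executes them is where any access control has to live.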
The transcript frames this capability as a new lever for both productivity and misuse. It draws a line from older “salary arbitrage” and outsourcing schemes to a more automated version of white-collar fraud: instead of hiring someone to do the work, an LLM can be used to impersonate the worker’s actions—listening to meetings, writing code, and even taking follow-up steps like scheduling pull requests and checking bank deposits. The example workflow described is intentionally granular: Claude is said to draft cover letters, join a Zoom meeting via calendar and link automation, generate code within minutes, schedule a pull request at a specified time, and then verify a paycheck deposit and move funds.
That raises the central security question: how much access should an AI be granted, and under what safeguards? The transcript contrasts Computer Use with OpenClaw, an open-source personal assistant that runs locally and is model-agnostic, whereas Computer Use is paid, closed source, Mac-only, and coupled to Claude models. It also cites a warning from Palo Alto Networks about a "dangerous combination": private-data access, exposure to untrusted content, and external communication while retaining memory, plus the broader risk profile that comes with autonomy.
Still, the transcript notes that open-source tooling can also be risky if users don’t understand command-line operation, suggesting that safety depends not just on openness but on usability and guardrails. Computer Use is presented as a “permission-first” system: it asks before accessing new apps, and it only touches folders explicitly allowed by the user. In other words, the pitch is that autonomy is constrained by explicit consent boundaries.
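The transcript does not describe how that permission check is implemented, so the following is a hypothetical sketch of the consent-boundary idea: every path the agent touches must resolve inside a folder the user explicitly granted, and anything else is denied by default. All names here are invented for illustration:

```python
# Illustrative sketch only: the transcript describes permission-first behavior
# but not its implementation. One way a consent boundary can work is that every
# path the agent touches must resolve inside a folder the user approved.
from pathlib import Path

# Hypothetical allowlist the user granted explicitly.
ALLOWED_FOLDERS = [Path.home() / "Documents" / "agent-workspace"]

def is_permitted(requested: str) -> bool:
    """Return True only if the resolved path sits inside an approved folder."""
    target = Path(requested).resolve()  # resolve() defeats ../ escape tricks
    return any(target.is_relative_to(folder.resolve()) for folder in ALLOWED_FOLDERS)

def agent_read(requested: str) -> str:
    if not is_permitted(requested):
        # Deny by default; a real system would prompt the user here instead.
        raise PermissionError(f"Agent has no grant for: {requested}")
    return Path(requested).read_text()
```

The important design property is deny-by-default: anything outside the granted folders triggers a prompt rather than silent access.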
The transcript also situates Computer Use inside a wider ecosystem of AI agents and web access. It briefly pivots to SerpAPI, a sponsor offering live web search across more than 100 search engines with structured JSON results, positioning it as a way to avoid hallucinations when real-time information matters. Taken together, the message is that agentic systems are rapidly becoming capable of end-to-end workflows, so the real battleground is not just capability, but access control, auditability, and the incentives that determine whether these tools empower workers or enable abuse.
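For concreteness, SerpAPI's official Python client (the `google-search-results` package on PyPI) returns parsed results as a plain dict. A minimal sketch, assuming an API key is available in the environment (the query is illustrative):

```python
# Minimal sketch using SerpAPI's official Python client
# (pip install google-search-results). The query is illustrative.
import os
from serpapi import GoogleSearch

search = GoogleSearch({
    "engine": "google",                        # one of the many supported engines
    "q": "Anthropic Computer Use release",
    "api_key": os.environ["SERPAPI_API_KEY"],  # assumes the key is set in the env
})
results = search.get_dict()  # structured JSON parsed into a Python dict

# Grounding an LLM answer in live results instead of parametric memory.
for hit in results.get("organic_results", [])[:3]:
    print(hit["title"], "-", hit["link"])
```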
Cornell Notes
Anthropic’s “Computer Use” turns Claude into an agent that can control a Mac computer from a single prompt—opening apps, scheduling work, drafting outputs, and performing follow-up actions. The transcript contrasts this with OpenClaw, which is free/open source, runs locally, and is model-agnostic, but carries its own safety concerns. A Palo Alto Networks warning highlights risks when an AI can access private data, interact with untrusted content, communicate externally, and retain memory. Computer Use is described as permission-first, asking before accessing new apps and limiting file access to folders the user explicitly allows. The broader takeaway: autonomy plus web/computer access changes what LLMs can do, making guardrails and consent boundaries central to whether the tool is safe and trustworthy.
What does “Computer Use” enable Claude to do, and why does that matter beyond normal chat?
How does Computer Use differ from OpenClaw in the transcript’s comparison?
What specific risks are raised when LLMs get unrestricted computer access?
What safety mechanism does the transcript attribute to Computer Use?
How does the transcript illustrate misuse potential using an end-to-end workflow example?
Why does the transcript bring up SerpAPI in the middle of the discussion?
Review Questions
- What changes when an LLM can control apps, scheduling, and web logins directly—compared with producing text responses only?
- Which risks in the transcript are tied to autonomy plus memory and external communication, and how do permission-first controls attempt to mitigate them?
- How do the transcript’s comparisons between OpenClaw and Computer Use suggest that safety depends on both access model (local vs vendor-controlled) and user operational competence?
Key Points
1. Anthropic's "Computer Use" is presented as an agentic system that can operate a Mac computer from a single prompt, including scheduling and app/web actions.
2. Autonomy that spans meetings, coding, and account access can enable both productivity and fraud-style impersonation workflows.
3. The transcript contrasts OpenClaw (free/open source, local, model-agnostic) with Computer Use (paid, closed source, macOS only, coupled to Claude models).
4. Palo Alto Networks' warning emphasizes risks when AI can access private data, interact with untrusted content, communicate externally, and retain memory.
5. Computer Use is described as permission-first, asking before accessing new apps and limiting file access to explicitly allowed folders.
6. SerpAPI is introduced as a way to provide live web/search data in structured JSON to reduce hallucinations in real-time information tasks.