Google Stitch Just Became an AI Figma (And It's Free)
Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Stitch has added an agentic design canvas that generates not just screens but a structured design system with colors, fonts, and component styling.
Briefing
Google Labs’ Stitch has shifted from simple screenshot-to-design experiments into an agentic, Figma-like workflow for generating full UI systems—complete with exportable design systems, instant prototypes, and code-ready outputs. The headline change is a new native design canvas powered by “design agents” that can build layouts from prompts and, crucially, pull styling context from existing websites. That matters because it turns design from a one-off mockup task into a repeatable pipeline: capture a brand’s look, generate a structured design system, iterate on screens, then export artifacts that plug into development tools.
At the core of the update is an agentic approach that blends Gemini text and image models. Stitch can spin up multiple design agents at once, letting users choose model tiers such as Gemini 3 Flash or a Pro option. Instead of producing only visual screens, Stitch now generates a design-system scaffold that includes primary and secondary color palettes, font selections, and styling details for UI elements like icons and buttons. It also introduces a “design.md” file, analogous to the agents.md convention used by coding agents, that wraps a design-system toolkit. That file can be edited graphically in a theme editor or exported as text for use in code editors and other workflows, making it easier to standardize brand guidelines across many projects.
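The video doesn’t show design.md’s exact schema, but a file in this spirit might look something like the sketch below. Every name and value here is illustrative, not Stitch’s actual output format:

```markdown
# Design System: Coastal Resort (hypothetical example)

## Colors
- Primary: #1A4D3A (deep palm green)
- Secondary: #C9A55C (sand gold)

## Typography
- Headings: Playfair Display, serif
- Body: Inter, sans-serif

## Components
- Buttons: 8px rounded corners, primary fill, secondary hover accent
- Icons: thin-line style on a 24px grid
```

The appeal of a plain-text artifact like this is that it can be version-controlled, pasted into a coding agent’s context, or edited in Stitch’s theme editor, so one source of truth drives both design and code.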
One of the most practical features is the ability to pass a URL and have Stitch extract design standards from that site. Colors, fonts, and other visual cues become context for generating the new design system and design.md documentation. In the demo, a resort website in Thailand served as the source, and Stitch quickly produced a palette and typography that matched the reference site’s “vibe.” The tool also supports iterative page generation—such as creating separate pages for items found in a navigation bar—and then wiring those pages together so users can preview navigation and make targeted edits.
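To make the URL-extraction idea concrete, here is a toy sketch of pulling rough styling cues (hex colors and font families) out of a page’s raw HTML/CSS. This is not Stitch’s actual pipeline, which presumably renders the page and inspects computed styles; the function name and heuristics are assumptions for illustration:

```python
import re
from collections import Counter

def extract_design_hints(html: str) -> dict:
    """Toy extraction of palette and typography cues from raw HTML/CSS."""
    # Hex colors (#fff or #1a4d3a), ranked by how often they appear.
    colors = Counter(
        c.lower() for c in re.findall(r"#(?:[0-9a-fA-F]{3}){1,2}\b", html)
    )
    # First family listed in each font-family declaration, quotes stripped.
    fonts = {
        m.split(",")[0].strip().strip("'\"")
        for m in re.findall(r"font-family\s*:\s*([^;}]+)", html)
    }
    return {
        "palette": [c for c, _ in colors.most_common(5)],
        "fonts": sorted(fonts),
    }
```

Run against a reference page’s markup, the top-ranked colors and declared fonts become the kind of context a design agent could use to generate a matching design system and design.md.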
Stitch’s iteration loop extends beyond static designs. Users can generate instant prototypes, move between pages, and apply AI changes to specific elements. It can also produce multiple variations of a page based on design direction (for example, shifting toward a “more holistic natural food look”), generating placeholder imagery via an image model. For interaction, Stitch adds voice-driven “vibe design,” using a Gemini Live bidirectional model so users can talk to the interface while it updates the design.
Export options tie the design work directly into building. Stitch can export to AI Studio, generating code such as a Next.js app and adding capabilities like authentication and databases via prompts. It can also work with MCP and skills for coding-agent workflows. Other export paths include Figma and React, plus instant mockup prototypes. The workflow even generates a project brief resembling a product requirements document, including the design system and palette. Stitch is positioned as a free, practical option for teams that want to move from reference sites to working UI quickly, without requiring deep design expertise, while still producing structured artifacts developers can reuse.
Cornell Notes
Google Labs’ Stitch has evolved into an agentic, Figma-like design system generator. It uses Gemini-powered design agents to create UI layouts from prompts and to extract styling context from a provided URL, producing a structured design system plus a “design.md” file. Users can iterate with instant prototypes, generate multiple screen variations, and even redesign by voice using Gemini Live. Exports connect directly to development workflows via AI Studio (code generation), with additional options like Figma and React. The practical impact is turning brand/reference inspiration into reusable design-system documentation and code-ready outputs.
What makes Stitch’s new workflow feel “Figma-like” rather than just a mockup generator?
How does Stitch use an existing website as design input?
What is “design.md,” and why does it matter for teams?
How do model choices affect the design output?
What iteration and interaction features go beyond static design?
How does Stitch move from design to code and other tools?
Review Questions
- When given a URL, what specific kinds of design information does Stitch extract and how is that used downstream?
- How does “design.md” function differently from a typical design export, and what team workflow does it enable?
- What are the main ways Stitch supports iteration—page wiring, targeted edits, variations, and voice—and how do those affect the speed of producing a usable prototype?
Key Points
1. Stitch has added an agentic design canvas that generates not just screens but a structured design system with colors, fonts, and component styling.
2. A new “design.md” file packages design-system rules for reuse across projects and for editing in both visual and text-based workflows.
3. Users can provide a URL so Stitch extracts design standards (like palette and typography) and uses them as context for generating a matching design system.
4. Stitch supports instant prototypes with wired navigation between pages, enabling rapid iteration and element-level edits.
5. Multiple design agents can run in parallel, with model choices such as Gemini 3 Flash and a Pro model affecting output behavior.
6. Voice-driven “vibe design” uses Gemini Live bidirectional interaction so spoken instructions can update the design in real time.
7. Export options connect design to development via AI Studio (code generation), with additional paths to Figma and React.