
Google Stitch Just Became an AI Figma (And It's Free)

Sam Witteveen · 5 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Stitch has added an agentic design canvas that generates not just screens but a structured design system with colors, fonts, and component styling.

Briefing

Google Labs’ Stitch has shifted from simple screenshot-to-design experiments into an agentic, Figma-like workflow for generating full UI systems—complete with exportable design systems, instant prototypes, and code-ready outputs. The headline change is a new native design canvas powered by “design agents” that can build layouts from prompts and, crucially, pull styling context from existing websites. That matters because it turns design from a one-off mockup task into a repeatable pipeline: capture a brand’s look, generate a structured design system, iterate on screens, then export artifacts that plug into development tools.

At the core of the update is an agentic approach that blends the capabilities of Gemini text and image models. Stitch can spin up multiple design agents at once, letting users choose model tiers such as Gemini 3 Flash and a Pro option. Instead of producing only visual screens, Stitch now generates a design system scaffold that includes primary and secondary color palettes, font selections, and styling details for UI elements like icons and buttons. It also introduces a “design.md” file—analogous to coding agents’ agents.md concept—that wraps a design-system toolkit. That file can be edited graphically in a theme editor and also exported as text for use in code editors or other workflows, making it easier to standardize brand guidelines across many projects.
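The video shows design.md only at a high level, so its exact schema isn't known; a hypothetical file in that spirit—with every name, color, and value invented purely for illustration—might look like this:

```markdown
# Design System — Resort Brand (illustrative)

## Palette
- Primary: #1F2D27 (deep jungle green)
- Secondary: #C9A227 (muted gold)

## Typography
- Headings: Playfair Display, serif
- Body: Lato, sans-serif

## Components
- Buttons: 8px rounded corners, primary fill, secondary hover state
- Icons: thin-line style, secondary color
```

Because it is plain text, a file like this can be versioned alongside code and handed to a coding agent as styling context, which is what makes the agents.md analogy apt.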

One of the most practical features is the ability to pass a URL and have Stitch extract design standards from that site. Colors, fonts, and other visual cues become context for generating the new design system and design.md documentation. In the demo, a resort website in Thailand served as the source, and Stitch quickly produced a palette and typography that matched the reference site’s “vibe.” The tool also supports iterative page generation—such as creating separate pages for items found in a navigation bar—and then wiring those pages together so users can preview navigation and make targeted edits.
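As a rough illustration of what "extracting design standards from a URL" can mean in practice—this is a simplified sketch, not Stitch's actual implementation, which presumably also weighs usage frequency and computed styles—a few lines of Python can pull candidate hex colors and font families out of a site's raw CSS:

```python
import re


def extract_design_tokens(css_text: str) -> dict:
    """Pull candidate design tokens (hex colors, font families) from raw CSS.

    A toy stand-in for the kind of extraction a tool like Stitch performs
    when given a reference URL.
    """
    # Collect unique 6-digit hex colors, e.g. "#c9a227".
    colors = sorted(set(re.findall(r"#[0-9a-fA-F]{6}\b", css_text)))
    fonts = []
    for decl in re.findall(r"font-family\s*:\s*([^;}]+)", css_text):
        # Keep only the first (preferred) family in each declaration.
        family = decl.split(",")[0].strip().strip("'\"")
        if family not in fonts:
            fonts.append(family)
    return {"colors": colors, "fonts": fonts}


sample_css = """
body { font-family: "Playfair Display", serif; color: #1f2d27; }
a { color: #c9a227; }
h1 { font-family: Lato, sans-serif; }
"""
print(extract_design_tokens(sample_css))
# {'colors': ['#1f2d27', '#c9a227'], 'fonts': ['Playfair Display', 'Lato']}
```

Tokens like these would then serve as prompt context for generating the matching design system and design.md documentation.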

Stitch’s iteration loop extends beyond static designs. Users can generate instant prototypes, move between pages, and apply AI changes to specific elements. It can also produce multiple variations of a page based on design direction (for example, shifting toward a “more holistic natural food look”), generating placeholder imagery via an image model. For interaction, Stitch adds voice-driven “vibe design,” using a Gemini Live bidirectional model so users can talk to the interface while it updates the design.

Export options tie the design work directly into building. Stitch can export to AI Studio, generating code such as a Next.js app and adding components like authentication and databases via prompts. It can also work with MCP and skills for coding-agent workflows. Other export paths include Figma and React, plus instant mockup prototypes. The workflow even generates a project brief resembling a product requirements document that includes the design system and palette. Stitch is positioned as a free, practical alternative for teams that want to move from reference sites to working UI quickly—without requiring deep design expertise—while still producing structured artifacts developers can reuse.

Cornell Notes

Google Labs’ Stitch has evolved into an agentic, Figma-like design system generator. It uses Gemini-powered design agents to create UI layouts from prompts and to extract styling context from a provided URL, producing a structured design system plus a “design.md” file. Users can iterate with instant prototypes, generate multiple screen variations, and even redesign by voice using Gemini Live. Exports connect directly to development workflows via AI Studio (code generation), with additional options like Figma and React. The practical impact is turning brand/reference inspiration into reusable design-system documentation and code-ready outputs.

What makes Stitch’s new workflow feel “Figma-like” rather than just a mockup generator?

Stitch now includes a native design canvas and an agentic design system workflow: it generates a structured UI design plus a design system (colors, fonts, and component styling), wires pages together for navigation, and supports instant prototypes. Instead of only producing screens, it produces a reusable design system artifact via a “design.md” file that can be edited visually and exported for other tools.

How does Stitch use an existing website as design input?

Users can pass a URL, and Stitch pulls design standards from that site—such as primary/secondary color choices and fonts—and uses those as context for generating the new design system and design.md. In the demo, a Thailand resort site was used as the reference, and Stitch quickly matched the palette and typography to the site’s look.

What is “design.md,” and why does it matter for teams?

“design.md” wraps a design-system toolkit and acts like a portable design specification. It can be edited graphically through a theme editor, but it can also be exported as text so teams can reuse the same design rules across multiple designs and projects. That makes it easier to standardize brand guidelines and keep design consistent over time.

How do model choices affect the design output?

Stitch lets users select different model tiers for the design agents, including Gemini 3 Flash and a Pro model. The demo suggests Flash can produce strong results quickly, while Pro may behave more deterministically. The practical takeaway is that users can experiment with model selection to balance speed, variability, and consistency.

What iteration and interaction features go beyond static design?

Stitch supports instant prototypes where users can preview navigation between pages and make targeted edits. It can generate multiple variations of a page based on new design direction (e.g., a “holistic natural food look”), producing different screen options and placeholder imagery. It also adds voice-driven “vibe design” using a Gemini Live bidirectional model, enabling spoken instructions that update the interface as changes are made.

How does Stitch move from design to code and other tools?

Stitch exports directly into development workflows. It can export to AI Studio, where prompts can turn Stitch’s design into code (including options like adding authentication and a database). It can also integrate with MCP and skills for coding-agent workflows, and it retains export paths to Figma and React, plus instant mockup prototypes. AI Studio can receive both HTML and image assets and generate a fuller implementation based on additional prompts.

Review Questions

  1. When given a URL, what specific kinds of design information does Stitch extract and how is that used downstream?
  2. How does “design.md” function differently from a typical design export, and what team workflow does it enable?
  3. What are the main ways Stitch supports iteration—page wiring, targeted edits, variations, and voice—and how do those affect the speed of producing a usable prototype?

Key Points

  1. Stitch has added an agentic design canvas that generates not just screens but a structured design system with colors, fonts, and component styling.

  2. A new “design.md” file packages design-system rules for reuse across projects and for editing in both visual and text-based workflows.

  3. Users can provide a URL so Stitch extracts design standards (like palette and typography) and uses them as context for generating a matching design system.

  4. Stitch supports instant prototypes with wired navigation between pages, enabling rapid iteration and element-level edits.

  5. Multiple design agents can run in parallel, with model choices such as Gemini 3 Flash and a Pro model affecting output behavior.

  6. Voice-driven “vibe design” uses Gemini Live bidirectional interaction so spoken instructions can update the design in real time.

  7. Export options connect design to development via AI Studio (code generation), with additional paths to Figma and React.

Highlights

Stitch can generate a full design system from a single reference URL, turning existing brand/style sites into structured design context.
The “design.md” artifact bridges design and engineering by packaging theme and design-system details in a reusable format.
Voice-driven “vibe design” brings Gemini Live bidirectional interaction into the design workflow, enabling spoken iteration.
Exporting to AI Studio can translate Stitch’s generated UI into code workflows, including prompts for authentication and databases.
