
Opal - Google Labs Killer NEW App

Sam Witteveen · 5 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Opal is a Google Labs no-code workflow builder that converts natural-language requests into chained LLM steps for prototyping mini apps.

Briefing

Google Labs’ Opal is a no-code workflow builder aimed at turning natural-language requests into working LLM “mini apps,” with built-in steps for web research, text generation, and media creation. The core shift is that the boundary between what people configure and what they code is getting thinner: Opal chains prompts and model/tool calls into a reusable sequence, then lets users inspect and remix each step.

Opal arrives as a public preview from Google Labs, the team behind products such as NotebookLM. The transcript traces how similar tools emerged from the broader LLM ecosystem, moving from early frameworks like LangChain toward more agentic, multi-step systems, and argues that Opal is Google’s latest entry into the same category as n8n and Lindy. Unlike fully “hardcore agent” platforms, Opal focuses on prototyping: users can quickly assemble workflows, test them, and later translate the underlying prompts into code if they want.

The workflow process starts either by remixing examples from Opal’s gallery or by describing what to build. Opal then maps the request into a set of steps, showing intermediate outputs along the way. In the demo, a “blog post generator” workflow runs through web search, outline creation, full post writing, and banner image generation. A console view reveals which models and tools were used (for example, research calling Gemini 2.5 Flash and subsequent generation using Gemini 2.0 Flash), along with the retrieved web pages and the produced outline.
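The transcript notes that developers can later translate such a workflow into code. As a rough illustration of the chaining idea (not Opal’s actual internals; the `Step` structure, `call_model` stub, and prompt templates here are all hypothetical), the research → outline → post sequence could be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    model: str       # e.g. "gemini-2.5-flash"
    template: str    # prompt with {placeholders} filled from earlier outputs

def call_model(model: str, prompt: str) -> str:
    # Stub standing in for a real LLM call; returns a traceable string.
    return f"[{model}] {prompt}"

def run_workflow(steps: list[Step], inputs: dict) -> dict:
    # Each step's output is stored under the step's name so later
    # templates can reference it -- mirroring Opal's chained prompts.
    context = dict(inputs)
    for step in steps:
        context[step.name] = call_model(step.model, step.template.format(**context))
    return context

blog_workflow = [
    Step("research", "gemini-2.5-flash", "Search the web for: {topic}"),
    Step("outline", "gemini-2.0-flash", "Write an outline from: {research}"),
    Step("post", "gemini-2.0-flash", "Write the full post from: {outline}"),
]

result = run_workflow(blog_workflow, {"topic": "AI agents"})
```

Because every intermediate output lands in `result`, the sketch also mirrors Opal’s inspectability: each step’s output can be examined or rerun independently.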

A key feature is step-level editing. The user can change which model powers a step (switching between Gemini 2.5 Flash and Gemini 2.5 Pro, for instance) and swap image generation backends. The demo specifically swaps the banner step from a Gemini 2.0 Flash image generator to Imagen 4, then adjusts the prompt wiring so the blog topic is embedded in the image prompt. Opal also supports additional user inputs: the user adds a “reader persona” input, threads it into the research and writing prompts, and reruns the workflow to produce a blog post tailored to an intended audience; here, an IT worker focused on automation.
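If a step is represented as plain data (name, model, prompt), the edits described above reduce to field updates. The following sketch is purely illustrative; the step structure and model identifiers are hypothetical stand-ins, not Opal’s actual format:

```python
# A banner-generation step as editable data (hypothetical structure).
banner_step = {
    "name": "banner_image",
    "model": "gemini-2.0-flash-image",   # original backend in the demo
    "prompt": "Generate a banner image for a blog post.",
}

# Swap the backend to Imagen 4, as in the demo...
banner_step["model"] = "imagen-4"
# ...and wire the blog topic into the image prompt.
banner_step["prompt"] = "Generate a banner for a blog post about {topic}."

rendered = banner_step["prompt"].format(topic="workflow automation")
```

The point is that a no-code editor and a code-based pipeline can share the same underlying representation: both kinds of edit change the same two fields.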

Beyond remixing, Opal can generate workflows from scratch. The transcript describes creating a literature-review tool for arXiv papers, where Opal asks for an arXiv paper URL and a literature review topic, then builds a node graph to fetch and search related material. The user can insert further steps, such as using “deep research” with Gemini 2.5 Flash to extract author information, illustrating how Opal can be extended into more complex pipelines.

The preview is US-only, with the transcript noting that a VPN may be needed to access it. Still, the takeaway is clear: Opal packages Google’s model and tool ecosystem into a visual, inspectable workflow system that non-coders can use immediately, while developers can later extract prompts and implement the same logic in code. The transcript frames this as an early version likely to expand over time, similar to how NotebookLM gained features after launch.

Cornell Notes

Opal from Google Labs is a no-code workflow builder that turns a plain-language request into a chain of LLM steps—research, writing, and media generation—so users can prototype “mini apps” without coding. Workflows can be remixed from a gallery or generated from scratch, then edited at the step level by changing models (e.g., Gemini 2.5 Flash vs Gemini 2.5 Pro) and tools (e.g., swapping image generation to Imagen 4). Opal exposes intermediate outputs and shows which models were used, making it easier to debug and refine results. Adding extra inputs like a “reader persona” lets the same workflow produce tailored outputs for different audiences. The preview is US-only, but it’s positioned as a bridge between no-code experimentation and later code-based implementation.

What makes Opal different from earlier LLM app approaches like simple prompt wrappers or early frameworks?

Opal focuses on multi-step, inspectable workflows that chain together prompts and tool/model calls. Instead of only sending one prompt to an LLM, it maps a request into a sequence—such as web search → outline writing → full blog post writing → banner image generation—then shows intermediate outputs. That workflow graph can be saved and reused, and each step can be edited (including which model powers it).

How does Opal handle customization—both model choice and prompt wiring?

Customization happens at the step level. In the demo, research and writing steps use different Gemini variants (research used Gemini 2.5 Flash, while writing used Gemini 2.0 Flash). The user can switch models for a step (e.g., Gemini 2.5 Flash to Gemini 2.5 Pro) and change the image generator backend. Prompt wiring is also editable: when switching to Imagen 4, the user updates the image prompt so the blog topic is passed into the generator, ensuring the banner contains the topic text.

What role do additional user inputs play in producing tailored outputs?

Opal supports multiple user inputs that can be threaded through different steps. The demo adds a “reader persona” input, then wires it into both the research step and the blog-writing prompt. When rerun with an IT-automation persona, the resulting outline and final post shift to match that audience, demonstrating how the same workflow can generate different outputs based on user-provided context.
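The threading described above amounts to making the persona a named placeholder that appears in more than one prompt template. A minimal illustration (the template wording is invented, not taken from the demo):

```python
# A persona placeholder threaded through two hypothetical prompt templates,
# mirroring how the demo wires it into both research and writing.
research_prompt = "Find sources on {topic} that matter to {persona}."
writing_prompt = "Write a blog post on {topic} for {persona}, based on: {research}"

inputs = {
    "topic": "workflow automation",
    "persona": "an IT worker focused on automation",
}

research = research_prompt.format(**inputs)
post = writing_prompt.format(research=research, **inputs)
```

Changing only `inputs["persona"]` and rerunning produces a differently targeted post from the same workflow, which is exactly the behavior the demo shows.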

How does Opal support building workflows from scratch, not just remixing templates?

Users can describe what they want to build, and Opal generates a node graph with required inputs. In the literature-review example, Opal creates a workflow that asks for an arXiv paper URL and a literature review topic, then performs searching and related-paper retrieval. The user can then add or modify nodes—such as inserting a “deep research” step using Gemini 2.5 Flash to extract author information.
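Conceptually, a generated workflow is just an editable node list, and extending it means inserting a node at the right position. The node names and fields below are illustrative, not Opal’s actual representation:

```python
# A literature-review workflow as a hypothetical editable node list.
workflow = [
    {"name": "inputs", "desc": "arXiv paper URL + literature review topic"},
    {"name": "fetch_paper", "desc": "retrieve the paper"},
    {"name": "find_related", "desc": "search for related papers"},
    {"name": "write_review", "desc": "draft the literature review"},
]

def insert_after(nodes, anchor, new_node):
    # Return a copy of the node list with new_node placed right after
    # the node named `anchor`.
    i = next(idx for idx, n in enumerate(nodes) if n["name"] == anchor)
    return nodes[: i + 1] + [new_node] + nodes[i + 1 :]

# Extend the pipeline with a "deep research" node, as in the demo.
extended = insert_after(
    workflow,
    "find_related",
    {"name": "deep_research", "desc": "extract author information"},
)
```

Returning a new list rather than mutating in place keeps the original workflow reusable, which matches Opal’s remix-and-extend style of editing.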

What built-in capabilities does Opal bundle for prototyping LLM apps?

Opal bundles common agentic workflow components: web search and page retrieval, text generation (outline and blog post writing), and media generation (banner images via image models like Imagen 4). It also supports audio-related generation in principle (the transcript mentions generating speech/audio for podcast-style outputs), showing that workflows can extend beyond text-only pipelines.

Review Questions

  1. How does Opal’s step-level editing change the workflow outcome compared with only changing a single top-level prompt?
  2. In the blog post demo, which models were used for research versus writing, and how did the user verify that in Opal?
  3. What inputs and wiring changes were needed to tailor the blog post to a specific reader persona?

Key Points

  1. Opal is a Google Labs no-code workflow builder that converts natural-language requests into chained LLM steps for prototyping mini apps.

  2. Workflows can be remixed from a gallery or generated from scratch, then saved and reused.

  3. Opal exposes intermediate outputs and shows which models/tools powered each step, making debugging and iteration faster.

  4. Step-level editing lets users swap model choices (e.g., Gemini 2.5 Flash vs Gemini 2.5 Pro) and replace tools such as image generation backends.

  5. Adding extra inputs like a “reader persona” and wiring them into research and writing prompts enables audience-specific outputs.

  6. Opal bundles common workflow components—web research, outline/post generation, and image creation—so users can build more than text-only assistants.

  7. The preview is US-only, and access may require a VPN alongside a Google account.

Highlights

  • Opal turns a request like “make a blog post generator” into a multi-step workflow that performs web research, writes an outline, drafts the post, and generates a banner image.
  • A console view reveals the exact model usage per step—research used Gemini 2.5 Flash while writing used Gemini 2.0 Flash in the demo.
  • Switching the image generator to Imagen 4 required updating prompt wiring so the topic text flows into the image prompt.
  • Adding a “reader persona” input and threading it through research and writing produced a blog post tailored to an IT automation audience.
  • Opal can generate a literature-review workflow from a description, then users can extend it with additional nodes like deep research for author extraction.

Topics

  • Opal
  • Google Labs
  • No-Code LLM Workflows
  • Gemini Models
  • Imagen 4