
Make with Notion 2025: Shipping with Confidence in the Age of AI (Claire Vo)

Notion · 6 min read

Based on Notion's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Confidence in AI delivery comes from redesigning operating models and workflows, not from adopting AI tools in isolation.

Briefing

AI-native work is forcing organizations to rebuild how they ship—so confidence in the age of AI comes less from learning tools and more from redesigning the operating model, culture, and guardrails that make delivery repeatable. Claire Vo frames the shift as a “new baseline” where prototypes that once took months can now be built in minutes, turning the old constraints of headcount and capital into constraints of ideas and user problems. The practical question becomes: how do teams feel good about what they ship when the rules of product development, documentation, support, and marketing are all changing at once?

Vo argues that teams should treat AI as an end-to-end operating system rather than a set of add-on tricks. She recommends three building blocks. First is an operating model built from “toolkits” and explicit “flows”—repeatable processes that define where AI fits, what each workflow produces, and how work moves across functions. Her example “always-on team flow” starts with a product manager defining the product and generating a prototype, then routes the work into coding via Cursor using MCP. From there, the workflow doesn’t stop at deployment: it automatically generates documentation, feeds that into support documentation, and then uses an AI agent to triage incoming support requests and feed prioritization back into the system. The key move is extending workflows beyond functional silos—product, design, engineering, marketing, and support—so teams can pull the workflow “as far as you can” through the real process of shipping and operating.
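To make the shape of that flow concrete, here is a minimal sketch that models it as explicit, repeatable stages. Cursor and MCP are named in the talk, but every type, function, and string below is an illustrative assumption, not Vo’s actual implementation.

```python
# A hedged sketch of the "always-on team flow" as explicit pipeline stages.
# Every function body is a stand-in for a real integration (e.g., routing
# code generation through Cursor via MCP).
from dataclasses import dataclass


@dataclass
class WorkItem:
    spec: str               # PM-written product definition
    prototype: str = ""     # generated prototype artifact
    code: str = ""          # implementation (Cursor via MCP in the talk)
    docs: str = ""          # auto-generated documentation
    support_docs: str = ""  # docs adapted for the support team


def define_and_prototype(item: WorkItem) -> WorkItem:
    # Stage 1: the PM defines the product and generates a prototype.
    item.prototype = f"prototype from: {item.spec}"
    return item


def implement(item: WorkItem) -> WorkItem:
    # Stage 2: route the prototype into coding.
    item.code = f"code from: {item.prototype}"
    return item


def document(item: WorkItem) -> WorkItem:
    # Stage 3: the flow continues past deployment; documentation is
    # generated automatically and seeds the support documentation.
    item.docs = f"docs for: {item.code}"
    item.support_docs = f"support guide from: {item.docs}"
    return item


if __name__ == "__main__":
    item = WorkItem(spec="shared calendar feature")
    for stage in (define_and_prototype, implement, document):
        item = stage(item)
    print(item.support_docs)
```

Writing the flow down this way gives each stage a defined input and output, which is what makes the process repeatable rather than ad hoc.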

Second is culture, which Vo calls the hardest part and the root of why people feel uncertain. Confidence requires permission to experiment and a higher tolerance for failure, because disappointment with AI is common and often hidden. She emphasizes “permission to experiment” as something that must be backed by finance, legal, and security through a clear “golden path” for tool adoption—who can try what, how it gets paid for, and how quickly experimentation can happen. She also pushes “build in public by default,” using frequent, high-volume sharing of both wins and failures so teams gain context and organizational confidence rather than relying on private pockets of adoption.

Third is guardrails to prevent common traps. Vo warns against denial and waiting for AI to become “ready,” arguing that teams need a forcing function now—whether through operating model changes, cultural expectations, or even a “scary email.” She also calls out “secrets,” where strong AI adopters keep progress to themselves because they lack a scaffold for sharing. Another trap is “vibes only”: assuming AI will accelerate delivery without systems. Scaling requires hard skills—knowing how to prompt, select tools, and stitch workflows together—so teams can build repeatable processes instead of relying on sound bites.

Ultimately, Vo positions AI-native teams as a new competitive ground game. Organizations should craft an AI stack and strategy, design an AI-native version of their team, and learn through hands-on practice. Her closing challenges are concrete: design the AI-native team on paper and work backward to a plan, maintain a weekly personal anti-to-do list to reduce burnout and increase execution, and—crucially—protect the fun of creation and learning as a scalable cultural asset.

Cornell Notes

Confidence in the age of AI comes from rebuilding how work gets done, not from collecting AI tools. Claire Vo recommends an AI-native operating model built from explicit “flows” (repeatable end-to-end processes) and “toolkits” that define where AI fits across functions like product, engineering, documentation, support, and marketing. Culture is the hardest lever: teams need permission to experiment, higher risk tolerance for learning, and “build in public” sharing of both successes and failures. Guardrails matter to avoid denial, secretive pockets of adoption, and “vibes only” approaches that substitute enthusiasm for systems and hard skills. The payoff is faster, higher-quality shipping and a competitive advantage as AI-native teams become the norm.

What does “toolkits and flows” mean in practice, and why does Vo insist teams start there instead of with tools?

Vo draws a line between learning tools and defining repeatable processes. Teams should first map the playbooks they already run—daily, weekly, monthly—and then specify how AI slots into those flows. Her “always-on team flow” illustrates the idea: a PM defines the product and generates a prototype, then the work moves into coding via Cursor using MCP. After deployment, the workflow continues automatically into documentation, then support documentation, where an AI agent triages requests and feeds prioritization back into the system. The point is that AI becomes part of the organization’s operating rhythm, not an ad hoc add-on.
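The tail of that loop, where support triage feeds prioritization, can be sketched the same way. The keyword classifier and volume-based ranking below are placeholders; the talk does not specify how the agent scores requests, so this is a hypothetical illustration.

```python
# Hypothetical stand-in for the AI triage agent: group raw support
# tickets by theme and rank themes by volume so they can feed the next
# planning cycle. A real agent would classify with an LLM, not keywords.
from collections import Counter


def triage(tickets: list[str]) -> list[tuple[str, int]]:
    themes: Counter[str] = Counter()
    for ticket in tickets:
        # Placeholder classifier; replace with an agent/LLM call.
        theme = "reliability" if "crash" in ticket.lower() else "feature"
        themes[theme] += 1
    return themes.most_common()  # highest-volume themes first


print(triage(["Crash on save", "Crash on login", "Add dark mode"]))
# -> [('reliability', 2), ('feature', 1)]
```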

How does Vo argue workflows should extend beyond functional roles?

Vo criticizes stopping at a functional slice—e.g., “how can a designer use AI?” or “how can an engineer use AI?” Instead, she urges teams to pull workflows through the entire process of shipping and operating. In her example, the flow doesn’t end at code deployment; it generates docs, routes content into support, and uses support inputs to prioritize future work. That end-to-end extension turns AI into an organizational system rather than a role-specific trick.

What cultural changes does Vo say are required for confidence, and what makes them hard?

Vo says culture drives confidence because people often feel uncertain when experimentation is discouraged. She calls permission to experiment the first requirement, paired with a higher tolerance for failure and a “novice mindset” where teams learn openly. The hard part is that culture is entrenched and difficult to change without structural support. She adds that finance, legal, and security must provide a “golden path” for tool adoption—who can try tools, how they’re paid for, and how quickly experimentation can proceed—otherwise experimentation stays blocked.

What does “build in public by default” look like, and why does it increase organizational confidence?

Vo frames build-in-public as frequent, high-volume sharing of both failures and successes through shared channels. She argues this creates context about what’s working and what isn’t, reducing hidden uncertainty. She also contrasts it with the “told-you-so” mindset: instead of naysaying AI attempts, teams should share results so others can learn. The cultural mechanism is that visibility turns individual experiments into collective learning.

Which guardrails does Vo recommend to avoid common AI adoption traps?

Vo highlights three traps: denial/waiting, secrets, and “vibes only.” Denial is the riskiest move, so teams need a forcing function—through operating model, culture, or external pressure—to prevent “18 months later” regret. Secrets happen when strong adopters keep work private because there’s no scaffold for sharing; Vo says frameworks must be intentional so knowledge gets seeded into the organization. Vibes only is the belief that AI acceleration will happen without systems; Vo argues scaling requires hard skills to prompt, choose tools, and connect workflows into repeatable processes.

How does Vo connect AI-native operating models to competition and learning?

Vo argues organizations will increasingly compete on operating model “ground game,” not just product ideas. Teams should design an AI stack and strategy, craft an AI-native version of their team, and work backward into a plan. She also emphasizes hands-on learning: executives and teams should practice prompting, tool selection, and workflow building rather than relying on lectures or generic predictions.

Review Questions

  1. What are the differences between “learning tools” and “defining flows,” and how does Vo’s always-on example demonstrate that distinction?
  2. Why does Vo treat culture as the hardest lever for AI adoption, and what structural support does she say must come from finance, legal, and security?
  3. Which three traps does Vo warn against, and what forcing functions or scaffolds does she recommend to counter each one?

Key Points

  1. Confidence in AI delivery comes from redesigning operating models and workflows, not from adopting AI tools in isolation.

  2. Define repeatable end-to-end flows that specify how AI moves work across functions—from prototype to code to documentation to support to prioritization.

  3. Extend workflows beyond role boundaries; treat AI as part of the organization’s full shipping and operating process, not a role-specific shortcut.

  4. Build culture for experimentation by increasing risk tolerance for learning and making failure and success visible through frequent sharing.

  5. Create a “golden path” for AI tool adoption with finance, legal, and security so experimentation is allowed, paid for, and accelerated.

  6. Avoid denial and waiting by adding forcing functions that push teams to adopt and iterate now.

  7. Replace “vibes only” with hard skills and systems: teams need practical prompting, tool selection, and workflow integration to scale.

Highlights

  • Vo’s always-on flow treats AI as an end-to-end operating system: prototype generation → coding via Cursor using MCP → automatic documentation → support triage by an AI agent → prioritized work back into the pipeline.
  • Culture is the confidence bottleneck: permission to experiment must be backed by finance, legal, and security through a clear adoption “golden path.”
  • The biggest adoption traps are denial, secretive pockets of progress, and “vibes only”—all of which can be countered with forcing functions, sharing scaffolds, and hard-skill workflow building.

Topics

  • AI-Native Operating Model
  • Workflow Automation
  • Culture and Experimentation
  • Guardrails and Adoption
  • Hard Skills for AI
