Make with Notion 2025: Shipping with Confidence in the Age of AI (Claire Vo)
Based on Notion's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Confidence in AI delivery comes from redesigning operating models and workflows, not from adopting AI tools in isolation.
Briefing
AI-native work is forcing organizations to rebuild how they ship—so confidence in the age of AI comes less from learning tools and more from redesigning the operating model, culture, and guardrails that make delivery repeatable. Claire Vo frames the shift as a “new baseline” where prototypes that once took months can now be built in minutes, turning the old constraints of headcount and capital into constraints of ideas and user problems. The practical question becomes: how do teams feel good about what they ship when the rules of product development, documentation, support, and marketing are all changing at once?
Vo argues that teams should treat AI as an end-to-end operating system rather than a set of add-on tricks. She recommends three building blocks. First is an operating model built from “toolkits” and explicit “flows”—repeatable processes that define where AI fits, what each workflow produces, and how work moves across functions. Her example “always-on team flow” starts with a product manager defining the product and generating a prototype, then routes the work into coding via Cursor using MCP. From there, the workflow doesn’t stop at deployment: it automatically generates documentation, feeds it into support documentation, and then uses an AI agent to triage incoming support requests and feed prioritization back into the system. The key move is extending workflows beyond functional silos—product, design, engineering, marketing, and support—so teams can pull the workflow “as far as you can” through the real process of shipping and operating.
Second is culture, which Vo calls the hardest part and the root of why people feel uncertain. Confidence requires permission to experiment and a higher tolerance for failure, because disappointment with AI is common and often hidden. She emphasizes “permission to experiment” as something that must be backed by finance, legal, and security through a clear “golden path” for tool adoption—who can try what, how it gets paid for, and how quickly experimentation can happen. She also pushes “build in public by default,” using frequent, high-volume sharing of both wins and failures so teams gain context and organizational confidence rather than relying on private pockets of adoption.
Third is guardrails to prevent common traps. Vo warns against denial and waiting for AI to become “ready,” arguing that teams need a forcing function now—whether through operating model changes, cultural expectations, or even a “scary email.” She also calls out “secrets,” where strong AI adopters keep progress to themselves because they lack a scaffold for sharing. Another trap is “vibes only”: assuming AI will accelerate delivery without systems. Scaling requires hard skills—knowing how to prompt, select tools, and stitch workflows together—so teams can build repeatable processes instead of relying on sound bites.
Ultimately, Vo positions AI-native teams as a new competitive ground game. Organizations should craft an AI stack and strategy, design an AI-native version of their team, and learn through hands-on practice. Her closing challenges are concrete: design the AI-native team on paper and work backward to a plan, maintain a weekly personal anti-to-do list to reduce burnout and increase execution, and—crucially—protect the fun of creation and learning as a scalable cultural asset.
Cornell Notes
Confidence in the age of AI comes from rebuilding how work gets done, not from collecting AI tools. Claire Vo recommends an AI-native operating model built from explicit “flows” (repeatable end-to-end processes) and “toolkits” that define where AI fits across functions like product, engineering, documentation, support, and marketing. Culture is the hardest lever: teams need permission to experiment, higher risk tolerance for learning, and “build in public” sharing of both successes and failures. Guardrails matter to avoid denial, secretive pockets of adoption, and “vibes only” approaches that don’t replace systems with hard skills. The payoff is faster, higher-quality shipping and a competitive advantage as AI-native teams become the norm.
What does “toolkits and flows” mean in practice, and why does Vo insist teams start there instead of with tools?
How does Vo argue workflows should extend beyond functional roles?
What cultural changes does Vo say are required for confidence, and what makes them hard?
What does “build in public by default” look like, and why does it increase organizational confidence?
Which guardrails does Vo recommend to avoid common AI adoption traps?
How does Vo connect AI-native operating models to competition and learning?
Review Questions
- What are the differences between “learning tools” and “defining flows,” and how does Vo’s always-on example demonstrate that distinction?
- Why does Vo treat culture as the hardest lever for AI adoption, and what structural support does she say must come from finance, legal, and security?
- Which three traps does Vo warn against, and what forcing functions or scaffolds does she recommend to counter each one?
Key Points
1. Confidence in AI delivery comes from redesigning operating models and workflows, not from adopting AI tools in isolation.
2. Define repeatable end-to-end flows that specify how AI moves work across functions—from prototype to code to documentation to support to prioritization.
3. Extend workflows beyond role boundaries; treat AI as part of the organization’s full shipping and operating process, not a role-specific shortcut.
4. Build a culture of experimentation by increasing risk tolerance for learning and making both failure and success visible through frequent sharing.
5. Create a “golden path” for AI tool adoption with finance, legal, and security so experimentation is allowed, paid for, and accelerated.
6. Avoid denial and waiting by adding forcing functions that push teams to adopt and iterate now.
7. Replace “vibes only” with hard skills and systems: teams need practical prompting, tool selection, and workflow integration to scale.