
OpenAI just destroyed 100 startups… yours is next

David Ondrej · 5 min read

Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

OpenAI’s platform strategy is framed as turning ChatGPT into a distribution layer for third-party apps via the apps SDK and in-chat discovery.

Briefing

OpenAI’s latest push is aimed at turning ChatGPT from a chat interface into the default place where people work, shop, and build—by bundling an “apps SDK” for third-party apps, an “agent kit” for AI workflows, and “chatkit” for embedding interactive UI inside other websites and apps. The stakes are economic as well as technical: the strategy centers on making OpenAI-powered experiences discoverable inside ChatGPT (and usable everywhere via SDKs), then replacing fast-growing startups with near-identical offerings once they hit product-market fit.

The core mechanism described is a two-step pattern. First, OpenAI makes its models broadly available through the OpenAI API, enabling startups to build features and services on top of OpenAI-powered capabilities. Second, OpenAI monitors what gains traction—especially products showing strong revenue growth and rapid adoption—then builds its own version of the same service. The framing is blunt: startups that succeed become targets, because OpenAI can move quickly from “model provider” to “product competitor,” leveraging distribution inside ChatGPT.
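
The first step of that loop is mundane in practice: a startup's "product" is often a thin layer over a single API call. A minimal sketch of such a layer (the model name, prompt, and helper function are illustrative; the commented-out call follows the official openai-python SDK's chat-completions interface):

```python
# Sketch of the thin wrapper many startups build on the OpenAI API.
# build_request is a hypothetical helper; only the payload shape and the
# commented SDK call reflect the real openai-python interface.

def build_request(user_text: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the payload a startup's feature layer would send."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": user_text},
        ],
    }

# With an API key configured, the same payload feeds the SDK directly:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_request("..."))
#   print(resp.choices[0].message.content)

payload = build_request("OpenAI's DevDay announcements, in brief.")
print(payload["model"])
```

The point of the pattern is that the differentiation lives entirely in the prompt and surrounding product, which is exactly what a platform owner with distribution can replicate.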

Distribution is where the new tooling matters most. The apps SDK is presented as a full-stack way to connect data, trigger actions, and render interactive UI, with the explicit promise that apps built with it can reach “hundreds of millions” of ChatGPT users. Apps become discoverable directly in conversation—users can ask for an app by name, and ChatGPT can recommend relevant apps when someone requests a task (for example, asking for a workflow and then having “Figma” turn a sketch into a diagram). The implication is that users won’t need to leave ChatGPT to complete tasks like booking travel; the “chat” becomes the operating surface.
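
The in-chat discovery described above (ask for an app by name, or get one recommended from a task description) can be pictured as a simple lookup. The registry, app names, and matching rule below are toy assumptions for illustration, not the apps SDK's actual mechanism:

```python
# Illustrative only: a toy registry showing how in-chat app discovery
# could resolve an explicit app mention or a task-keyword match.
APPS = {
    "figma": {"keywords": {"diagram", "sketch", "design"}},
    "booking": {"keywords": {"travel", "hotel", "flight"}},
}

def discover(query: str) -> list[str]:
    """Return app names matched by explicit mention or task keyword."""
    words = set(query.lower().split())
    hits = [name for name in APPS if name in words]  # asked for by name
    hits += [name for name, meta in APPS.items()     # recommended by task
             if meta["keywords"] & words and name not in hits]
    return hits

print(discover("turn this sketch into a diagram"))  # → ['figma']
```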

Agent kit extends that platform ambition by making it easier to build agentic workflows—systems that can plan steps, call tools, and run through multi-stage tasks. The transcript emphasizes speed and scale: agent kit is described as built in six weeks, and it’s positioned as a canvas for moving from prototype to production. It also ties into other OpenAI components, including chatkit widgets (custom UI elements inside chat experiences) and Codex, OpenAI’s coding agent. Codex has been upgraded again and now runs on a GPT-5-Codex model trained for coding and agentic work, with features like code refactoring, code review, and dynamic adjustment of “thinking time.”
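
At its core, the kind of multi-stage workflow agent kit targets reduces to a plan of tool-calling steps. The loop below is a toy framing of that idea; the plan schema and tools are invented for illustration and are not agent kit's API:

```python
# Toy agentic loop: execute a multi-stage plan where each step names a
# tool and its arguments. Plan format and tools are illustrative only.
def run_agent(plan: list[dict], tools: dict) -> list:
    """Run each step of the plan through its named tool, in order."""
    results = []
    for step in plan:
        tool = tools[step["tool"]]
        results.append(tool(**step.get("args", {})))
    return results

tools = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda text: text[:20] + "...",
}
plan = [
    {"tool": "search", "args": {"q": "agent kit"}},
    {"tool": "summarize", "args": {"text": "A long report on agentic workflows."}},
]
print(run_agent(plan, tools))
```

A real agent would generate the plan dynamically and feed each result back into the model; the fixed plan here just makes the tool-orchestration structure visible.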

The transcript also argues that OpenAI’s momentum will reshape labor and entertainment. It lists roles likely to be automated—camera operators, light technicians, translators, radiologists, teachers, real estate agents, and tour guides—and claims AI-generated media will become indistinguishable from real content as models like “Sora 2” reach the API. In that environment, the advice is to pivot toward education and higher-value content that’s harder to replicate.

Finally, the transcript gets practical: it walks through using agent kit via the OpenAI platform dashboard, starting from templates like “data enrichment,” and shows how workflows are assembled from nodes (agents, tools, output formats) with guardrails, structured outputs (text or JSON), and integrations via MCP. It notes that chatkit ships open-source JavaScript UI components under the Apache 2.0 license, while core infrastructure remains proprietary. Overall, the message is that OpenAI is not just improving models—it’s building the distribution layer, the development layer, and the agent layer that together can absorb entire categories of startups and workflows.
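
The node-and-guardrail assembly from that walkthrough can be sketched as data plus a validation step. The schema below is invented for illustration; only the concepts (nodes, guardrails, text-or-JSON outputs, an MCP node) come from the transcript:

```python
import json

# Illustrative node-graph sketch of the workflow-builder idea: nodes
# declare a kind and output format, and a validation step enforces the
# declared format. This is not AgentKit's real schema.
workflow = {
    "template": "data enrichment",  # template name from the walkthrough
    "nodes": [
        {"kind": "agent", "name": "enricher", "output_format": "json"},
        {"kind": "tool", "name": "mcp", "server": "example-connector"},
    ],
    "guardrails": ["no_pii"],
}

def validate(output: str, node: dict):
    """Apply the node's declared output format before passing it on."""
    return json.loads(output) if node["output_format"] == "json" else output

node = workflow["nodes"][0]
print(validate('{"company": "Acme", "employees": 120}', node))
```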

Cornell Notes

OpenAI’s platform push aims to make ChatGPT the default “operating surface” for apps and AI agents, not just a text chatbot. The transcript describes a repeatable competitive pattern: OpenAI enables startups via the OpenAI API, watches which products hit product-market fit with fast revenue growth, then builds competing versions. The apps SDK is framed as a full-stack way to create interactive apps discoverable inside ChatGPT, while agent kit provides a visual canvas for building agentic workflows that can move from prototype to production. Codex upgrades (including a GPT-5-Codex model) and chatkit widgets support richer, tool-using experiences. The practical takeaway is that these tools lower the barrier to building and deploying agents, even for non-developers, while increasing pressure on startups that rely on OpenAI-powered differentiation.

What competitive strategy is described for how OpenAI can “replace” startups?

The transcript outlines a two-step loop. OpenAI makes its models available through the OpenAI API, letting startups build features and services on top of OpenAI capabilities. OpenAI then monitors what others build—especially products that reach product-market fit and show the fastest revenue growth—and builds its own version of the same service, using ChatGPT distribution to capture users.

How do the apps SDK and chat discovery features change where users complete tasks?

The apps SDK is presented as a full-stack mechanism to connect data, trigger actions, and render interactive UI, with apps discoverable inside ChatGPT conversations. Instead of leaving ChatGPT to use sites like booking.com, users can ask ChatGPT for an app by name and get recommendations in-line, keeping the user inside the chat experience while actions execute through the app.

What does agent kit add beyond “chat,” and why does the transcript emphasize speed?

Agent kit is described as a set of building blocks for agentic workflows—logic steps that can call tools, run multi-stage tasks, and move from prototype to production. The transcript highlights that agent kit was built in six weeks, and that it integrates with other OpenAI components (like chatkit widgets and Codex) to support end-to-end agent behavior rather than single-turn responses.

What role does Codex play in the ecosystem described here?

Codex is framed as OpenAI’s coding agent, upgraded to run on a GPT-5-Codex model trained for coding and agentic work. The transcript claims improvements for code refactoring and code review, plus dynamic adjustment of thinking time. It also mentions a Codex SDK and describes Codex as capable of tasks like pull request reviews and long-running refactors, including asynchronous work across many tasks.

How does chatkit relate to UI and widgets inside AI experiences?

Chatkit is described as an interface layer that embeds ChatGPT-like chat experiences into your own apps or websites, including customizable widgets (interactive UI elements) that can appear inside the chat. The transcript argues this moves beyond plain text toward richer modalities and structured UI components, and it notes that some chatkit JavaScript components are released under the Apache 2.0 license while core infrastructure remains proprietary.

What integration mechanism is highlighted for connecting many external tools and apps?

The transcript emphasizes MCP via an MCP node inside agent kit. It claims that adding a specific MCP server (named “RP” in the transcript) can connect to “over 500” apps through a single connector, enabling workflows like pulling a marketing deck from Google Drive into Slack or drafting follow-up emails via Gmail tools.
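
The “one connector, many apps” pattern can be sketched as a router that namespaces tool calls per app. The class, handlers, and tool names below are illustrative placeholders, not the actual MCP server’s interface:

```python
# Toy sketch of "one connector, many apps": a single MCP-style hub
# routes namespaced tool calls like "gmail.draft" to per-app handlers.
class Connector:
    """Multiplexes qualified tool names out to registered app tools."""
    def __init__(self):
        self.apps = {}

    def register(self, app: str, tools: dict):
        self.apps[app] = tools

    def call(self, qualified: str, **kwargs):
        app, tool = qualified.split(".", 1)
        return self.apps[app][tool](**kwargs)

hub = Connector()
hub.register("drive", {"fetch": lambda name: f"<deck:{name}>"})
hub.register("gmail", {"draft": lambda to, body: {"to": to, "body": body}})

# Mirrors the transcript's example flow: pull a deck, then draft an email.
deck = hub.call("drive.fetch", name="marketing-q3")
print(hub.call("gmail.draft", to="team@example.com", body=deck))
```

One registry entry per app is what lets a single connector front “over 500” apps: the agent only ever talks to the hub.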

Review Questions

  1. How does the described “API-first, then replace” pattern work, and what kinds of startups are most targeted?
  2. What are the distinct roles of apps SDK, agent kit, and chatkit in turning ChatGPT into a platform?
  3. In the agent kit workflow templates, what kinds of nodes and settings (tools, output formats, guardrails) are used to control agent behavior?

Key Points

  1. OpenAI’s platform strategy is framed as turning ChatGPT into a distribution layer for third-party apps via the apps SDK and in-chat discovery.
  2. A recurring competitive pattern is described: startups build on the OpenAI API, then OpenAI builds competing versions once products show product-market fit and fast revenue growth.
  3. Apps SDK is positioned as full-stack app creation that can connect data, trigger actions, and render interactive UI inside ChatGPT.
  4. Agent kit is presented as a visual canvas for building agentic workflows, with templates and tooling aimed at moving from prototype to production quickly.
  5. Codex upgrades (including a GPT-5-Codex model) are highlighted as enabling more capable coding agents, supported by an SDK for extending automation.
  6. Chatkit is described as embedding interactive chat experiences and widgets into external apps and websites, with some UI components released under the Apache 2.0 license.
  7. MCP integration is used to connect agent workflows to many external apps through a single MCP server connector (named “RP” in the transcript).

Highlights

The apps SDK is pitched as making ChatGPT function like an app store, where users can discover and run apps without leaving the chat.
Agent kit is described as a six-week build that provides a canvas for agentic workflows, including templates and tool orchestration.
Codex is upgraded to run on a GPT-5-Codex model trained for coding and agentic tasks, with capabilities like refactoring and code review.
Chatkit widgets shift AI experiences from plain text toward interactive UI elements embedded directly in chat interfaces.
