OpenAI DevDay 2025: Opening Keynote with Sam Altman
Based on OpenAI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
OpenAI is turning ChatGPT into an app platform by launching the Apps SDK (preview), built on MCP for full control of backend logic and UI.
Briefing
OpenAI’s DevDay 2025 opening keynote makes one central pitch: building useful AI products is getting dramatically easier—because ChatGPT is turning into an app platform, agents are becoming production-ready with less glue code, and coding itself is shifting toward AI-driven workflows across tools, teams, and devices.
The keynote starts with scale and momentum. In 2023, OpenAI cited 2 million developers, 100 million weekly ChatGPT users, and roughly 300 million tokens per minute on its API. By DevDay 2025, it claims 4 million developers, more than 800 million weekly ChatGPT users, and over 6 billion tokens per minute on the API. That growth sets up the day’s focus: tools that shorten the path from idea to working product.
First up is “apps inside ChatGPT.” OpenAI says it’s launching a new Apps SDK (preview) that lets developers build interactive, adaptive, personalized experiences directly within ChatGPT conversations. The SDK is built on MCP, giving developers control over backend logic and frontend UI, and it’s designed to make apps discoverable and scalable—developers can reach “hundreds of millions” of ChatGPT users. The keynote also ties monetization to distribution: users can log in to existing products from within the conversation, and future monetization includes an “agentic commerce protocol” enabling instant checkout inside ChatGPT.
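The keynote doesn't show SDK code, but the core idea — an MCP-based app server that owns both backend logic (tools the model can call) and frontend UI (resources rendered inline) — can be sketched in miniature. All names below (`AppServer`, `tool`, `ui`, `invoke`) are invented for illustration and are not the real Apps SDK API:

```python
from dataclasses import dataclass, field

@dataclass
class AppServer:
    """Toy stand-in for an MCP-style app server (illustrative, not the real SDK)."""
    name: str
    tools: dict = field(default_factory=dict)         # tool name -> backend handler
    ui_resources: dict = field(default_factory=dict)  # resource URI -> UI template

    def tool(self, tool_name):
        """Register a backend handler the model can call by name."""
        def register(fn):
            self.tools[tool_name] = fn
            return fn
        return register

    def ui(self, uri, html):
        """Register a UI template the chat client can render inline."""
        self.ui_resources[uri] = html

    def invoke(self, tool_name, **kwargs):
        """Simulate the model invoking a tool during a conversation turn."""
        return self.tools[tool_name](**kwargs)

# A hypothetical playlist app: the developer controls both the logic and the UI.
app = AppServer("playlist-app")
app.ui("ui://playlist", "<div>{tracks}</div>")

@app.tool("create_playlist")
def create_playlist(theme: str, count: int):
    # Backend logic stays under the developer's control.
    return {"theme": theme, "tracks": [f"{theme} track {i+1}" for i in range(count)]}

result = app.invoke("create_playlist", theme="party", count=2)
```

The point of the sketch is the split the keynote emphasizes: the model discovers and calls `create_playlist`, while the developer decides what that tool does and how its output is presented.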
Concrete examples show how discovery and context work. Developers can surface apps by name (e.g., uploading a sketch and asking “Figma, turn this sketch into a workable diagram”), and ChatGPT can recommend apps when users ask for outcomes (e.g., suggesting Spotify for a party playlist). A live demo highlights inline media and interaction: a Coursera app plays video inside the chat, then uses an Apps SDK mechanism (“talking to apps”) to expose the user’s current context back to ChatGPT so the model can answer questions about what’s on screen. Other demos show Canva generating portfolios and pitch decks from conversation context, and a Zillow app embedding an interactive map that updates without creating a new instance—then filtering results and answering follow-up questions using the context the app provides.
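The "talking to apps" mechanism — the app publishing its on-screen state so the model can answer questions about it — can be sketched as a simple context hand-off. The function names and payload fields below are invented, not the actual Apps SDK interface:

```python
# Illustrative sketch of the "talking to apps" flow: the embedded app publishes
# its current UI state, and the model's input is assembled from that state plus
# the user's question. All names here are hypothetical.

app_context = {}

def publish_context(app_name, state):
    """Called by the app whenever its visible state changes."""
    app_context[app_name] = state

def build_model_input(user_question):
    """Merge the user's question with whatever the apps say is on screen."""
    context_lines = [f"[{name} context] {state}" for name, state in app_context.items()]
    return "\n".join(context_lines + [f"User: {user_question}"])

# A Coursera-style video app reports what the learner is currently watching.
publish_context("coursera", {"video": "Intro to ML", "timestamp": "12:30"})
prompt = build_model_input("Can you explain the concept on screen?")
```

This is why the Zillow demo can answer follow-ups about filtered map results: the app, not the model, is the source of truth for what the user is looking at.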
Next, the keynote tackles agents, where prototypes often fail to reach production. OpenAI introduces AgentKit, positioned as a complete set of building blocks for moving from prototype to production with less orchestration overhead. Agent Builder provides a visual way to design logic steps and flows on top of the Responses API. ChatKit offers an embeddable chat interface for custom branding and workflows. Evals for Agents adds trace grading, datasets, and automated prompt optimization, including the ability to run evals on external models within OpenAI's eval platform. For data access, OpenAI points to a connector registry with an admin control panel for safely connecting internal and third-party systems.
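The visual workflows Agent Builder produces are essentially data: an ordered graph of nodes, where some nodes are guardrail checks and others are logic steps. A minimal sketch of that execution model (invented names, not the AgentKit API) looks like this:

```python
# Illustrative sketch of a workflow as an ordered list of nodes. A guardrail
# node can block execution; logic nodes mutate shared state. This models the
# idea only; it is not how AgentKit represents workflows internally.

def pii_guardrail(state):
    # Toy guardrail: block inputs that look like they contain an email address.
    return "blocked" if "@" in state["input"] else None

def classify(state):
    state["intent"] = "navigation" if "where" in state["input"].lower() else "general"

def answer(state):
    state["reply"] = f"Handling a {state['intent']} question."

workflow = [pii_guardrail, classify, answer]

def run_workflow(user_input):
    state = {"input": user_input}
    for node in workflow:
        if node(state) == "blocked":
            state["reply"] = "Sorry, I can't process that."
            break
    return state["reply"]

reply = run_workflow("Where is the keynote stage?")
```

Representing the flow as data rather than code is what makes a visual editor, trace grading, and republish-without-redeploy possible: the nodes can be inspected, reordered, and evaluated independently.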
The keynote then demonstrates AgentKit’s speed by building a DevDay navigation agent (“Ask Froge”) live, using visual workflow wiring, prebuilt guardrails (including PII protection), widgets, previewing, and publishing to production with a workflow ID. The agent is then embedded into the DevDay site using ChatKit components, and can be iterated on without rewriting code.
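The significance of publishing "with a workflow ID" is that the embed references the agent only indirectly. A hypothetical sketch of that indirection (the tag and attribute names below are invented, not ChatKit's actual markup):

```python
# Illustrative sketch: the site embeds a chat component that points at a
# published workflow by ID only. Editing and republishing the workflow under
# the same ID updates the agent without touching the page. Names are invented.

def embed_snippet(workflow_id, theme="devday"):
    """Render an HTML embed that references the workflow only by ID."""
    return (f'<chat-widget workflow-id="{workflow_id}" theme="{theme}">'
            f"</chat-widget>")

snippet = embed_snippet("wf_ask_froge_001")
```

This is the same decoupling pattern used by most embeddable widgets: the page owns presentation, the platform owns behavior, and the ID is the contract between them.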
Finally, the keynote shifts to software creation. Codex moves from research preview to general availability, now running on the GPT-5-Codex model, trained for agentic coding tasks like refactoring and code review. OpenAI cites rapid adoption (daily messages up 10x since early August) and internal impact (engineers completing more pull requests, with nearly all PRs reviewed by Codex). New enterprise features include Slack integration, a Codex SDK for team workflows, and admin tools for monitoring and analytics.
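What a "Codex SDK for team workflows" enables can be sketched as event-driven wiring: a team event (here, a Slack-style slash command) triggers an agentic review, and the findings flow back to the thread. Everything below is invented for illustration; it is not the real Codex SDK or Slack API:

```python
# Hypothetical sketch of wiring a code-review agent into a team workflow.
# review_pull_request stands in for an agentic review call; its toy rule
# (flag long lines) is a placeholder for real analysis.

def review_pull_request(diff):
    """Stand-in for an agentic review; flags overly long lines as a toy rule."""
    findings = [f"line {i+1}: consider shortening"
                for i, line in enumerate(diff.splitlines()) if len(line) > 20]
    return findings or ["LGTM"]

def handle_slack_event(event):
    """Route a Slack-style event to the review agent and reply in-thread."""
    if event.get("command") == "/review":
        return {"thread": event["thread"],
                "comments": review_pull_request(event["diff"])}
    return None  # ignore unrelated events

result = handle_slack_event({"command": "/review", "thread": "T1",
                             "diff": "short\nthis line is definitely too long"})
```

The design point is that the agent sits behind an ordinary event handler, so the same review logic can serve Slack, CI hooks, or an IDE without duplication.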
A live demo shows Codex controlling real devices and building interfaces: a camera control panel, a wireless controller workflow, and voice-driven lighting changes using MCP servers and the agentic toolchain. The keynote closes by expanding the model lineup for developers and creators: GPT-5 Pro in the API for hard domains, gpt-realtime-mini for cheaper expressive voice, and a Sora 2 preview in the API with more controllability, sound pairing, and remixing options, plus examples involving product concepting and toy design.
Taken together, the message is that AI product development is shifting from experimentation to deployment: apps become conversational UI, agents become measurable and safer, and coding becomes a collaborative, tool-using workflow that can span IDEs, terminals, Slack, and real-world devices.
Cornell Notes
OpenAI’s DevDay 2025 opening keynote argues that AI builders are entering a faster, more production-ready era. It introduces the Apps SDK (preview) for building interactive apps inside ChatGPT, built on MCP so developers control both backend logic and UI while reaching large ChatGPT audiences. It also launches AgentKit to reduce the complexity of agent orchestration, adding visual workflow building, embeddable chat interfaces, agent-specific evals, and safer data connections via a connector registry. Codex moves from research preview to general availability on GPT-5-Codex, with new team features like Slack integration and enterprise admin tools. The keynote ties these platform upgrades to model releases (GPT-5 Pro, gpt-realtime-mini, and a Sora 2 API preview) to expand what developers and creators can ship.
What does “apps inside ChatGPT” change for developers, and how does the Apps SDK fit in?
How does context flow between a user’s interaction and ChatGPT when using an app like Coursera or Zillow?
Why does AgentKit focus on production readiness, and what components does it include?
What did the live “Ask Froge” demo demonstrate about building agents quickly?
How is Codex positioned differently now that it’s generally available, and what’s new for teams?
What do the model updates (GPT-5 Pro, gpt-realtime-mini, and the Sora 2 API preview) aim to enable?
Review Questions
- How do Apps SDK and MCP together enable both UI control and backend logic control for apps embedded in ChatGPT?
- What specific AgentKit components address workflow design, user-facing chat embedding, and agent evaluation/optimization?
- In what ways does GPT-5-Codex (Codex) integrate into developer workflows and team processes (e.g., IDEs, Slack, admin tools)?
Key Points
1. OpenAI is turning ChatGPT into an app platform by launching the Apps SDK (preview), built on MCP for full control of backend logic and UI.
2. Apps inside ChatGPT are designed for inline, interactive experiences and for conversational discovery: users can find apps by name or via contextual suggestions.
3. AgentKit is meant to reduce agent-building friction by combining visual workflow design, an embeddable chat UI, agent-specific eval tooling, and safer data connections.
4. AgentKit’s live demo showed an end-to-end path from visual workflow wiring and guardrails to publishing a production agent and embedding it into a site, with iteration possible without rewriting code.
5. Codex is now generally available on GPT-5-Codex, with new team features like Slack integration, a Codex SDK, and enterprise admin/analytics tools.
6. Model updates expand capabilities for hard reasoning (GPT-5 Pro), lower-cost expressive voice (gpt-realtime-mini), and controllable video creation with sound (Sora 2 API preview).