OpenAI DevDay 2024 | Virtual AMA with Sam Altman, moderated by Harry Stebbings, 20VC
Based on OpenAI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Sam Altman used a wide-ranging virtual AMA to argue that OpenAI’s next leap depends less on incremental model tweaks and more on “reasoning” systems that can drive breakthroughs in science and produce far more capable code. He framed the near-term roadmap as rapid improvement in the O-series of models, with reasoning positioned as the strategic unlock for tasks that have taken years to crack—especially when models can contribute to new scientific work and handle difficult programming with less friction.
Altman also drew a clear line for startups and investors: patching today’s model shortcomings is a risky business bet because future model generations are expected to erase many current gaps. He contrasted companies built to “benefit from the model getting better” with those built to work around a specific weakness that may soon disappear. In his view, the market for AI-enabled products will create enormous new value—he referenced a multi-trillion-dollar annual value creation claim—while OpenAI’s job is to keep models improving so developers spend less time wrestling with limitations.
On the product stack, Altman said OpenAI will likely move upward over time, but not by dismissing what developers do now. He emphasized that founders should assume OpenAI will own more of the application layer, making heavy investment in fragile workarounds less attractive. For non-technical founders, he predicted a staged path: first, tools that boost productivity for people who already code well; later, higher-quality no-code tools that can support building and scaling real businesses, though he suggested that building a full startup entirely with no-code tools is still far off.
Altman spent significant time on “agents,” offering a definition rooted in long-duration tasks with minimal supervision. He pushed back on the common restaurant-booking example as too small, arguing the more interesting shift is parallelism and collaboration: agents that can do hundreds of actions humans can’t coordinate, or act like a senior coworker that completes multi-day projects and returns polished work. He also suggested this could reshape pricing away from per-seat models toward compute- or workload-based approaches, though he admitted pricing mechanics remain speculative.
Open source came up as a core ecosystem question. Altman said open source models have an important place, but so do well-integrated services and APIs; the right delivery mechanism will vary by use case. He described internal thinking as supporting multiple paths rather than a single doctrine.
Beyond product, Altman addressed broader industry questions: model “depreciation” is real, but he argued the idea that models aren’t worth what they cost to train is wrong, citing revenue potential and the compounding benefits of learning to train better systems. He also highlighted culture as a differentiator—OpenAI’s ability to repeatedly execute on new, unproven research directions.
In leadership and hiring, he described the challenge of scaling focus during rapid change, and argued for an “extreme high talent bar” across ages rather than a single hiring philosophy. He ended with a five- and ten-year outlook centered on accelerating technological progress in AI and science, while society changes less dramatically than people expect—echoing how earlier predictions about major milestones often missed how quickly progress would “whoosh” past expectations.
Cornell Notes
Sam Altman emphasized that OpenAI’s most important strategic bet is reasoning models, expecting rapid improvement in the O-series to unlock long-awaited capabilities in science and difficult coding. He advised founders to avoid building businesses that merely patch current model weaknesses, since future generations are expected to remove many of today’s gaps. On the product side, he predicted a gradual path toward no-code tools for non-technical founders, starting with productivity gains for skilled coders. Altman also defined AI agents as systems that can handle long tasks with minimal supervision, and he suggested agents could eventually shift pricing from per-seat to compute- or workload-based models. Open source remains important, but he argued for a flexible ecosystem that also includes integrated APIs and services.
- Why does Altman place “reasoning” at the center of OpenAI’s roadmap?
- What business advice did Altman give founders about building on top of model limitations?
- How did Altman describe AI agents, and what examples did he use to illustrate the difference between “small” and “big” agent use?
- What did Altman suggest about how agent-driven systems might change pricing?
- What stance did Altman take on open source versus proprietary services?
- How did Altman respond to the claim that models are “depreciating assets” and that training costs may not be justified?
Review Questions
- Which capabilities did Altman connect most directly to reasoning models, and why does that matter for developers building AI products?
- What criteria did he use to distinguish “patching” businesses from businesses that should thrive as models improve?
- How does Altman’s definition of an agent (long tasks, minimal supervision) change the way you should think about real-world agent workflows and pricing?
Key Points
1. Reasoning models are positioned as the strategic unlock for major gains in science contribution and difficult coding, with rapid improvement expected in the O-series.
2. Founders should be cautious about building around today’s model shortcomings, because future generations are expected to eliminate many of those gaps.
3. No-code tools for non-technical founders are likely to arrive in stages: first boosting productivity for strong coders, then gradually enabling higher-quality no-code business building.
4. AI agents should be understood as long-duration, minimally supervised systems; the most valuable use cases may involve parallel actions and multi-day collaboration.
5. Agents could eventually push pricing away from per-seat SaaS toward compute- or workload-based models, though the exact mechanism remains uncertain.
6. Open source models have an important role, but integrated APIs and services also matter; customers will choose the delivery mechanism that fits their needs.
7. Altman argues that despite model depreciation, training investments can be justified through revenue, amortization, and compounding improvements in training capability.