
OpenAI DevDay 2024 | Virtual AMA with Sam Altman, moderated by Harry Stebbings, 20VC


Based on OpenAI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Reasoning models are positioned as the strategic unlock for major gains in science contribution and difficult coding, with rapid improvement expected in the O-series.

Briefing

Sam Altman used a wide-ranging virtual AMA to argue that OpenAI’s next leap depends less on incremental model tweaks and more on “reasoning” systems that can drive breakthroughs in science and produce far more capable code. He framed the near-term roadmap as rapid improvement in the O-series of models, with reasoning positioned as the strategic unlock for tasks that have taken years to crack—especially when models can contribute to new scientific work and handle difficult programming with less friction.

Altman also drew a clear line for startups and investors: patching today’s model shortcomings is a risky business bet because future model generations are expected to erase many current gaps. He contrasted companies built to “benefit from the model getting better” with those built to work around a specific weakness that may soon disappear. In his view, the market for AI-enabled products will create enormous new value—he referenced a multi-trillion-dollar annual value creation claim—while OpenAI’s job is to keep models improving so developers spend less time wrestling with limitations.

On the product stack, Altman said OpenAI will likely move upward over time, but not by dismissing what developers do now. He emphasized that founders should assume OpenAI will own more of the application layer, making heavy investment in fragile workarounds less attractive. For non-technical founders, he predicted a staged path: first, tools that boost productivity for people who already code well; later, higher-quality no-code tools that can support building and scaling real businesses—though he suggested that launching a full startup purely through no-code tools is still far off.

Altman spent significant time on “agents,” offering a definition rooted in long-duration tasks with minimal supervision. He pushed back on the common restaurant-booking example as too small, arguing the more interesting shift is parallelism and collaboration: agents that can do hundreds of actions humans can’t coordinate, or act like a senior coworker that completes multi-day projects and returns polished work. He also suggested this could reshape pricing away from per-seat models toward compute- or workload-based approaches, though he admitted pricing mechanics remain speculative.

Open source came up as a core ecosystem question. Altman said open source models have an important place, but so do well-integrated services and APIs; the right delivery mechanism will vary by use case. He described internal thinking as supporting multiple paths rather than a single doctrine.

Beyond product, Altman addressed broader industry questions: model “depreciation” is real, but he argued the idea that models aren’t worth what they cost to train is wrong, citing revenue potential and the compounding benefits of learning to train better systems. He also highlighted culture as a differentiator—OpenAI’s ability to repeatedly execute on new, unproven research directions.

In leadership and hiring, he described the challenge of scaling focus during rapid change, and argued for an extremely high talent bar across all ages rather than a single hiring philosophy. He ended with a five- and ten-year outlook centered on accelerating technological progress in AI and science, while society changes less dramatically than people expect—echoing how earlier predictions about major milestones often missed how quickly progress would “whoosh” past expectations.

Cornell Notes

Sam Altman emphasized that OpenAI’s most important strategic bet is reasoning models, expecting rapid improvement in the O-series to unlock long-awaited capabilities in science and difficult coding. He advised founders to avoid building businesses that merely patch current model weaknesses, since future generations are expected to remove many of today’s gaps. On the product side, he predicted a gradual path toward no-code tools for non-technical founders, starting with productivity gains for skilled coders. Altman also defined AI agents as systems that can handle long tasks with minimal supervision, and he suggested agents could eventually shift pricing from per-seat to compute- or workload-based models. Open source remains important, but he argued for a flexible ecosystem that also includes integrated APIs and services.

Why does Altman place “reasoning” at the center of OpenAI’s roadmap?

Reasoning is framed as the unlock for capabilities that have been difficult for years: contributing to new science, writing very difficult code, and generally improving performance across high-value tasks. He ties this to strategic importance and expects rapid improvement in the O-series of models as reasoning capability advances.

What business advice did Altman give founders about building on top of model limitations?

He warned that building a company to patch a specific, narrow model shortcoming is a fragile strategy because future model generations should address many current weaknesses. Instead, he encouraged alignment with businesses that benefit from continuous model improvement—so the company’s value grows as the underlying model gets better.

How did Altman describe AI agents, and what examples did he use to illustrate the difference between “small” and “big” agent use?

He defined an agent as something that can be given a long-duration task with minimal supervision during execution. He criticized the common restaurant-reservation example as too mild, arguing the more interesting future is (1) massive parallelism—agents coordinating hundreds of actions humans can’t manage—and (2) collaboration on multi-day projects, returning high-quality work like a senior coworker.

What did Altman suggest about how agent-driven systems might change pricing?

He noted that traditional SaaS pricing often uses per-seat fees, but agents can replace or augment labor. He speculated that pricing could shift toward compute- or workload-based models—for example, paying for the compute required to keep an agent running continuously—while admitting OpenAI doesn’t know the final answer.

What stance did Altman take on open source versus proprietary services?

He said open source models have a “really important place” in the ecosystem and that good open source models already exist. At the same time, he argued there’s also room for well-integrated services and APIs, with customers choosing what fits their needs. He described OpenAI’s approach as offering open source options while also providing delivery mechanisms like APIs and agents.

How did Altman respond to the claim that models are “depreciating assets” and that training costs may not be justified?

He agreed that models depreciate, but rejected the idea that they’re not worth what they cost to train. He pointed to revenue potential and amortization across a large user base (citing ChatGPT usage broadly), plus a positive compounding effect: learning to train better models improves the next training cycle. He also acknowledged that returns may be harder for some players if they’re behind or lack a sticky product strategy.

Review Questions

  1. Which capabilities did Altman connect most directly to reasoning models, and why does that matter for developers building AI products?
  2. What criteria did he use to distinguish “patching” businesses from businesses that should thrive as models improve?
  3. How does Altman’s definition of an agent (long tasks, minimal supervision) change the way you should think about real-world agent workflows and pricing?

Key Points

  1. Reasoning models are positioned as the strategic unlock for major gains in science contribution and difficult coding, with rapid improvement expected in the O-series.

  2. Founders should be cautious about building around today’s model shortcomings, because future generations are expected to eliminate many of those gaps.

  3. No-code tools for non-technical founders are likely to arrive in stages: first boosting productivity for strong coders, then gradually enabling higher-quality no-code business building.

  4. AI agents should be understood as long-duration, minimally supervised systems; the most valuable use cases may involve parallel actions and multi-day collaboration.

  5. Agents could eventually push pricing away from per-seat SaaS toward compute- or workload-based models, though the exact mechanism remains uncertain.

  6. Open source models have an important role, but integrated APIs and services also matter; customers will choose the delivery mechanism that fits their needs.

  7. Altman argues that despite model depreciation, training investments can be justified through revenue, amortization, and compounding improvements in training capability.

Highlights

Reasoning is treated as the key capability that can unlock breakthroughs in science and substantially better performance on very difficult coding tasks.
Altman’s startup warning: don’t build a business mainly to patch a current model weakness that future model generations are likely to fix.
His agent definition goes beyond “book a restaurant” toward long tasks with minimal supervision and workflows that humans can’t parallelize.
He suggested agent-driven systems may eventually change SaaS pricing logic, potentially shifting from per-seat fees to compute/workload-based pricing.
OpenAI’s differentiation is portrayed not just as model architecture, but as a culture capable of repeatedly executing new, unproven research directions.
