OpenAI Flip-Flops and '10% Chance of Outperforming Humans in Every Task by 2027' - 3K AI Researchers
Based on AI Explained's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
OpenAI’s GPT Store is moving toward a business model that pays builders based on user engagement, an incentive structure that risks pushing AI assistants toward addictive, “screen-time” behavior even though OpenAI previously signaled it was GPU-constrained and would prefer less usage, not more. The announcement says builders will be paid “based on user engagement with their GPTs,” effectively rewarding whoever keeps users interacting longest. The tension is sharpened by a broader market shift: Character.AI’s addictive chatbot experience has reportedly narrowed the gap with ChatGPT in U.S. monthly active users, and competitors such as Inflection AI’s Pi and celebrity-style character bots are explicitly designed to keep people engaged.
That engagement-first logic also collides with what many users actually experience inside the Store. Multiple custom GPTs underperform on basic tasks; one example repeatedly failed to produce accurate word counts even while claiming it could. Still, there are standout exceptions: a “Consensus” GPT that returns relevant links for scientific research was described as genuinely more useful than GPT-4 for that specific workflow. In parallel, OpenAI’s rollout of GPT Store features is accompanied by a separate, more consequential update: GPTs learning from chats via persistent memory. A leaked announcement tied to Greg Brockman described GPTs that remember details and preferences between conversations, with options to reset or disable memory. The implication is a move from generic assistants toward personalized systems that can feel more like a “friend,” especially as avatar-based interfaces arrive.
The personalization push feeds into a larger internal debate about what OpenAI is building and why. OpenAI leadership has framed its end goal as superintelligence that outperforms humans at economically valuable work, and it has acknowledged that automation will replace human labor. But Andrej Karpathy has publicly argued for “intelligence amplification”: AI as a tool that empowers most people rather than a replacement for them. The two visions are hard to reconcile: as models become more capable, more independent, and more agentic, they simultaneously improve as tools and displace jobs. The transcript highlights the lack of a clear dividing line between “tool” and “replacement,” especially as OpenAI’s own safety roadmap leans on automating parts of research.
That safety-and-timeline tension is echoed by a new survey of 2,778 AI researchers, presented as a 38-page paper. Respondents estimated a 10% chance that unaided machines could outperform humans in every task by 2027, rising to 50% by 2047 for “high-level machine intelligence” (defined as unaided machines outperforming humans at every task, and more cheaply). Yet the same survey places full automation of all human labor in the 2100s, a stark gap between forecast capability and forecast adoption. The survey also finds that question wording materially changes results, that anchoring effects shape timelines, and that many researchers believe an “intelligence explosion” feedback loop could accelerate progress dramatically within five years.
Finally, the transcript links OpenAI’s strategy to the economics of information. OpenAI has struck publishing deals with Axel Springer and is reportedly in talks with major outlets including CNN, Fox, and Time. The reported payments—up to $5 million annually for publishers, and potentially far more for broader rights—raise questions about how OpenAI’s wealth-redistribution vision for a future dominated by AGI would square with sustaining independent journalism. Against that backdrop, the survey’s forecasts and OpenAI’s shifting messaging on engagement, memory, and labor replacement converge into a single theme: incentives and timelines are moving faster than definitions of safety, tools, and societal impact.
Cornell Notes
OpenAI’s GPT Store is set up to pay builders based on user engagement, which could reward addictive behavior and intensify competition for “time spent” rather than usefulness alone. Alongside that, GPTs are moving toward persistent memory, learning preferences and details across chats, which makes assistants more personalized and potentially more agentic. At the same time, OpenAI’s public messaging about superintelligence and labor replacement is being challenged by arguments for “intelligence amplification” rather than human replacement. A large survey of 2,778 AI researchers estimates a 10% chance of machines outperforming humans in every task by 2027, but places full automation of all jobs much later, revealing a major gap between capability and labor-market adoption. The survey also shows that small changes in how questions are framed can swing predictions, and many researchers worry about rapid acceleration effects such as a feedback-driven “proto-singularity.”
Why does paying GPT builders based on engagement matter beyond business incentives?
What does persistent GPT memory change about how assistants behave?
How does the transcript connect OpenAI’s “superintelligence” goal to labor replacement debates?
What is the key inconsistency in the 2,778-researcher survey timelines?
Why do survey results vary so much in the paper?
What does the survey say about “intelligence explosion” dynamics?
Review Questions
- How does engagement-based payment for GPT builders change incentives compared with a usefulness-first model?
- What does persistent memory imply for personalization, user dependence, and the “tool vs replacement” debate?
- Why might a survey predict fast capability (outperforming humans) but much slower full labor automation?
Key Points
1. OpenAI’s GPT Store compensation model reportedly pays builders based on user engagement, incentivizing longer and more frequent use.
2. Persistent memory features would let GPTs carry preferences and details across chats, increasing personalization and potential user lock-in.
3. OpenAI’s superintelligence and labor-replacement messaging conflicts with calls for “intelligence amplification,” which frames AI as an empowering tool rather than a substitute for humans.
4. A survey of 2,778 AI researchers estimates a 10% chance of machines outperforming humans in every task by 2027, yet places full automation of all jobs in the 2100s, highlighting a capability-versus-adoption gap.
5. Question framing and anchoring effects significantly shift AI timeline and risk estimates, meaning survey wording can materially change conclusions.
6. A majority of researchers surveyed view rapid acceleration dynamics (an “intelligence explosion”) as plausible, potentially occurring within five years.
7. Publishing deals and licensing talks raise questions about how AI-driven wealth and content rights could affect the sustainability of independent journalism.