OpenAI o1 and o1 Pro mode in ChatGPT — 12 Days of OpenAI: Day 1
Based on OpenAI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
ChatGPT is getting a major upgrade: OpenAI is rolling out the full o1 model—trained to “think before responding”—and launching a new ChatGPT Pro tier that adds unlimited access plus an o1 Pro mode for harder problems. The practical headline is speed and accuracy: OpenAI says o1 makes major mistakes about 34% less often than o1 preview while thinking 50% faster, aiming to fix the complaint that earlier versions could take an annoyingly long time even for simple prompts.
Alongside o1, OpenAI is introducing ChatGPT Pro at $200 per month. Pro is positioned for power users who routinely push models on math, programming, and writing, and it includes unlimited access to OpenAI’s best models (including o1 and GPT-4o) plus advanced voice mode. The standout feature is “o1 Pro mode,” which instructs the model to use more compute—so it can spend longer reasoning on the most difficult tasks. OpenAI frames the gains as sometimes subtle in isolation but meaningful inside complex workflows where every extra correct step matters. Reliability is also part of the pitch, with Pro mode presented as more dependable than standard o1 and o1 preview.
On the model side, o1 is described as multimodal and instruction-following focused, with a key technical change: it can jointly process images and text. That matters because it enables image understanding and reasoning rather than treating images as mere attachments. In a live demo, a hand-drawn diagram of a space-based data center is uploaded, and the model estimates the minimum radiator cooling panel area needed to operate a 1 GW system. The setup intentionally omits a critical parameter (cooling panel temperature), and the model is shown inferring a plausible temperature range, then proceeding with radiative heat transfer assumptions to produce an area estimate of about 2.42 million square meters—roughly 2% of San Francisco’s land area.
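The demo's radiator estimate follows from the Stefan–Boltzmann law for radiative heat rejection, A = P / (ε σ T⁴). A minimal sketch of that calculation, assuming an ideal emissivity (ε = 1) and a panel temperature of roughly 292 K — values the demo does not state, chosen here as plausible placeholders that roughly reproduce the quoted figure:

```python
# Radiator sizing sketch via the Stefan-Boltzmann law.
# The emissivity and panel temperature below are NOT from the demo;
# they are assumed values for illustration only.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 1.0) -> float:
    """Minimum radiating area (m^2) needed to reject power_w watts at temp_k kelvin."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# 1 GW system with assumed ~292 K panels: on the order of a few million m^2,
# consistent with the demo's ~2.42 million m^2 figure.
area = radiator_area(1e9, 292.0)
print(f"{area:.2e} m^2")
```

Lowering the assumed panel temperature or emissivity raises the required area, which is why the model's inferred temperature range drives the final estimate.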
For the “hardest problems” lane, o1 Pro mode is demonstrated on a chemistry task: identifying a protein matching six domain-specific criteria. The workflow requires searching among many candidate proteins and verifying that each one satisfies all constraints. In the demo, the model finishes in 53 seconds, and the interface allows users to inspect the reasoning path and candidate evaluation.
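The verification step in this workflow is essentially a filter: check each candidate against every criterion and keep only those that satisfy all of them. A minimal sketch of that pattern — the candidate names and criteria below are hypothetical stand-ins, not the actual proteins or constraints from the demo:

```python
# Hypothetical candidates and criteria; the demo's real proteins and
# constraints are not reproduced here. This only illustrates the
# "verify every candidate against all criteria" pattern.

candidates = {
    "protein_A": {"length": 420, "membrane_bound": True,  "binds_atp": True},
    "protein_B": {"length": 180, "membrane_bound": False, "binds_atp": True},
    "protein_C": {"length": 510, "membrane_bound": True,  "binds_atp": False},
}

criteria = [
    lambda p: p["length"] > 300,    # hypothetical size constraint
    lambda p: p["membrane_bound"],  # hypothetical localization constraint
    lambda p: p["binds_atp"],       # hypothetical function constraint
]

# Keep only candidates that pass every check.
matches = [name for name, props in candidates.items()
           if all(check(props) for check in criteria)]
print(matches)  # only protein_A satisfies all three criteria
```

The difficulty in the demo comes from scale: no single criterion identifies the answer, so the model must hold all six constraints simultaneously while scanning many candidates.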
OpenAI also signals broader expansion: more compute-intensive features for Pro mode are planned, along with tools like web browsing and file uploads, and eventual delivery of o1 through the API. For developers, the roadmap includes structured outputs, function calling, developer messages, and API image understanding—aimed at making o1’s multimodal reasoning and longer-form problem solving available beyond ChatGPT.
In short, the update pairs a faster, more accurate “think-first” model (o1) with a higher-compute subscription tier (ChatGPT Pro) that unlocks deeper reasoning for complex science and engineering tasks, while also improving everyday usability through quicker responses and multimodal input support.
Cornell Notes
OpenAI is launching the full o1 model in ChatGPT, positioning it as a “think-before-responding” system that is both faster and more accurate than o1 preview. OpenAI reports o1 makes major mistakes about 34% less often while thinking 50% faster, addressing complaints that earlier versions could take too long on simple prompts. The rollout also adds multimodal capability: o1 can reason over both images and text together. For power users, ChatGPT Pro ($200/month) provides unlimited access to top models and introduces o1 Pro mode, which uses more compute to tackle the hardest math, science, and programming problems. OpenAI pairs these upgrades with plans for additional tools (like browsing and file uploads) and an o1 API roadmap including structured outputs and image understanding.
What does o1 change compared with o1 preview, and why does that matter for everyday use?
How does multimodal input work in the new o1 rollout?
What was the space-data-center demo trying to test, and what result did the model produce?
What is o1 Pro mode, and how is it different from standard o1 access?
How does the chemistry example illustrate the kind of reasoning o1 Pro mode enables?
What’s next beyond the immediate ChatGPT upgrades?
Review Questions
- How do the reported changes in mistake rate and thinking speed between o1 and o1 preview affect user experience across simple vs difficult prompts?
- What specific capability enables o1 to reason over a diagram and a physics prompt together, and how was that demonstrated in the space-data-center example?
- Why does o1 Pro mode help on the chemistry task, given that none of the six criteria directly point to the correct protein?
Key Points
1. OpenAI is rolling out the full o1 model, described as a “think-before-responding” system designed to improve both correctness and response speed.
2. OpenAI reports o1 makes major mistakes about 34% less often than o1 preview while thinking about 50% faster, aiming to fix slow responses on simple prompts.
3. o1 now supports multimodal reasoning, processing images and text jointly rather than treating images as separate or secondary inputs.
4. ChatGPT Pro launches at $200/month with unlimited access to top models and advanced voice mode.
5. ChatGPT Pro adds o1 Pro mode, which uses more compute to improve performance on the hardest math, science, and programming problems.
6. A live demo showed o1 inferring a missing critical parameter in a space-cooling calculation and estimating a minimum radiator area of about 2.42 million square meters.
7. OpenAI plans additional tools (web browsing, file uploads) and an o1 API roadmap including structured outputs, function calling, developer messages, and API image understanding.