
OpenAI o1 and o1 pro mode in ChatGPT — 12 Days of OpenAI: Day 1

OpenAI
5 min read

Based on OpenAI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

OpenAI is rolling out the full o1 model, described as a “think-before-responding” system designed to improve both correctness and response speed.

Briefing

ChatGPT is getting a major upgrade: OpenAI is rolling out the full o1 model—trained to “think before responding”—and launching a new ChatGPT Pro tier that adds unlimited access plus an o1 Pro mode for harder problems. The practical headline is speed and accuracy: OpenAI says o1 makes major mistakes about 34% less often than o1-preview while thinking about 50% faster, aiming to fix the complaint that earlier versions could take an annoyingly long time even for simple prompts.

Alongside o1, OpenAI is introducing ChatGPT Pro at $200 per month. Pro is positioned for power users who routinely push models on math, programming, and writing, and it includes unlimited access to OpenAI’s best models (including o1 and GPT-4o) plus advanced voice mode. The standout feature is “o1 Pro mode,” which instructs the model to use more compute—so it can spend longer reasoning on the most difficult tasks. OpenAI frames the gains as sometimes subtle in isolation but meaningful inside complex workflows where every extra correct step matters. Reliability is also part of the pitch, with Pro mode presented as more dependable than standard o1 and o1 preview.

On the model side, o1 is described as multimodal and stronger at instruction following, with a key technical change: it can process images and text jointly. That matters because it enables genuine image understanding and reasoning rather than treating images as mere attachments. In a live demo, a hand-drawn diagram of a space-based data center is uploaded, and the model estimates the minimum radiator cooling-panel area needed to operate a 1 GW system. The setup intentionally omits a critical parameter (the cooling-panel temperature), and the model is shown inferring a plausible temperature range, then applying radiative heat-transfer assumptions to produce an area estimate of about 2.42 million square meters—roughly 2% of San Francisco’s land area.

For the “hardest problems” lane, o1 Pro mode is demonstrated on a chemistry task: identifying a protein matching six domain-specific criteria. The workflow requires searching among many candidate proteins and verifying that each one satisfies all constraints. In the demo, the model finishes in 53 seconds, and the interface allows users to inspect the reasoning path and candidate evaluation.

OpenAI also signals broader expansion: more compute-intensive features for Pro mode are planned, along with tools like web browsing and file uploads, and eventual delivery of o1 through the API. For developers, the roadmap includes structured outputs, function calling, developer messages, and API image understanding—aimed at making o1’s multimodal reasoning and longer-form problem solving available beyond ChatGPT.

In short, the update pairs a faster, more accurate “think-first” model (o1) with a higher-compute subscription tier (ChatGPT Pro) that unlocks deeper reasoning for complex science and engineering tasks, while also improving everyday usability through quicker responses and multimodal input support.

Cornell Notes

OpenAI is launching the full o1 model in ChatGPT, positioning it as a “think-before-responding” system that is both faster and more accurate than o1-preview. OpenAI reports o1 makes major mistakes about 34% less often while thinking about 50% faster, addressing complaints that earlier versions could take too long on simple prompts. The rollout also adds multimodal capability: o1 can reason over both images and text together. For power users, ChatGPT Pro ($200/month) provides unlimited access to top models and introduces o1 Pro mode, which uses more compute to tackle the hardest math, science, and programming problems. OpenAI pairs these upgrades with plans for additional tools (like browsing and file uploads) and an o1 API roadmap including structured outputs and image understanding.

What does o1 change compared with o1 preview, and why does that matter for everyday use?

o1 is described as the first model OpenAI trained that “thinks before it responds.” In reported human evaluations, it makes major mistakes about 34% less often than o1-preview while thinking about 50% faster. OpenAI also ties the speed improvement to a user complaint: o1-preview could take around 10 seconds to respond even to simple prompts like “hi.” The goal is a responsive experience for easy questions, with longer reasoning reserved for genuinely hard tasks.

How does multimodal input work in the new o1 rollout?

o1 is presented as able to process images and text jointly, not as separate steps. In the demo, a hand-drawn diagram of a space data center is uploaded; the model then reasons about the system’s physics using the diagram plus the prompt. The example highlights image understanding tied to quantitative reasoning, including handling missing information.

What was the space-data-center demo trying to test, and what result did the model produce?

The demo tested whether the model can handle an under-specified problem. The diagram and prompt together specify a 1 GW power input and radiative cooling in space, but intentionally omit a critical parameter: the cooling-panel temperature. The model infers a plausible temperature range (described as around room temperature) and continues the calculation, outputting an estimated minimum radiator area of about 2.42 million square meters—roughly 2% of San Francisco’s land area.
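The video doesn’t show the model’s full working, but the reported figure is consistent with a standard Stefan-Boltzmann estimate. A minimal sketch, assuming a panel temperature of 300 K (the “around room temperature” inference from the demo) and an emissivity of 0.9 (an assumption, not a value from the video):

```python
# Minimum radiator area for a 1 GW space data center, assuming all waste
# heat is rejected by radiative cooling (Stefan-Boltzmann law):
#   P = emissivity * sigma * A * T^4   =>   A = P / (emissivity * sigma * T^4)
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to reject `power_w` watts at `temp_k` kelvin."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

area = radiator_area(1e9, 300.0)    # 1 GW at roughly room temperature
print(f"{area / 1e6:.2f} million m^2")        # ~2.42 million m^2
sf_land_m2 = 121.4e6                # San Francisco land area, ~121.4 km^2
print(f"{area / sf_land_m2:.1%} of SF land")  # ~2.0%
```

Under these assumptions the estimate lands almost exactly on the demo’s 2.42 million square meters; a hotter panel would shrink the required area sharply, since radiated power scales with the fourth power of temperature.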

What is o1 Pro mode, and how is it different from standard o1 access?

o1 Pro mode is a special way of using o1 inside ChatGPT Pro that asks the model to use more compute to think harder on difficult problems. OpenAI positions it for tasks like hard math, science, and programming, where longer reasoning and deeper search can improve correctness. The chemistry demo shows this style of work: the model evaluates multiple candidate proteins against six criteria and then selects a match.

How does the chemistry example illustrate the kind of reasoning o1 Pro mode enables?

The chemistry task requires recalling domain knowledge and applying six specific criteria, none of which directly reveals the answer. For any single criterion there could be dozens of candidate proteins, so the model must search broadly and then verify that each candidate satisfies all constraints. In the demo, the model finishes in 53 seconds and identifies the protein as retinoschisin (garbled as “retino chisen” in the transcript), with an option to view the reasoning path and candidate evaluation.
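The demo doesn’t expose the model’s internal search, but the search-then-verify pattern it describes can be sketched as a simple filter. The candidate names, fields, and criteria below are illustrative placeholders, not data from the video:

```python
# Illustrative search-and-verify loop: keep only candidates that satisfy
# every criterion. All names and predicates here are invented for the sketch.
from typing import Callable

Criterion = Callable[[dict], bool]

criteria: list[Criterion] = [
    lambda p: p["secreted"],             # e.g. "must be a secreted protein"
    lambda p: p["length_aa"] < 250,      # e.g. "shorter than 250 residues"
    lambda p: "retina" in p["tissues"],  # e.g. "expressed in the retina"
]

candidates = [
    {"name": "protein_A", "secreted": True,  "length_aa": 224, "tissues": {"retina"}},
    {"name": "protein_B", "secreted": False, "length_aa": 224, "tissues": {"retina"}},
    {"name": "protein_C", "secreted": True,  "length_aa": 400, "tissues": {"liver"}},
]

# A candidate is a match only if it passes *all* criteria.
matches = [p["name"] for p in candidates if all(c(p) for c in criteria)]
print(matches)  # ['protein_A']
```

The point of the demo is that no single criterion is decisive; the value of extra compute is in checking the full conjunction of constraints against a large candidate set rather than stopping at the first plausible hit.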

What’s next beyond the immediate ChatGPT upgrades?

OpenAI says more compute-intensive Pro features are planned, along with tools for o1 such as web browsing and file uploads. It also outlines an API roadmap: structured outputs, function calling, developer messages, and API image understanding—intended to let developers build with o1’s multimodal reasoning and longer problem-solving behavior.
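No API details are shown in the video, so as a generic illustration of what “structured outputs” means in practice, here is a hypothetical JSON Schema of the kind such endpoints typically accept, with a minimal local conformance check (the schema, field names, and checker are all assumptions for this sketch, not OpenAI’s API):

```python
import json

# Hypothetical schema: the model would be constrained to emit JSON of this shape.
ANSWER_SCHEMA = {
    "type": "object",
    "properties": {
        "protein": {"type": "string"},
        "criteria_met": {"type": "integer"},
        "confidence": {"type": "number"},
    },
    "required": ["protein", "criteria_met"],
}

def conforms(payload: str, schema: dict) -> bool:
    """Minimal check that a JSON payload has the schema's required keys and
    primitive types (a sketch, not a full JSON Schema validator)."""
    type_map = {"object": dict, "string": str, "integer": int, "number": (int, float)}
    obj = json.loads(payload)
    if not isinstance(obj, type_map[schema["type"]]):
        return False
    if any(key not in obj for key in schema.get("required", [])):
        return False
    return all(
        isinstance(obj[key], type_map[spec["type"]])
        for key, spec in schema["properties"].items()
        if key in obj
    )

good = '{"protein": "RS1", "criteria_met": 6, "confidence": 0.93}'
bad = '{"protein": "RS1"}'  # missing required "criteria_met"
print(conforms(good, ANSWER_SCHEMA), conforms(bad, ANSWER_SCHEMA))  # True False
```

The practical appeal is that downstream code can parse model answers without defensive string handling, which is what makes longer multi-step reasoning usable inside programs.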

Review Questions

  1. How do the reported changes in mistake rate and thinking speed between o1 and o1-preview affect the user experience across simple vs. difficult prompts?
  2. What specific capability enables o1 to reason over a diagram and a physics prompt together, and how was that demonstrated in the space-data-center example?
  3. Why does o1 Pro mode help on the chemistry task, given that none of the six criteria directly point to the correct protein?

Key Points

  1. OpenAI is rolling out the full o1 model, described as a “think-before-responding” system designed to improve both correctness and response speed.

  2. OpenAI reports o1 makes major mistakes about 34% less often than o1-preview while thinking about 50% faster, aiming to fix slow responses on simple prompts.

  3. o1 now supports multimodal reasoning, processing images and text jointly rather than treating images as separate or secondary inputs.

  4. ChatGPT Pro launches at $200/month with unlimited access to top models and advanced voice mode.

  5. ChatGPT Pro adds o1 Pro mode, which uses more compute to improve performance on the hardest math, science, and programming problems.

  6. A live demo showed o1 inferring a missing critical parameter in a space-cooling calculation and estimating a minimum radiator area of about 2.42 million square meters.

  7. OpenAI plans additional tools (web browsing, file uploads) and an o1 API roadmap including structured outputs, function calling, developer messages, and API image understanding.

Highlights

o1 is positioned as the first OpenAI-trained model that “thinks before responding,” with reported human-evaluation gains: 34% fewer major mistakes and about 50% faster thinking than o1-preview.
ChatGPT Pro ($200/month) pairs unlimited access with o1 Pro mode, which increases compute for deeper reasoning on difficult tasks.
In the space-data-center demo, o1 handled an intentionally under-specified problem by inferring a missing cooling-panel temperature and producing an area estimate of ~2.42 million square meters.
The chemistry example for o1 Pro mode required searching among many candidate proteins and verifying six domain-specific criteria, finishing in 53 seconds in the demo.
