ChatGPT 5 explained in 7 minutes
Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
GPT-5 is presented as the biggest release since ChatGPT’s 2022 launch, with emphasis on routing, context length, and specialized modes.
Briefing
OpenAI’s GPT-5 is being positioned as the biggest leap since ChatGPT launched in 2022—less for flashy demos and more for how it changes day-to-day workflows: routing to the right reasoning level, handling far larger inputs, and enabling richer “modes” like study, deep research, and agent-style actions. The transcript frames GPT-5 as a model that can act like a pocket team of PhD experts, but the practical headline is capacity: GPT-5 can support up to 400,000 tokens of context, while the standard ChatGPT interface is limited to a 32K context window. That mismatch matters for anyone who wants to paste in books, large PDFs, or maintain long-running conversations without constantly trimming content.
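The gap between a 400,000-token model and a 32K interface window is easy to hit without noticing. A minimal pre-check sketch, assuming the common rough heuristic of ~4 characters per token for English prose (real tokenizers give exact counts; the numbers here are illustrative, not OpenAI's):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose.
    A real tokenizer would give exact counts; this is a cheap pre-check."""
    return max(1, len(text) // 4)

def fits_in_window(text: str, window: int = 32_000, reserve: int = 4_000) -> bool:
    """Check whether a pasted document leaves room for a reply.
    `reserve` holds back tokens for the model's answer."""
    return estimate_tokens(text) + reserve <= window

doc = "word " * 40_000  # ~200,000 characters of filler text
print(fits_in_window(doc))                  # too big for a 32K interface window
print(fits_in_window(doc, window=400_000))  # fits the claimed GPT-5 capacity
```

A check like this makes the trimming problem concrete: a book-length paste that comfortably fits the model's claimed capacity can still be silently truncated by the interface's smaller window.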
A key operational feature is the automatic router inside ChatGPT: simpler prompts get routed to faster models, while harder questions route to more powerful options such as “thinking” mode. The transcript argues that OpenAI’s interface hides some of the consequences of that routing—especially when context limits force truncation even though GPT-5 itself can ingest much more. The workaround offered is to use GPT-5 through Vectal.AI, where GPT-5 is claimed to run with the full 400,000-token context window and can be tried for free. The pitch is not just higher limits; it’s also cost and access. The transcript claims Vectal.AI provides “unlimited GPT-5” for roughly a tenth of OpenAI’s $200/month plan.
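The routing behavior described above can be pictured with a toy heuristic. OpenAI has not published its actual router logic, so the thresholds, cue phrases, and model names below are purely illustrative:

```python
def route(prompt: str) -> str:
    """Toy router: short, simple prompts go to a fast model; long prompts or
    prompts with explicit effort cues go to a reasoning model.
    Cue words and the length threshold are hypothetical, not OpenAI's criteria."""
    cues = ("think hard", "step by step", "prove", "analyze")
    wants_reasoning = any(cue in prompt.lower() for cue in cues)
    if wants_reasoning or len(prompt.split()) > 200:
        return "gpt-5-thinking"
    return "gpt-5-fast"

print(route("What's the capital of France?"))          # routed to the fast path
print(route("Think hard: design a database schema."))  # routed to thinking mode
```

This also illustrates why the transcript's "think hard" advice plausibly works: if the router keys on surface signals in the prompt, adding an explicit effort cue is a cheap way to bias it toward the stronger path.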
Beyond context length, the transcript highlights GPT-5’s integration into ChatGPT’s interface options: GPT-5, “GPT-5 thinking,” and “GPT-5 Pro.” It also points to a major update that many users allegedly overlook: GPT-5’s ability to generate custom visual examples and illustrations that can be flipped through like pages, useful for presentations and content creation. In parallel, GPT-5 Pro is demonstrated with a prompt aimed at building a full-stack, interactive 3D web app in Canvas, complete with sliders controlling multiple geometric objects. The generation takes several minutes, and the output is impressive, but it doesn’t fully deliver the intended Canvas app on the first attempt.
The practical advice is to steer routing deliberately. The transcript recommends using “GPT-5 thinking” rather than the default GPT-5, citing benchmark differences: a default path can land on a weaker variant with a low score (44), while explicitly selecting thinking mode yields a stronger “medium” option (68) on an Artificial Analysis benchmark. It also claims that adding phrases like “think hard” in prompts can increase how often thinking mode triggers.
For learning and research, the transcript elevates “study mode” as a fast tutoring workflow, activated via a plus icon before the prompt. It also mentions “agent mode” for small website actions, and “deep research” for browsing dozens of sites and producing detailed reports. Memory and organization features—“add to memory” and “projects” with per-project system prompts—are presented as ways to personalize outputs across conversations.
Still, the transcript ends with a reality check: even with GPT-5’s improvements over GPT-4, the model struggles with complex 3D multi-object Canvas execution, producing an error. The closing suggestion is that for coding tasks, other specialized models may outperform GPT-5, with a call for a follow-up comparison against tools like Claude Code, Grok 4, and Gemini 2.5 Pro.
Cornell Notes
GPT-5 is framed as a major step up from earlier ChatGPT generations, with the biggest practical impact coming from routing, context length, and specialized modes. A central issue is that GPT-5 can handle up to 400,000 tokens, but the standard ChatGPT interface is limited to a 32K context window, which can force truncation for large documents. The transcript recommends using “GPT-5 thinking” (and prompt cues like “think hard”) to steer the router toward stronger reasoning performance, citing benchmark gaps. It also highlights “study mode” for tutoring, “deep research” for multi-site reporting, and “agent mode” for small actions. Finally, it notes that even GPT-5 Pro can hit limits on complex 3D Canvas coding tasks, suggesting other coding-focused models may be better in some cases.
- What’s the biggest practical bottleneck mentioned for GPT-5 inside standard ChatGPT, and why does it matter?
- How does the “automatic router” change what model actually runs, and how can users influence it?
- What benchmark-based guidance is given for choosing between default GPT-5 and GPT-5 thinking?
- Which ChatGPT modes are highlighted for learning and research, and how are they activated?
- What workflow features help personalize outputs across conversations?
- What limitation appears in the GPT-5 Pro Canvas demo, and what conclusion is drawn from it?
Review Questions
- How does the 32K context window limitation conflict with GPT-5’s claimed 400,000-token capability, and what workaround is suggested?
- What specific steps does the transcript recommend to increase the chance of using thinking mode, and why?
- Which GPT-5 modes are described as best for tutoring, multi-site research, and small automated actions—and how are they triggered?
Key Points
1. GPT-5 is presented as the biggest release since ChatGPT’s 2022 launch, with emphasis on routing, context length, and specialized modes.
2. GPT-5’s claimed 400,000-token context capability conflicts with a 32K context limit in standard ChatGPT, affecting large-document and long-chat use cases.
3. ChatGPT’s automatic router selects faster models for simple prompts and stronger reasoning modes for complex requests; users can steer this by choosing “GPT-5 thinking” and using cues like “think hard.”
4. The transcript cites an Artificial Analysis benchmark where default routing can land on a weaker 44 score, while GPT-5 thinking is described as reaching 68.
5. “Study mode,” “deep research,” and “agent mode” are highlighted as practical tools for learning, multi-site reporting, and small website actions.
6. Memory and organization features, “add to memory” and “projects” with per-project system prompts, are recommended for consistent personalization across conversations.
7. Even GPT-5 Pro can fail on complex 3D multi-object Canvas execution, suggesting other coding-focused models may outperform it for certain software tasks.