
Hands On Testing! Open AI's New "GPTs" & ChatGPT Update!

MattVidPro · 5 min read

Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT’s update replaces the old GPT-3.5/GPT-4 switching bar with a simpler dropdown toggle and automatic model switching based on the task.

Briefing

ChatGPT’s latest update streamlines model switching and folds more multimodal tools into a single workflow, while OpenAI’s new GPTs feature promises a wave of purpose-built assistants. The most immediate change is interface-level: users no longer juggle a GPT-3.5 vs. GPT-4 selector bar. Instead, a simple dropdown toggle lets ChatGPT automatically switch models for different tasks, including the newly highlighted GPT-4 Turbo (described as the “turbo 128k” variant, referring to its 128k-token context window). Early hands-on testing suggests GPT-4 Turbo is faster than the older GPT-4 experience, though heavy demand can still slow responses inside ChatGPT.

Beyond speed, the update makes prompt iteration and sourcing easier. Hovering over a prior message reveals a drawing/edit icon that lets users retype and resubmit the prompt without starting a fresh chat, which is useful when refining wording. When responses include citations, hovering surfaces the source site’s logo and provides direct links, keeping web research and answer context in the same place. The workflow also supports “fact grounding” behavior: when asked for goldfish facts, the assistant begins by searching the web, and users can adjust instructions (e.g., requesting an answer from the model’s own knowledge rather than a web search).

Multimodal capabilities are treated as first-class features rather than separate experiments. Image generation with DALL·E is integrated directly into ChatGPT, including a workflow where users upload a photo and ask for a DALL·E recreation. During testing, DALL·E generation sometimes fails or stalls under load, but regeneration typically succeeds, producing an example lemon character in a pop-art style. File uploads also work inside the chat: a user uploads a document (a Wikipedia page about Squidward Tentacles), and a custom GPT reads and summarizes it, then applies that content to tasks like negotiation strategies.

OpenAI’s bigger platform shift is “GPTs,” a beta feature for creating custom versions of ChatGPT tailored to specific purposes. The account area includes “My GPTs,” plus a “Create a GPT” option (beta, described as rolling out over the coming weeks). OpenAI also provides a set of pre-made GPTs, some framed as legacy-style (e.g., DALL·E, data analysis, and ChatGPT classic) and others as specialized assistants such as Game Time, The Negotiator, Creative Writing, Tech Support Advisor, Coloring Book Hero, and a meme-focused helper. The transcript emphasizes that the real value may arrive when the community can publish and share GPTs through a future store.

Hands-on tests show how these GPTs behave differently: Game Time explains Monopoly for a child; The Negotiator generates a character-specific negotiation plan using uploaded Squidward material; Coloring Book Hero turns ideas into stylized images and can transform an uploaded image into a coloring-book-like version (with at least one request denied). The Tech Support Advisor offers practical troubleshooting steps for an XLR microphone noise issue, including checks for cable connections, gain levels, phantom power, electrical interference, and potential ground-loop causes. Overall, the update pairs a simpler interface with faster GPT-4 Turbo performance and a growing ecosystem of specialized assistants, positioning ChatGPT as a modular toolset rather than a single generic chatbot.

Cornell Notes

ChatGPT’s update makes model switching simpler and brings more multimodal features into one place. GPT-4 Turbo is highlighted as faster than older GPT-4, though response times can still degrade under heavy demand. Users can edit and resubmit prompts from prior messages, and citations are easier to inspect with direct source links. Integrated DALL·E image generation works alongside vision and file uploads, enabling tasks like recreating an uploaded photo in a new style and summarizing documents inside the same chat. The new GPTs beta adds purpose-built assistants (e.g., Game Time, The Negotiator, Tech Support Advisor), with the expectation that community-made GPTs and a future store will unlock the most useful variations.

What changed in ChatGPT’s interface that affects day-to-day use?

Model selection is simplified: instead of a bar for switching between GPT-3.5 and multiple GPT-4 options, there’s a dropdown toggle between GPT-3.5 and GPT-4. The system can automatically switch models for different use cases. Prompt iteration also improves: hovering over a previous message reveals a drawing/edit icon that lets the user retype and resubmit the prompt without starting a new chat.

How does GPT-4 Turbo performance compare to older GPT-4 in the transcript’s testing?

GPT-4 Turbo is described as generating faster than the older GPT-4 model, with an example story request completing quickly. However, the transcript notes that overall speed inside ChatGPT can still suffer when servers are hammered right after major releases. A separate test in the OpenAI Playground is described as very fast, suggesting prioritization for the playground/API environment.
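The Playground/API speed test described above can be reproduced with the OpenAI Python SDK (v1.x). This is a minimal sketch, not the video’s exact setup: the model identifier (`gpt-4-1106-preview`, the original GPT-4 Turbo API name), the prompt, and the token limit are all illustrative assumptions, and an `OPENAI_API_KEY` environment variable is needed for the call to actually run.

```python
# Sketch: timing a short story request against GPT-4 Turbo via the OpenAI API.
# Model name, prompt, and max_tokens are illustrative assumptions.
import os
import time


def build_request(prompt: str, model: str = "gpt-4-1106-preview") -> dict:
    """Assemble chat-completion parameters for the speed test."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Cap the output so timing reflects latency, not story length.
        "max_tokens": 300,
    }


def run_speed_test(prompt: str) -> tuple[str, float]:
    """Send one request and return (text, elapsed seconds)."""
    from openai import OpenAI  # requires the `openai` package, v1.x

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    start = time.perf_counter()
    response = client.chat.completions.create(**build_request(prompt))
    elapsed = time.perf_counter() - start
    return response.choices[0].message.content, elapsed


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    text, elapsed = run_speed_test("Write a three-sentence story about a goldfish.")
    print(f"{elapsed:.1f}s\n{text}")
```

Comparing `elapsed` across models (e.g., swapping in a plain GPT-4 model name) would give a rough version of the faster-in-the-Playground observation, though server load makes any single timing noisy.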

What evidence is shown that web citations and prompt grounding are more usable now?

When asked for goldfish facts, the assistant begins by searching the web. For cited answers, hovering over the annotated text reveals a small logo of the website the link came from, and the user can click through to the source. The workflow keeps research context and the generated response in the same interface, reducing the need to juggle separate chats.

How are DALL·E image generation and vision integrated into the updated ChatGPT workflow?

DALL·E generation is available directly inside ChatGPT. One test uploads a photo and asks to recreate it with DALL·E, using GPT’s vision capabilities to understand the image and then generating a new image in the same chat. Another test uses citations/fact selection (“take fact number five”) to generate an image. Under load, generation can error or stall, but regeneration typically succeeds.
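Outside ChatGPT, the same DALL·E generation is exposed through the OpenAI Images API, and the "regeneration usually succeeds" behavior maps naturally onto a retry loop. This is a hedged sketch, not the transcript's workflow: the model name (`dall-e-3`), size, retry count, and delay are assumptions, and it needs an `OPENAI_API_KEY` to run.

```python
# Sketch: generating an image with DALL·E via the Images API, retrying on
# transient failures the way the hands-on test re-ran stalled generations.
# Model name, size, and retry settings are illustrative assumptions.
import os
import time


def build_image_request(prompt: str) -> dict:
    """Parameters for one Images API call."""
    return {"model": "dall-e-3", "prompt": prompt, "n": 1, "size": "1024x1024"}


def generate_with_retry(prompt: str, attempts: int = 3, delay: float = 2.0) -> str:
    """Try generation up to `attempts` times; return the image URL."""
    from openai import OpenAI  # requires the `openai` package, v1.x

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    last_error = None
    for _ in range(attempts):
        try:
            result = client.images.generate(**build_image_request(prompt))
            return result.data[0].url  # URL of the generated image
        except Exception as err:  # under heavy load, a retry often succeeds
            last_error = err
            time.sleep(delay)
    raise RuntimeError(f"generation failed after {attempts} attempts") from last_error


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(generate_with_retry("a lemon character in a pop-art style"))
```

A simple fixed delay is enough for occasional stalls; under sustained load, exponential backoff would be the more robust choice.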

What is the GPTs feature, and how do the pre-made GPTs differ from one another?

GPTs are custom versions of ChatGPT tailored to specific purposes, available in beta, with “Create a GPT” described as rolling out over the coming weeks. Pre-made GPTs include both legacy-style options (DALL·E, data analysis, ChatGPT classic) and specialized assistants like Game Time (explains games like Monopoly), The Negotiator (creates character-specific negotiation tactics using uploaded documents), Coloring Book Hero (turns ideas into coloring-book style images), and Tech Support Advisor (gives troubleshooting steps for issues like XLR microphone noise).

How does document upload change what a GPT can do?

Uploaded files can be used as knowledge sources inside a GPT. In the transcript, a Wikipedia document about Squidward Tentacles is uploaded, and The Negotiator uses that content to produce a negotiation strategy tailored to Squidward’s traits (e.g., his love for the arts, short temper, and preference for logic and flattery).

Review Questions

  1. How does the updated ChatGPT interface make it easier to refine prompts without starting new chats?
  2. What practical troubleshooting steps does the Tech Support Advisor suggest for an XLR microphone noise problem?
  3. Why might community-created GPTs be more valuable than the pre-made GPTs listed in the transcript?

Key Points

  1. ChatGPT’s update replaces the old GPT-3.5/GPT-4 switching bar with a simpler dropdown toggle and automatic model switching based on the task.
  2. GPT-4 Turbo is positioned as faster than older GPT-4, but response speed can still drop during periods of high demand.
  3. Hover-based editing lets users retype and resubmit prompts from earlier messages, supporting iterative prompt refinement.
  4. Citations now surface source-site logos on hover with direct clickable links, keeping web research and answers in one workflow.
  5. DALL·E image generation is integrated into ChatGPT, including vision-based workflows that use uploaded images to guide new generations.
  6. File uploads feed into custom GPTs, enabling document-grounded tasks like character-specific negotiation strategies.
  7. The GPTs beta introduces purpose-built assistants (e.g., Game Time, The Negotiator, Tech Support Advisor), with a future community store expected to expand usefulness.

Highlights

Model switching is simplified: a dropdown toggle replaces the older GPT-3.5 vs. GPT-4 bar, and ChatGPT can automatically switch models for tasks.
DALL·E generation is integrated directly into ChatGPT, but heavy traffic can cause errors or loading delays; regeneration usually resolves them.
Custom GPTs can read uploaded documents and apply that content to specialized tasks, demonstrated with a Squidward negotiation strategy grounded in an uploaded Wikipedia page.
The pre-made GPT lineup blends legacy-style tools (DALL·E, data analysis) with specialized assistants like Game Time and Tech Support Advisor, hinting at a larger ecosystem once the community store arrives.
