
Open Source GPT-4 Models Around the Corner - Will OpenAI Release GPT-5?

MattVidPro · 5 min read

Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

“GPT 4.5 turbo” chatter grew from alleged labeling differences across ChatGPT platforms, but the GPT-4.5 leak narrative was publicly dismissed and the behavior reportedly wasn’t consistently reproducible.

Briefing

Rumors about GPT-4.5 are colliding with a fast-moving open-source push—suggesting the gap between closed and open AI models could narrow sharply by 2024. Early chatter began with alleged “GPT 4.5 leaks,” then intensified when ChatGPT started reportedly referring to itself as “GPT 4.5 turbo,” including claims of different behavior between the web version and the Android app. The CEO of OpenAI later publicly dismissed the leak as not legit, and attempts to reproduce the “GPT 4.5 turbo” behavior reportedly failed, with the most plausible explanation offered being a simple backend prompt/name mismatch that was later corrected.

Even with uncertainty around GPT-4.5, the broader market pressure is clear: open-source developers are making credible performance claims with smaller models that can run locally. OpenChat introduced OpenChat 3.5, billed as the "world's best open source 7B" model, emphasizing coding performance and showing benchmark results where it lands on par with or slightly above free ChatGPT in some tests while outperforming Grok in others. The pitch is as practical as it is technical: 7 billion parameters means faster local inference, and the model is positioned as fully open source, with user controls such as temperature and system prompts.
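The "user controls" angle can be made concrete with a minimal sketch. Many local 7B runners expose an OpenAI-compatible chat endpoint; the snippet below builds such a request, surfacing the two controls mentioned above (system prompt and temperature). The endpoint URL and model identifier are assumptions for illustration, not details confirmed by the video.

```python
import json

# Hypothetical local endpoint and model name -- adjust for your own setup.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"
MODEL_NAME = "openchat-3.5"  # assumed identifier, not confirmed by the video

def build_chat_request(system_prompt: str, user_message: str,
                       temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat payload exposing the two user controls
    discussed above: the system prompt and the sampling temperature."""
    return {
        "model": MODEL_NAME,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request(
    system_prompt="You are a concise coding assistant.",
    user_message="Write a Python one-liner that reverses a string.",
    temperature=0.2,  # lower temperature -> more deterministic code output
)
print(json.dumps(payload, indent=2))
# The payload would then be POSTed to LOCAL_ENDPOINT with any HTTP client.
```

For a coding-focused model like the one described, a low temperature is the usual choice, since code generation benefits from deterministic sampling more than creative text does.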

Mistral AI’s CEO added to the momentum by saying Mistral will release an open-source “GPT-4 level” model in 2024. That claim, paired with OpenChat’s results and the broader ecosystem of models built on Mistral-style architectures, frames a looming competitive deadline: if open-source models keep closing the quality gap, OpenAI may need a major step beyond GPT-4 to stay ahead.

Hardware and multimodal tools are also accelerating the pace. A new “AI-first” server approach is described as burning Transformer architecture into chips to optimize inference, with claims that such systems can deliver far more throughput than conventional GPU setups for Transformer workloads—potentially lowering cost and enabling larger models at similar speeds. On the software side, Microsoft teamed up with Suno AI so Bing Chat and Microsoft Copilot can generate full songs from prompts, expanding access through a Copilot plugin (availability appears limited at first). Stable Audio, from Stability AI, released a new beta model for Pro users, with outputs around 45 seconds and longer generations promised.

Finally, video generation is getting more usable. Domo AI is highlighted for style-changing “video-to-video” transformations, with examples spanning anime and pixel-art aesthetics that keep facial and motion consistency better than typical style transfer. The overall takeaway is a market where model quality, local deployment, and new creative workflows are advancing simultaneously—while the next OpenAI milestone (GPT-5 versus GPT-4.5) remains the biggest unanswered question, now under sharper pressure from open-source competitors and AI-first infrastructure.

Cornell Notes

ChatGPT’s reported “GPT 4.5 turbo” labeling sparked speculation, but the GPT-4.5 leak narrative was publicly dismissed and the behavior reportedly couldn’t be reliably reproduced. Meanwhile, open-source models are making performance claims that matter because they run on modest hardware: OpenChat 3.5 is a 7B model focused on coding, showing competitive benchmark results and offering full open-source access. Mistral AI’s CEO also promised an open-source “GPT-4 level” model in 2024, raising the stakes for OpenAI’s next release. Beyond text models, AI-first hardware for Transformer inference, music generation via Suno AI in Microsoft Copilot, and video style transfer via Domo AI all point to faster, cheaper, and more creative AI workflows.

Why did “GPT 4.5 turbo” become a big deal, and what evidence undermined it?

The label allegedly appeared in ChatGPT, with claims that the web and Android apps reported different model names. That fueled ideas that GPT-4.5 was already running in the background. However, the GPT-4.5 leak was later dismissed publicly by OpenAI’s CEO, and the “GPT 4.5 turbo” behavior reportedly wasn’t reproducible—suggesting it may have been a corrected backend prompt/name issue rather than a new model rollout.

What makes OpenChat 3.5’s pitch different from many open-source releases?

It targets practical deployment and a specific strength: coding performance. The model is “OpenChat 3.5” at 7 billion parameters, which is small enough to run locally on many machines and fast enough to be usable. Benchmark comparisons are presented as competitive—on some tests it matches or slightly beats free ChatGPT 3.5, and it’s shown as beating Grok in human evaluation and several benchmark categories.

How does Mistral AI’s 2024 promise change the competitive landscape?

Mistral AI’s CEO said Mistral will release an open-source GPT-4-level model in 2024. Combined with OpenChat’s results and the ecosystem of models built on Mistral architectures, the implication is that open-source quality could rise quickly. That increases pressure on OpenAI to deliver a substantial upgrade (potentially GPT-5) rather than a minor iteration.

What does “AI-first hardware” mean here, and why does it matter for inference?

Instead of relying on general-purpose GPUs, the described approach hardwires Transformer inference into chips/servers. The claim is that this specialization improves throughput and reduces cost for Transformer workloads, making it feasible to run larger models at similar speeds. The software stack is described as open source, which could speed adoption.

How are music and audio tools expanding access across major platforms?

Microsoft partnered with Suno AI so Bing Chat and Microsoft Copilot can generate songs from prompts via a Copilot plugin. Access appears staged—some users don’t see the plugin yet. Stability AI’s Stable Audio also released a beta model, but it’s limited to Pro users and currently outputs shorter clips (~45 seconds), with longer generation expected.

What’s notable about Domo AI’s video style transfer examples?

Domo AI is presented as changing the artistic style of existing videos while keeping motion and identity more consistent than typical style transfer. Examples include anime styles with consistent facial features and hair motion, plus pixel-art transformations that are described as unusually convincing for AI video-to-video, even if artifacts and imperfect pixel fidelity remain.

Review Questions

  1. What alternative explanation is offered for “GPT 4.5 turbo” appearing in ChatGPT without confirming a real model upgrade?
  2. Which open-source model is highlighted as a 7B coding-focused option, and what benchmark comparisons are used to support its performance claim?
  3. How do AI-first Transformer-focused chips change the cost/speed equation compared with running inference on standard GPUs?

Key Points

  1. “GPT 4.5 turbo” chatter grew from alleged labeling differences across ChatGPT platforms, but the GPT-4.5 leak narrative was publicly dismissed and the behavior reportedly wasn’t consistently reproducible.
  2. OpenChat 3.5 is positioned as a fully open-source 7B model focused on coding performance, with benchmark comparisons suggesting it can match or beat free ChatGPT 3.5 on some tests and outperform Grok in others.
  3. Mistral AI’s CEO claims Mistral will release an open-source “GPT-4 level” model in 2024, intensifying pressure on OpenAI’s next major release.
  4. AI-first hardware approaches hardwire Transformer inference into chips/servers to improve speed and reduce cost, potentially enabling larger models at similar throughput.
  5. Microsoft’s Suno AI integration brings prompt-to-song generation into Bing Chat and Microsoft Copilot via a plugin, while Stable Audio’s beta is limited to Pro users and currently generates shorter clips.
  6. Domo AI is highlighted for video-to-video style changes (anime, pixel art) that keep motion and facial consistency more convincingly than earlier style transfer methods.

Highlights

ChatGPT’s “GPT 4.5 turbo” labeling sparked speculation, but the GPT-4.5 leak was dismissed and the effect reportedly couldn’t be reliably reproduced—pointing toward a likely backend naming/prompt issue rather than a confirmed model swap.
OpenChat 3.5 markets a 7B open-source model as “world’s best” for coding, emphasizing local, fast deployment alongside benchmark competitiveness.
Mistral AI’s promise of an open-source “GPT-4 level” model in 2024 raises the stakes for OpenAI’s next step.
AI-first Transformer inference hardware is framed as a cost-and-speed unlock, potentially changing what can be run efficiently.
Domo AI’s video style transfer examples—especially anime and pixel art—are presented as unusually consistent for real-world video transformations.
