
Our Future is WILD! AI Advancements that Get Me EXCITED!

MattVidPro · 5 min read

Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT’s @-mention feature lets users insert specific GPTs into an active conversation so specialized tools can use the full thread context.

Briefing

ChatGPT’s new “bring a GPT into the conversation” feature is a meaningful step toward AI assistants that can borrow specialized expertise on demand—though it still feels clunky in how context and discovery work. Users can type an @-mention to pull a custom GPT into an ongoing chat, letting that GPT operate with the full conversation context. In a test, a general question about snail relatives is answered normally, then “Consensus” (an AI research assistant trained on a large academic corpus) is added to verify the claim with citable studies. The workflow is handy for quick fact-checking, but it has sharp edges: switching back to regular ChatGPT requires manually dismissing the inserted GPT, and the system tends to treat the added GPT as part of itself rather than a truly separate agent.

That “agent separation” issue shows up in another experiment using “Riz GPT,” a dating-focused assistant. After asking Riz GPT for advice, the user exits it and asks ChatGPT what it just told them; ChatGPT initially responds as if it were the same system. Only after explicitly correcting the context does it respond appropriately. For everyday productivity this may not matter much, but it undermines the promise of modular, multi-agent collaboration. Two other usability gaps stand out: multiple GPTs can’t be stacked at once (they don’t combine if added in sequence), and discovery is limited to recent or pinned GPTs—there’s no search across the broader GPT store for a specific capability when it isn’t already pinned.

Beyond ChatGPT, the most concrete “buildable” development is Meta’s release of Code Llama 70b, positioned as a more performant code-generation model available under the same licenses as earlier Code Llama models. The appeal is practical: code is the interface between humans and computers, and stronger open-source coding models could lower the barrier for creating new tools and fine-tuned variants. Access isn’t a simple download—users must submit a request form, and approval is at Meta’s discretion—but the open-source framing is central to the excitement.
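For developers, the practical entry point once access is approved is loading the weights locally. The sketch below uses the Hugging Face Transformers API; the codellama/CodeLlama-70b-Instruct-hf checkpoint name, dtype, and generation settings are illustrative assumptions rather than anything from the video, and a 70B model realistically requires multiple high-memory GPUs or aggressive quantization.

```python
# Minimal sketch: running Code Llama 70b for code generation via
# Hugging Face Transformers. Assumes Meta has approved access; the
# model id and settings below are illustrative, not from the video.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # even at fp16, 70B weights need ~140 GB of GPU memory
    device_map="auto",          # shard layers across all available devices
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```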

On the research side, a video enhancement system described with “guided dynamic filtering” and “iterative feature refinement” targets both resolution and motion blur. In examples, low-resolution, fast-motion footage becomes clearer with motion blur “almost entirely solved,” producing sharper faces and more legible text. The presenter argues this could enable higher-quality slow motion without the usual camera tradeoffs (higher shutter speed and frame rates often reduce resolution or strain camera processing). The same theme—improving what cameras can capture—appears again in comparisons where the AI-enhanced results look closer to real footage than other baselines, especially for text and wheel detail.
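The transcript names the techniques but not the implementation. As a rough illustration of what “iterative feature refinement” means in practice, the sketch below repeatedly updates restoration features from the network’s own previous estimate instead of restoring in a single pass. Every module name, channel count, and step count here is invented for illustration, and the guided dynamic filtering and multi-frame (temporal) components of the actual system are omitted entirely.

```python
# Illustrative-only sketch of iterative feature refinement for joint
# super-resolution + deblurring. This is NOT the paper's architecture;
# it only demonstrates the "refine in a loop" pattern the summary describes.
import torch
import torch.nn as nn

class RefinementBlock(nn.Module):
    """One refinement step: update features given the current RGB estimate."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels + 3, channels, 3, padding=1),  # features + current estimate
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feats, estimate):
        return feats + self.body(torch.cat([feats, estimate], dim=1))  # residual update

class IterativeRestorer(nn.Module):
    def __init__(self, channels: int = 64, steps: int = 4, scale: int = 4):
        super().__init__()
        self.extract = nn.Conv2d(3, channels, 3, padding=1)
        self.refine = RefinementBlock(channels)
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)
        self.upsample = nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False)
        self.steps = steps

    def forward(self, lr_blurry):
        feats = self.extract(lr_blurry)
        estimate = lr_blurry                           # start from the degraded frame
        for _ in range(self.steps):                    # each pass sharpens the last estimate
            feats = self.refine(feats, estimate)
            estimate = lr_blurry + self.to_rgb(feats)  # residual RGB prediction
        return self.upsample(estimate)                 # final super-resolved output

x = torch.rand(1, 3, 64, 64)         # dummy low-res, motion-blurred frame
print(IterativeRestorer()(x).shape)  # torch.Size([1, 3, 256, 256])
```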

Other items range from speculative to experimental: claims that large language models have detectable “neural signatures” that differ between truthful and knowingly dishonest behavior; “Morpheus-1,” a multimodal generative ultrasonic Transformer aimed at inducing lucid dreams via neurostimulation rather than word-based prompting; and the idea that future interfaces could extend beyond screens toward brain-state-driven experiences. Taken together, the video’s throughline is clear: AI is moving from single-chat answers toward tool-using systems, better perception (video), and even direct physiological interaction—while usability and agent boundaries still lag behind the ambition.

Cornell Notes

ChatGPT now lets users insert specific GPTs into an ongoing conversation using an @ mention, enabling targeted capabilities like verification with citable sources. In tests, adding “Consensus” improved reliability by pulling linked studies, but the system often treats inserted GPTs as part of ChatGPT rather than fully separate agents, leading to context confusion. The workflow also has limitations: GPTs can’t be stacked simultaneously, and discovery is restricted to recent or pinned GPTs rather than searching the full store. Outside ChatGPT, Meta’s Code Llama 70b pushes open-source code generation, while a guided video enhancement approach claims to sharpen footage and dramatically reduce motion blur—potentially changing how slow-motion video is captured and upscaled.

How does the @-mention GPT feature change what ChatGPT can do in a single conversation?

It allows a user to bring a chosen GPT into the current thread so that GPT operates with the full conversation context. The transcript’s example starts with a general question answered in ChatGPT’s usual way, then adds “Consensus” to verify the claim. Consensus responds by pulling studies that can be linked and cited, rather than relying only on ChatGPT’s internal knowledge.

What problem appears when switching between an inserted GPT and regular ChatGPT?

The system can blur boundaries between agents. After using “Riz GPT” (a dating assistant) and then asking ChatGPT what it just said, ChatGPT initially answers as if it were the same system. Only after the user corrects the context does it respond properly, suggesting the inserted GPT’s role isn’t always treated as fully separate.

Why do the transcript’s usability complaints matter for multi-agent workflows?

Two limitations reduce the practicality of agent orchestration. First, multiple GPTs can’t be stacked at once—adding several doesn’t combine them, so users must switch sequentially or use separate chats. Second, GPT discovery is limited to recent and pinned GPTs; there’s no full-store search for a specialized GPT when it isn’t already pinned.

What makes Meta’s Code Llama 70b notable in the transcript?

It’s framed as a more performant code-generation model available under the same licenses as earlier Code Llama models. The key point is open-source accessibility, which the transcript argues could accelerate fine-tuned code tools and lower barriers for building software-related AI systems, even though access requires filling out information on Meta’s side.

What does the video enhancement system claim to improve beyond simple upscaling?

It targets both resolution and motion blur. The examples describe low-resolution, fast-motion footage becoming clearer with motion blur “almost entirely solved,” producing sharper details like faces and legible text. The transcript also links this to camera tradeoffs: AI could help achieve cleaner slow motion by reducing the need for extreme shutter-speed and frame-rate constraints.

What is Morpheus-1 aiming to do, and how is it different from typical LLM prompting?

Morpheus-1 is described as a multimodal generative ultrasonic Transformer designed to induce and stabilize lucid dreams. Instead of being prompted with words, it is prompted with brain states and generates ultrasonic holograms for neurostimulation to push the sleeper into a lucid state while already asleep. The transcript adds training details: a 101 million parameter model trained on eight GPUs for two days.

Review Questions

  1. What evidence in the transcript suggests the inserted GPT feature sometimes fails to maintain strict separation between agents?
  2. How do the limitations on stacking GPTs and searching the GPT store affect real-world multi-tool workflows?
  3. Which two video-quality problems does the described enhancement method try to solve simultaneously, and why does that matter for slow-motion capture?

Key Points

  1. ChatGPT’s @-mention feature lets users insert specific GPTs into an active conversation so specialized tools can use the full thread context.
  2. Verification use cases can improve when a specialized GPT (like Consensus) returns information backed by citable studies rather than only internal knowledge.
  3. Inserted GPTs may not behave as fully separate agents, which can cause context confusion when switching back to standard ChatGPT.
  4. The system currently can’t stack multiple GPTs at once and lacks full-store search, limiting flexible agent orchestration.
  5. Meta’s Code Llama 70b is positioned as a stronger open-source code-generation model under the same licensing approach as earlier Code Llama releases.
  6. A guided video enhancement approach claims to increase resolution while dramatically reducing motion blur, potentially changing slow-motion quality tradeoffs.
  7. Morpheus-1 is presented as an ultrasonic, brain-state-driven system for inducing lucid dreams, distinct from text-prompted language models.

Highlights

  • ChatGPT’s new GPT insertion workflow can turn a generic answer into a sourced verification by adding “Consensus” mid-conversation.
  • The transcript’s Riz GPT test suggests the system sometimes treats inserted GPTs as part of ChatGPT, not as fully separate agents.
  • Meta’s Code Llama 70b is framed as a major open-source boost for code generation, with access gated by an application step.
  • The video enhancement method claims motion blur can be “almost entirely solved,” not just reduced through upscaling.
  • Morpheus-1 is described as using ultrasonic neurostimulation driven by brain states to induce lucid dreams.

Topics

  • ChatGPT GPT Insertion
  • Consensus Verification
  • Code Llama 70b
  • Video Super Resolution
  • Lucid Dream Neuromodulation
