Our Future is WILD! AI Advancements that Get Me EXCITED!
Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
ChatGPT’s @-mention feature lets users insert specific GPTs into an active conversation so specialized tools can use the full thread context.
Briefing
ChatGPT’s new “bring a GPT into the conversation” feature is a meaningful step toward AI assistants that can borrow specialized expertise on demand—though it still feels clunky in how context and discovery work. Users can type an @ mention to pull a custom GPT into an ongoing chat, letting that GPT operate with the full conversation context. In a test, a general question about snail relatives is answered normally, then “Consensus” (an AI research assistant trained on a large academic corpus) is added to verify the claim with citable studies. The workflow works well for quick fact-checking, but it has sharp edges: switching back to regular ChatGPT requires manually dismissing the inserted GPT, and the system tends to treat the added GPT as part of itself rather than a truly separate agent.
That “agent separation” issue shows up in another experiment using “Riz GPT,” a dating-focused assistant. After asking Riz GPT for advice, the user exits it and asks ChatGPT what it just told them; ChatGPT initially responds as if it were the same system. Only after explicitly correcting the context does it respond appropriately. For everyday productivity this may not matter much, but it undermines the promise of modular, multi-agent collaboration. Two other usability gaps stand out: multiple GPTs can’t be stacked at once (they don’t combine if added in sequence), and discovery is limited to recent or pinned GPTs—there’s no search across the broader GPT store for a specific capability when it isn’t already pinned.
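The attribution failure described above comes down to how the shared transcript is modeled. A minimal toy sketch (hypothetical; the `Turn` structure and names are illustrative, not OpenAI's actual design): if every turn records its speaker, the base assistant can answer "what did the inserted GPT tell me?", but if turns are flattened into one unattributed stream, the agents blur together.

```python
# Hypothetical sketch of speaker attribution in a shared conversation.
# Structure and names are illustrative only, not OpenAI's implementation.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # who produced the message
    text: str

history: list[Turn] = []

def reply(speaker: str, text: str) -> None:
    history.append(Turn(speaker, text))

reply("user", "Any dating advice?")
reply("RizGPT", "Lead with a genuine compliment.")
reply("user", "ChatGPT, what did RizGPT just tell me?")

# With per-turn attribution, the base assistant can answer correctly:
riz_said = [t.text for t in history if t.speaker == "RizGPT"]
assert riz_said == ["Lead with a genuine compliment."]

# Flattening turns into one unattributed transcript reproduces the
# confusion: the words survive, but not who said them.
flat = " ".join(t.text for t in history)
```

The design choice is the whole point: the confusion in the Riz GPT test behaves as if the system sees something like `flat` rather than speaker-tagged turns.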
Beyond ChatGPT, the most concrete “buildable” development is Meta’s release of Code Llama 70B, positioned as a higher-performing code-generation model available under the same licenses as earlier Code Llama models. The appeal is practical: code is the interface between humans and computers, and stronger open-source coding models could lower the barrier to creating new tools and fine-tuned variants. Access isn’t a simple download—users must submit information and are subject to Meta’s approval—but the open-source framing is central to the excitement.
On the research side, a video enhancement system described with “guided dynamic filtering” and “iterative feature refinement” targets both resolution and motion blur. In examples, low-resolution, fast-motion footage becomes clearer with motion blur “almost entirely solved,” producing sharper faces and more legible text. The presenter argues this could enable higher-quality slow motion without the usual camera tradeoffs (higher shutter speed and frame rates often reduce resolution or strain camera processing). The same theme—improving what cameras can capture—appears again in comparisons where the AI-enhanced results look closer to real footage than other baselines, especially for text and wheel detail.
Other items range from speculative to experimental: claims that large language models have detectable “neural signatures” that differ between truthful and knowingly dishonest behavior; “Morpheus-1,” a multimodal generative ultrasonic Transformer aimed at inducing lucid dreams via neurostimulation rather than word-based prompting; and the idea that future interfaces could extend beyond screens toward brain-state-driven experiences. Taken together, the video’s throughline is clear: AI is moving from single-chat answers toward tool-using systems, better perception (video), and even direct physiological interaction—while usability and agent boundaries still lag behind the ambition.
Cornell Notes
ChatGPT now lets users insert specific GPTs into an ongoing conversation using an @-mention, enabling targeted capabilities like verification with citable sources. In tests, adding “Consensus” improved reliability by pulling linked studies, but the system often treats inserted GPTs as part of ChatGPT rather than fully separate agents, leading to context confusion. The workflow also has limitations: GPTs can’t be stacked simultaneously, and discovery is restricted to recent or pinned GPTs rather than searching the full store. Outside ChatGPT, Meta’s Code Llama 70B pushes open-source code generation, while a guided video enhancement approach claims to sharpen footage and dramatically reduce motion blur—potentially changing how slow-motion video is captured and upscaled.
How does the @-mention GPT feature change what ChatGPT can do in a single conversation?
What problem appears when switching between an inserted GPT and regular ChatGPT?
Why do the transcript’s usability complaints matter for multi-agent workflows?
What makes Meta’s Code Llama 70B notable in the transcript?
What does the video enhancement system claim to improve beyond simple upscaling?
What is Morpheus-1 aiming to do, and how is it different from typical LLM prompting?
Review Questions
- What evidence in the transcript suggests the inserted GPT feature sometimes fails to maintain strict separation between agents?
- How do the limitations on stacking GPTs and searching the GPT store affect real-world multi-tool workflows?
- Which two video-quality problems does the described enhancement method try to solve simultaneously, and why does that matter for slow-motion capture?
Key Points
1. ChatGPT’s @-mention feature lets users insert specific GPTs into an active conversation so specialized tools can use the full thread context.
2. Verification use cases can improve when a specialized GPT (like Consensus) returns information backed by citable studies rather than only internal knowledge.
3. Inserted GPTs may not behave as fully separate agents, which can cause context confusion when switching back to standard ChatGPT.
4. The system currently can’t stack multiple GPTs at once and lacks full-store search, limiting flexible agent orchestration.
5. Meta’s Code Llama 70B is positioned as a stronger open-source code-generation model under the same licensing approach as earlier Code Llama releases.
6. A guided video enhancement approach claims to increase resolution while dramatically reducing motion blur, potentially changing slow-motion quality tradeoffs.
7. Morpheus-1 is presented as an ultrasonic, brain-state-driven system for inducing lucid dreams, distinct from text-prompted language models.