
ChatGPT Just got Advanced Memory and it's Creepy... but SO COOL!

MattVidPro · 5 min read

Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT Memory is being rolled out gradually to a limited set of free and Plus users, and it can make responses more personalized across chats.

Briefing

ChatGPT’s new Memory feature is rolling out to a limited slice of free and Plus users, letting the assistant remember personal details and preferences across chats—making responses feel more tailored while also introducing new privacy and “creepiness” concerns. Users can explicitly tell ChatGPT what to remember (for example, “I like concise responses” or “I have a 2-year-old dog named Oscar”), and the system can also update what it remembers over time—such as replacing an outdated note (“waiting for new glasses”) with the updated reality (“has new glasses”). The feature works with both GPT-4 and GPT-3.5, and it includes a “temporary chat” mode that behaves like incognito: the conversation won’t appear in history, won’t use Memory, and won’t be used to train the model.

In practice, Memory management is done through a dedicated memory area where users can review, clear, and update stored items. The transcript shows quick confirmation when a memory is added (a “memory updated” icon), plus the ability to instruct the assistant to forget specific information. However, the rollout feels uneven: updates can be delayed, and sometimes new information appears to be stored without an obvious “memory updated” notification. That inconsistency becomes a key tension—users may not always know when the system is learning, which matters when the assistant starts inferring details from context.

A major demonstration centers on the assistant learning about the user’s YouTube channel. After being prompted to infer details from a channel screenshot and later encouraged to do web research, the assistant begins storing structured facts such as subscriber count, video count, themes, and even personal attributes it hadn’t been explicitly told. The transcript also describes a “creepy” moment where the assistant appears to follow instructions embedded in the channel’s “about” section—delivering a greeting and a subscription prompt as part of its stored knowledge. The result: Memory can become a form of automation, where future chats start with the assistant already aware of the user’s context and goals.

The Memory feature is positioned as a productivity upgrade for creators and frequent users. Once the assistant knows someone’s role and typical tasks, it can generate more relevant outputs—like writing YouTube descriptions tailored to the user’s audience even when the user doesn’t provide full context in the prompt. Still, the transcript raises practical questions: how large Memory can grow before it hits model limits, whether Memory will carry into Custom GPTs, and how much control users will have—especially on mobile.

Finally, the transcript notes that Memory management appears to be desktop-only for reviewing and editing stored items, while the mobile app can still create memories without showing when they’re added or removed. The overall takeaway is a tradeoff: Memory makes ChatGPT more useful and context-aware, but it also increases the need for transparency, user control, and careful consideration of what personal information gets stored—particularly as the feature inches toward more autonomous, life-managing behavior.

Cornell Notes

ChatGPT’s Memory feature is being rolled out to some free and Plus users, enabling the assistant to remember personal preferences and facts across chats. The system can store explicit instructions (like liking concise answers or owning a dog) and can update memories automatically when circumstances change (e.g., replacing “waiting for glasses” with “has new glasses”). A “temporary chat” mode avoids Memory entirely and doesn’t appear in history or train the model. The transcript also highlights friction points: memory updates can be delayed, sometimes happen without clear notifications, and Memory management appears limited to desktop for reviewing and editing. The practical upside is more tailored help—especially for creators—while the downside is reduced visibility into what’s being learned and when.

How does the Memory feature change what ChatGPT can do across separate chats?

Memory lets ChatGPT carry forward details and preferences so later responses start with context already stored. The transcript demonstrates this with explicit preferences (“remember that I like concise responses”) and personal facts (the user’s dog, and later the user’s name). It also shows lifecycle updates: after storing “waiting for new glasses,” the assistant later replaces that with “has new glasses,” so the remembered facts stay current rather than permanently stale.

What controls exist for privacy, and how does “temporary chat” behave?

Temporary chat acts like incognito. When enabled, the conversation won’t appear in history, won’t use Memory, and won’t be used to train the model. That means users can test ideas without feeding personal context into the long-term Memory system.

What does Memory management look like, and what limitations show up?

Users can view and manage stored memories in a dedicated Memory area, including clearing all memories to start fresh. The transcript also notes that memory updates may arrive on a delay and that sometimes the assistant updates memories without showing a clear “memory updated” notification. Additionally, Memory management (reviewing/editing) appears to be desktop-only; the mobile app can still create memories but doesn’t provide the same interface for managing them.

How does Memory affect creator workflows like writing YouTube descriptions?

Once the assistant learns the user’s creator context—such as that the user runs an AI-focused YouTube channel—it can generate outputs tailored to that audience. The transcript shows the assistant producing a succinct YouTube description for a “llama 3” video after being prompted to research and write, even when the user didn’t provide full YouTube-specific context in the prompt.

Why does the transcript describe Memory as “creepy,” and what examples support that?

The “creepy” feeling comes from inference and automation: the assistant stores detailed personal and channel information and may follow embedded instructions from the user’s public profile. The transcript describes a moment where the assistant appears to follow an “about” section instruction to tell the user to say hi and offer a subscription prompt—suggesting Memory can make the assistant feel more like a person who “knows” the user, not just a tool responding to the current message.

Does Memory carry into Custom GPTs, and what uncertainty remains?

The transcript suggests Memory may not carry into Custom GPTs in the tested scenario. It notes that new responses can fall back to GPT-3.5 until a GPT-4 usage limit resets, and at first glance Memory didn’t appear to apply to a Custom GPT. The creator expresses disappointment because combining Custom GPTs with Memory could enable faster, more context-aware drafts, but the exact behavior remains unclear.

Review Questions

  1. What mechanisms in the transcript show that Memory can both add and replace stored information over time?
  2. What evidence suggests Memory updates may not always be transparent to users (e.g., delayed or silent updates)?
  3. How does temporary chat differ from normal chat in terms of history, Memory usage, and training?

Key Points

  1. ChatGPT Memory is being rolled out gradually to a limited set of free and Plus users, and it can make responses more personalized across chats.

  2. Users can explicitly instruct ChatGPT to remember or forget details, and the system can update outdated memories automatically (e.g., glasses appointment status).

  3. Memory works with both GPT-4 and GPT-3.5, while “temporary chat” disables Memory, hides the conversation from history, and avoids training use.

  4. Memory updates can be delayed and sometimes occur without a clear “memory updated” notification, reducing user visibility into what’s being stored.

  5. Memory management appears to be desktop-focused: the mobile app can create memories but doesn’t offer the same interface to review or edit them.

  6. Learning a user’s public creator context (like YouTube channel details) can improve task outputs such as tailored video descriptions.

  7. Unanswered questions remain around Memory size limits, how much it carries into Custom GPTs, and how much control users will have over how aggressively it remembers.

Highlights

  • Temporary chat functions like incognito: it won’t use Memory, won’t show up in history, and won’t be used to train the model.
  • A stored “waiting for new glasses” memory later gets replaced with “has new glasses,” showing Memory can update rather than only accumulate.
  • Memory can infer and store detailed creator facts (subscriber counts, themes, and more), enabling more tailored outputs like YouTube descriptions.
  • The transcript flags a transparency gap: Memory sometimes updates without clearly signaling it, and mobile lacks easy memory management controls.

Topics

Mentioned

  • Matt Pierce
  • Matthew Pierce
  • GPT-4
  • GPT-3.5