
How ChatGPT Slowly Destroys Your Brain

Justin Sung · 5 min read

Based on Justin Sung's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The MIT EEG study reported lower brain activity, weaker connectivity, and reduced engagement for people writing with an LLM compared with search-only and brain-only groups.

Briefing

ChatGPT and other large language models can weaken learning by nudging people into “cognitive bypassing”—skipping the mental effort that normally turns information into memory and expertise. A new MIT paper, “Your Brain on ChatGPT,” reports measurable brain differences and worse learning outcomes for users who rely on an LLM compared with people who either use search without AI or learn unaided. The findings matter because they suggest the damage may linger even after AI use stops, turning a convenience tool into a long-term liability for skill-building.

In the MIT study, participants were split into three groups: one wrote essays using only an LLM (described as a large language model such as ChatGPT, DeepSeek, or Gemini), another could use any website but not AI, and a third relied on their own knowledge. EEG monitoring showed the LLM group had significantly lower brain activity, weaker connectivity, and reduced engagement. Performance tests backed those signals: the LLM group had poorer information recall and produced lower-quality, more generic essays.

The most concerning result came after the AI was removed. Even when the LLM group stopped using the tool, their EEG patterns did not bounce back to match the other groups, implying a residual negative effect from prior reliance. The broader takeaway drawn from this and other emerging research is that heavier AI use correlates with lower critical thinking and weaker learning ability—so the threat may not be job displacement by AI, but people becoming less capable at the human skills employers still require.

The argument then shifts from correlation to mechanism. Learning, as described here, depends on “information processing”: effortful mental work that organizes incoming information, compares it to existing knowledge, and builds a schema—an internal structure that supports memory and deep expertise. Traditional sources like books, lectures, and search force that processing because the information arrives in forms that must be actively organized. Chatbots, by contrast, can make answers feel easier and more intuitive, encouraging users to accept the output without doing the hard work. That creates an “illusion of learning,” where comprehension of an explanation is mistaken for retention and usable understanding.

The harm compounds in two ways. First, repeated bypassing prevents the brain from developing the habit of processing difficult material; when a new topic becomes overwhelming, the person remains dependent on AI to reduce the burden. Second, LLMs can hallucinate—generating plausible text that may be false. Without domain expertise, users often cannot detect errors, meaning they may internalize incorrect information while believing they are learning.

A key irony is that AI can make expertise more visible. Generic answers are easy to obtain, but deeper, contextual problem-solving still requires real understanding—especially when questions are nuanced or when public knowledge is incomplete. The transcript illustrates this with a programmer/data scientist who spent weeks prompting ChatGPT about building a dashboard, yet struggled to articulate his own approach until a discussion forced him to organize the strategy in his head.

The proposed solution is not anti-AI. Instead, AI should act as an assistant that saves time on non-core steps: getting a high-level overview, surfacing perspectives, or helping locate resources. The learning “effort” should remain with the user—challenging gaps, interrogating inconsistencies, and then moving to textbooks or journal articles to build durable expertise. The central warning is straightforward: if someone tries to escape mental effort, they may escape the very training needed to stay competitive as AI raises expectations in the workplace.

Cornell Notes

Reliance on ChatGPT-style tools can reduce the mental effort required for learning, leading to weaker memory, lower engagement, and more generic output. An MIT EEG study (“Your Brain on ChatGPT”) compared LLM-only writing, search-only writing, and brain-only writing; the LLM group showed lower brain activity/connectivity and worse recall and essay quality. Even after stopping AI, EEG measures did not fully return to baseline, suggesting lingering effects. The mechanism offered is “cognitive bypassing” and an “illusion of learning”: users may understand explanations without doing the processing that builds schemas and expertise. Because LLMs can hallucinate, overreliance can also embed errors when users lack the expertise to verify answers.

What did the MIT EEG study find when people used an LLM to write essays?

Participants were divided into three groups: an LLM-only group (using tools like ChatGPT/DeepSeek/Gemini), a search-engine group (any websites allowed, but AI forbidden), and a brain-only group. EEG results showed the LLM group had significantly lower brain activity, weaker brain connectivity, and lower engagement. On learning/performance measures, the LLM group had worse information recall and produced poorer-quality, more generic essays.

Why is the study’s “residual effect” result especially important?

After the LLM group stopped using AI, their EEG levels still did not return to match the other groups. That implies the negative impact of AI reliance may persist beyond the period of active use, not just during the interaction with the model.

What learning mechanism is described as being bypassed by chatbot use?

Learning is framed as requiring effortful information processing: organizing incoming information, comparing it to existing knowledge, connecting ideas to a big picture, and building a schema. When AI provides organized explanations, users can skip that effortful middle step. The result is an “illusion of learning,” where understanding what was read or generated feels like learning, even though memory and usable expertise may not form.

How does hallucination increase risk for learners?

Hallucination means LLMs generate text that may be untrue. If a learner lacks domain expertise, they may not know what to verify, so incorrect information can be accepted as correct. The transcript claims that after deep learning or complex problem-solving with an LLM, it’s “almost certain” some learned content is wrong because the model isn’t grounded in a universal truth.

What does the transcript suggest is a safer way to use AI while still building expertise?

Use AI as an assistant for tasks that don’t replace core processing: request big-picture summaries, find resources, or use it as a sounding board for perspectives. Then do the hard part—interrogate gaps and inconsistencies, and follow up with deeper study (journal articles, textbooks) to build ownership and retention in the brain.

Why does the transcript argue AI can make expertise more obvious rather than less important?

AI often produces middle-of-the-road, mainstream answers quickly. When someone lacks expertise, they may get generic outputs that don’t solve nuanced, contextual problems. When someone has expertise, they can craft better questions, check answers, and push for higher-quality results—so the difference between real understanding and surface-level output becomes clearer.

Review Questions

  1. How does “cognitive bypassing” differ from simply using AI to save time?
  2. What evidence from the MIT EEG study supports claims about engagement and memory differences between LLM users and non-LLM users?
  3. Why does hallucination pose a bigger threat to novices than to experts in the transcript’s framework?

Key Points

  1. The MIT EEG study reported lower brain activity, weaker connectivity, and reduced engagement for people writing with an LLM compared with search-only and brain-only groups.

  2. LLM-only users showed worse information recall and produced lower-quality, more generic essays than the other groups.

  3. EEG measures did not fully normalize after stopping AI use, suggesting lingering negative effects from prior reliance.

  4. Chatbots can create an “illusion of learning” by making explanations feel understandable without the effortful processing that builds schemas and durable expertise.

  5. Repeated AI use can prevent learners from developing the habit of processing difficult material independently, increasing long-term dependence.

  6. Hallucinations can cause learners to internalize incorrect information when they lack the expertise to verify outputs.

  7. AI should be used to support non-core steps (overviews, perspectives, resource discovery) while keeping the user responsible for deep critical thinking and follow-up study.

Highlights

EEG monitoring in “Your Brain on ChatGPT” found the LLM group had lower brain activity and weaker connectivity than search-only and brain-only groups.
Even after stopping AI, the LLM group’s brain measures didn’t return to baseline, pointing to residual effects.
The core learning problem is framed as “cognitive bypassing”: skipping the effortful processing that turns information into memory and expertise.
The transcript argues hallucination risk is highest for novices because they can’t reliably detect false but plausible answers.
A practical rule: use AI for overviews and perspectives, then do the hard work—interrogating gaps and studying primary sources—to build real ownership of knowledge.

Topics

  • ChatGPT Learning
  • EEG Study
  • Cognitive Offloading
  • Illusion of Learning
  • Hallucinations
