
ChatGPT Can Now Call the Cops, but 'Wait till 2100 for Full Job Impact' - Altman

AI Explained · 5 min read

Based on AI Explained's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

OpenAI’s age-assessment system defaults to an under-18 experience when age confidence is low or information is incomplete.

Briefing

OpenAI is rolling out age-assessment features that can restrict adult capabilities for users it believes may be under 18—and in extreme cases, route certain conversations to parents first and then law enforcement. The key operational rule is a conservative default: if the system lacks confidence or has incomplete age information, it will treat the user as under 18 and offer adults a way to verify their age to unlock adult features. Within two weeks, parental controls are expected to include tools like blackout hours that prevent teen access to ChatGPT during set times. If a user appears to be in acute distress, the workflow prioritizes notifying parents before any escalation to authorities.

The announcement matters because it turns conversational AI into a quasi-safety system with real-world consequences. It also raises the central question of accuracy: flagging the wrong conversations could harm users and families, while failing to flag the right ones could undermine the stated safety goal. The transcript flags a further complication—how OpenAI would respond to legal demands from foreign governments with different standards. If regulators require notification or data handling in specific scenarios, companies may face pressure to comply, potentially conflicting with OpenAI’s intended privacy posture.

Alongside the child-safety changes, OpenAI is pursuing stronger privacy and “privilege” protections for adult users’ AI interactions, positioning them as comparable to conversations with professionals like doctors or lawyers. The stated rationale is that people increasingly use AI for sensitive questions and private concerns. The transcript also cites a usage breakdown (for the web version) that frames what adults are doing with ChatGPT: only 4.2% use it for coding, while 10% ask to be taught or tutored, and 5.7% seek fitness, beauty, self-care, or health advice. A notable share, about 4%, spends time asking about the model itself, including questions about consciousness or how it works.

That privacy push has a second-order effect: it could raise compliance burdens for startups and open-source projects if new laws require layered protections for every AI conversation. The transcript expresses skepticism about whether the policy will stay narrowly tailored, warning about regulatory capture and broader legal requirements that effectively “raise the bar” for smaller developers.

Another notable change targets flirtation. According to the transcript, ChatGPT will no longer refuse to flirt when users ask it to. If the system believes the user is an adult (or the user provides ID proving adulthood), it may also help write fictional stories involving extreme flirtation and self-caused tragedy, an expansion that shifts the boundary from refusal to conditional compliance.

Finally, the transcript connects these product and policy moves to a wider debate about AI’s job impact. It recalls Sam Altman’s earlier private claim that up to 70% of jobs could be eliminated, then contrasts it with later public framing suggesting the full effects might play out toward the end of the century. The discussion broadens into how AI capabilities evolve, including references to research on why language models “hallucinate” and how classification-style training can force confident outputs even when models should say “I don’t know.” The throughline is uncertainty: safety systems, privacy rules, and labor forecasts all hinge on whether models and regulators can be trusted to implement and govern these changes correctly.
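To make the training-incentive point concrete, here is a minimal, hypothetical sketch (not from the transcript or the cited research) of why accuracy-only grading can push a model toward confident guessing: if a wrong answer and an “I don’t know” both score zero, any nonzero chance of guessing correctly makes guessing the higher-expected-score strategy, whereas a penalty for confident wrong answers makes abstention rational below a confidence threshold.

```python
# Hypothetical illustration: expected score of guessing vs. abstaining
# under two grading schemes. Not from the transcript or the cited paper.

def expected_score(p_correct: float, guess: bool, wrong_penalty: float) -> float:
    """Expected score for one question.

    p_correct:     probability the model's best guess is right.
    guess:         True to answer; False to say "I don't know" (always scores 0).
    wrong_penalty: points lost for a confident wrong answer.
    """
    if not guess:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

for p in (0.1, 0.3, 0.5):
    accuracy_only = expected_score(p, guess=True, wrong_penalty=0.0)
    penalized = expected_score(p, guess=True, wrong_penalty=1.0)
    print(f"p={p:.1f}  accuracy-only: guess={accuracy_only:+.2f} vs abstain=0.00 | "
          f"with penalty: guess={penalized:+.2f} vs abstain=0.00")

# Under accuracy-only grading, guessing beats abstaining for any p > 0,
# so a model trained on that signal learns to answer confidently.
# With a wrong-answer penalty, abstaining wins whenever p < 0.5.
```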

Cornell Notes

OpenAI is introducing age-assessment for ChatGPT, with a conservative default: if the system can’t confidently determine a user’s age, it will treat them as under 18 and limit adult capabilities. Adults can unlock adult features by proving their age. For teen users, parental controls are expected within two weeks, including blackout hours, and acute distress triggers a parent-first notification workflow before any escalation to law enforcement. In parallel, OpenAI is pursuing stronger privacy/privilege protections for sensitive adult conversations, but the transcript warns that such rules could raise compliance barriers for startups and open-source developers. It also notes a behavioral shift: flirtation requests should be honored for users identified as adults, including in certain fictional contexts.

What is the core rule behind OpenAI’s new age-handling approach?

The transcript highlights a “safer route” policy: if OpenAI’s system is not confident about someone’s age or has incomplete information, it defaults to the under-18 experience. Adults can unlock adult capabilities by proving their age. The practical implication is that uncertainty leads to restrictions rather than a guess that the user is an adult.
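As a purely illustrative sketch of that decision rule, the logic reduces to defaulting to the restricted experience whenever confidence is low or data is missing, and lifting restrictions only after explicit verification. The function, threshold, and field names below are hypothetical assumptions, not anything OpenAI has published.

```python
# Hypothetical sketch of the "safer route" default described above.
# The threshold, field names, and verification flag are illustrative
# assumptions, not OpenAI's actual implementation.

from dataclasses import dataclass

@dataclass
class AgeSignal:
    estimated_age: int | None  # None if no estimate is available
    confidence: float          # 0.0 to 1.0
    verified_adult: bool       # user has explicitly proven their age

def experience_mode(signal: AgeSignal, min_confidence: float = 0.9) -> str:
    # Explicit verification unlocks the adult experience.
    if signal.verified_adult:
        return "adult"
    # Uncertainty or missing information defaults to the under-18 experience.
    if signal.estimated_age is None or signal.confidence < min_confidence:
        return "under_18"
    return "adult" if signal.estimated_age >= 18 else "under_18"

print(experience_mode(AgeSignal(None, 0.0, False)))  # under_18: no information
print(experience_mode(AgeSignal(25, 0.4, False)))    # under_18: low confidence
print(experience_mode(AgeSignal(25, 0.4, True)))     # adult: explicitly verified
```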

How will parental involvement and law enforcement escalation work in acute situations?

For teen users showing acute distress, the workflow is described as parent-first: the system flags the situation to parents before any escalation to law enforcement, which happens only “in extreme circumstances” and depends on the nature of the conversation. The transcript emphasizes the importance of getting the targeting right, because false positives could cause harm.

What privacy/privilege protections are being pursued, and why might they be controversial?

OpenAI is aiming to give AI conversations a level of protection comparable to conversations with doctors or lawyers, driven by the idea that people use AI for sensitive, private concerns. The transcript’s concern is that if this becomes law with layered requirements, it could raise the compliance burden for startups and open-source initiatives—potentially acting like a “bar” that makes participation harder for smaller players.

What does the transcript say about how people use ChatGPT on the web?

It cites a usage breakdown: 4.2% use ChatGPT for coding, 10% for being tutored or taught, and 5.7% for fitness, beauty, self-care, or health advice. It also notes that about 4% ask questions about the model itself (e.g., consciousness or how it works), and it claims image creation is less used than translation.

What behavioral change is described regarding flirtation?

ChatGPT is said to comply with flirtation when users ask for it rather than refusing. If the system believes the user is an adult (or the user provides ID proving adulthood), it may also assist with fictional stories involving extreme flirtation and self-caused tragedy. The boundary shifts from refusal to conditional compliance based on age determination.

How does the transcript connect these policy changes to the job-impact debate?

It recalls Sam Altman’s earlier private claim that up to 70% of jobs could be eliminated by AI, paired with a later public framing suggesting the largest job ramifications might fully play out toward the end of this century. The transcript uses this contrast to illustrate how AI leaders’ timelines and predictions can evolve, while also referencing research on hallucinations and the training incentives that can force confident outputs.

Review Questions

  1. What does the “safer route” default imply for users whose age cannot be confidently verified?
  2. Why could stronger privacy/privilege rules create unintended barriers for startups and open-source projects?
  3. How does the transcript describe the conditions under which law enforcement might be contacted?

Key Points

  1. OpenAI’s age-assessment system defaults to an under-18 experience when age confidence is low or information is incomplete.
  2. Adult capabilities are expected to require age verification, with parental controls (including blackout hours) rolling out within two weeks.
  3. In acute distress scenarios, notifications are described as parent-first, with law enforcement escalation only afterward and only in extreme circumstances.
  4. OpenAI is pursuing stronger privacy/privilege protections for sensitive adult AI conversations, but the transcript warns this could raise compliance hurdles for smaller developers.
  5. ChatGPT is expected to comply with flirtation requests, provided the user is identified as an adult (or proves adulthood).
  6. The transcript links safety and privacy changes to broader uncertainty about AI’s societal impact, including evolving predictions about job displacement.
  7. Usage statistics cited for the web version suggest most users are not primarily coding, with tutoring/teaching and health-adjacent queries taking larger shares.

Highlights

The age policy is explicitly conservative: uncertainty about age triggers under-18 restrictions rather than a best guess.
Acute distress handling is framed as a parent-first escalation path, with law enforcement only later and only in extreme cases.
Privacy/privilege protections for adult AI chats are positioned as professional-grade, but could become a regulatory burden for startups.
Flirtation behavior is shifting from refusal to conditional compliance, including in certain fictional contexts for adults.

Topics

  • Age Verification
  • Parental Controls
  • Privacy Privilege
  • Flirtation Policy
  • Job Impact Predictions