
Founder Fridays: Building AI People Trust with Scott Shumaker, Persona AI & Shivani Sharma, Notion

Notion · 5 min read

Based on Notion's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Persona AI prioritizes trust by avoiding fully hyperrealistic avatars that could mislead users, aiming instead for authentic emotional engagement.

Briefing

Expressive AI agents built for real human trust hinge on more than smarter models: they require emotionally natural interaction design, careful visual choices, and training that handles the “human weirdness” people notice instantly. Scott Shumaker, co-founder of Persona AI, describes building video agents that can hold engaging conversations and complete useful tasks—without triggering the uncanny, “robocall” feeling that makes users distrust what they’re talking to.

A key design decision is avoiding fully hyperrealistic avatars. The goal isn’t to trick people into believing they’re speaking to a person; it’s to create authenticity while still delivering emotional engagement. Shumaker compares the problem to infuriating robocalls: once users realize they’ve been misled, they feel unheard and the conversation collapses. Persona AI instead trains models to reflect emotions and conversation dynamics—like referencing shared points, noticing what surprised the other person, and responding in ways that feel timely rather than robotic.

That attention to human intuition shows up in a concrete example from Persona AI’s development. When Shumaker’s co-founder tested an early agent and mentioned that he had relocated because of the Palisades fire, the agent carried on with its background questions as if nothing unusual had been said. The agent’s “normal AI” flow missed the human reaction: a person would pause and ask what happened. Persona AI responded by building targeted training so the agent can follow up appropriately when users share emotionally salient context.

The hardest part of making interactions compelling is balancing what the agent needs to accomplish with what the human wants in the moment. Shumaker frames this as a subtle timing and behavior problem: the agent must stay on task while also matching emotional responses at the right moments, so the interaction feels natural rather than scripted.

Persona AI’s approach also borrows from game design. Shumaker argues that great games succeed through a combined “alchemy” of art, sound, design, and technology—not a single feature. Persona AI applies that same multi-disciplinary thinking to AI: expressive video generation, autonomous conversation, and immersive interactivity. The company builds teams with film and game backgrounds and runs R&D on how AI drives visuals, not just how it talks.

On the technical and founder side, Shumaker warns against overinvesting in latency plumbing or model training because frontier progress tends to improve those “for free.” What stays hard is the product goal: reducing human-in-the-loop involvement and making agents more autonomous while still delivering increasing value. He also emphasizes that AI adoption cycles are compressing—enterprises are moving from multi-year deals toward shorter contracts—so founders should avoid giant bets that may become obsolete before launch.

Scaling lessons from Google, Credit Karma, and Microsoft shape how he runs Persona AI: founders must learn delegation without losing visibility, adjust expectations as teams grow, and make hard decisions early. In the end, Shumaker’s advice is to move faster and narrow scope, because technical moats won’t last as long as they used to; durability may shift toward brand, network effects, customer relationships, and proprietary data. The overarching message: trust and engagement are engineered outcomes, not byproducts of model capability.

Cornell Notes

Persona AI aims to make expressive video agents feel compelling and trustworthy by engineering for human expectations, not just better language. Scott Shumaker highlights that hyperrealistic avatars can backfire by implying deception, so the product favors authenticity while training models to reflect emotions, timing, and conversation dynamics. A specific failure—an agent continuing “normally” after a user mentioned relocating due to the Palisades fire—led to targeted training so the agent can respond with the kind of follow-up a human would ask. Shumaker also argues that frontier model improvements reduce the payoff of heavy latency/model-training bets, while the enduring challenge is reducing human-in-the-loop involvement and expanding autonomy safely. Compressed adoption timelines mean founders should shorten horizons and narrow scope to ship faster.

Why does Persona AI avoid fully hyperrealistic avatars, and how does that connect to trust?

The company’s early choice is to avoid avatars that could “trick” users into thinking they’re speaking to a person. Shumaker compares it to robocalls: once people realize they’ve been misled, they become angrier than if they had known from the start. Persona AI instead focuses on emotional authenticity—training models to reflect engagement and emotions in a way that feels like real conversation rather than performance.

What does the Palisades fire example reveal about “human intuition” in AI conversations?

In testing, the co-founder mentioned that he had relocated because of the Palisades fire, and an early agent continued its background questions as if nothing unusual had happened. Shumaker notes that a human would pause and ask what happened. Persona AI used that mismatch as a training signal, adding targeted training so the agent can recognize emotionally salient context and ask appropriate follow-ups.

What makes AI interactions “compelling” in Persona AI’s view?

Compelling interactions require balancing the agent’s goal with the human’s interests in real time. Shumaker describes a subtle tradeoff: the agent must stay on track to accomplish useful tasks while also responding emotionally at the right moments. He frames this as a timing and behavior problem—getting the “feel” right so the interaction doesn’t become robotic even while the agent is working toward its task.

How does game design thinking influence Persona AI’s product strategy?

Shumaker argues that great games are an integrated experience—art, sound, design, and technology combine into an immersive whole. Persona AI applies that same multi-disciplinary approach to AI agents: expressive video generation, autonomous conversation, and interactivity designed to keep users immersed. The team composition reflects this, with members drawn from film and game backgrounds.

What does Shumaker say is worth investing in versus what improves “for free” at the frontier?

He cautions founders not to overinvest in latency plumbing or model training because frontier labs and open-source ecosystems improve those rapidly and often cheaply over time. Persona AI still does latency work because it’s a latency-sensitive app, but the bigger enduring challenge is product-level: reducing human-in-the-loop involvement and expanding safe autonomy so agents deliver increasing value.

How do compressed timelines change how founders should prioritize problems?

Shumaker says prediction horizons are shrinking. Enterprises increasingly sign shorter contracts (often around one year rather than three), so multi-year projects carry higher risk of arriving after the destination has changed. He recommends shorter execution cycles—shipping in a quarter or two—mixing urgent customer needs with smaller “horizon 2/3” bets rather than banking on far-future assumptions.

Review Questions

  1. What trust risks arise from hyperrealistic avatars, and what alternative does Persona AI use to maintain authenticity?
  2. In the Palisades fire example, what specific conversational behavior was missing, and how did that inform training changes?
  3. How does Shumaker define the enduring hard problem in AI products if latency and model training improve quickly?

Key Points

  1. Persona AI prioritizes trust by avoiding fully hyperrealistic avatars that could mislead users, aiming instead for authentic emotional engagement.
  2. Compelling agent conversations depend on timing and behavior that balance task completion with the user’s emotional interests.
  3. Targeted training can correct “human intuition” failures, such as adding follow-up questions when users share emotionally salient context (e.g., relocating due to the Palisades fire).
  4. Game design principles—integrating art, interaction, and technology—inform Persona AI’s approach to expressive video agents and immersion.
  5. Frontier progress reduces the payoff of heavy investment in latency plumbing and model training, but product-level autonomy and reducing human-in-the-loop involvement remain hard.
  6. AI adoption cycles are compressing, so founders should shorten horizons and avoid large multi-year bets that may become obsolete before launch.
  7. Scaling engineering organizations requires delegation with visibility, expectation adjustments as headcount grows, and early hard decision-making.

Highlights

Avoiding hyperrealistic avatars is a trust strategy: Persona AI doesn’t want users to feel “tricked,” like they do with robocalls.
A single conversational mismatch—continuing normally after a user mentioned the Palisades fire—became a training lesson for how humans react to emotionally charged context.
The core technical/product challenge isn’t latency or model training; it’s reducing human-in-the-loop involvement while expanding safe autonomy.
Compressed enterprise timelines push founders toward shorter delivery cycles (quarters) instead of multi-year projects.
Technical moats may decay faster than before, so durability may shift toward brand, relationships, network effects, and proprietary data.

Topics

  • Trust Design
  • Expressive Video Agents
  • Autonomous Conversation
  • Human-in-the-Loop
  • Founder Scaling Lessons

Mentioned

  • Scott Shumaker
  • Shivani Sharma
  • Andy