Founder Fridays: Building AI People Trust with Scott Shumaker, Persona AI & Shivani Sharma, Notion
Based on Notion's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Expressive AI agents built for real human trust hinge on more than smarter models: they require emotionally natural interaction design, careful visual choices, and training that handles the “human weirdness” people notice instantly. Scott Shumaker, co-founder of Persona AI, describes building video agents that can hold engaging conversations and complete useful tasks—without triggering the uncanny, “robocall” feeling that makes users distrust what they’re talking to.
A key design decision is avoiding fully hyperrealistic avatars. The goal isn’t to trick people into believing they’re speaking to a person; it’s to create authenticity while still delivering emotional engagement. Shumaker compares the problem to infuriating robocalls: once users realize they’ve been misled, they feel unheard and the conversation collapses. Persona AI instead trains models to reflect emotions and conversation dynamics—like referencing shared points, noticing what surprised the other person, and responding in ways that feel timely rather than robotic.
That attention to human intuition shows up in a concrete example from Persona AI’s development. When Shumaker’s co-founder tested an early agent, a question about his background led him to mention relocating because of the Palisades fire, and the agent simply carried on as if everything were normal. Its “normal AI” flow missed the human reaction: a person would pause and ask what happened. Persona AI responded by building targeted training so the agent can follow up appropriately when users share emotionally salient context.
The hardest part of making interactions compelling is balancing what the agent needs to accomplish with what the human wants in the moment. Shumaker frames this as a subtle timing and behavior problem: the agent must stay on task while also matching emotional responses at the right moments, so the interaction feels natural rather than scripted.
Persona AI’s approach also borrows from game design. Shumaker argues that great games succeed through a combined “alchemy” of art, sound, design, and technology—not a single feature. Persona AI applies that same multi-disciplinary thinking to AI: expressive video generation, autonomous conversation, and immersive interactivity. The company builds teams with film and game backgrounds and runs R&D on how AI drives visuals, not just how it talks.
On the technical and founder side, Shumaker warns against overinvesting in latency plumbing or model training, because frontier progress tends to improve those “for free.” What stays hard is the product goal: reducing how much a human must stay in the loop and expanding how autonomous agents can be while still delivering increasing value. He also emphasizes that AI adoption cycles are compressing—enterprises are moving from multi-year deals toward shorter contracts—so founders should avoid giant bets that may become obsolete before launch.
Scaling lessons from Google, Credit Karma, and Microsoft shape how he runs Persona AI: founders must learn delegation without losing visibility, adjust expectations as teams grow, and make hard decisions early. In the end, Shumaker’s advice is to move faster and narrow scope, because technical moats won’t last as long as they used to; durability may shift toward brand, network effects, customer relationships, and proprietary data. The overarching message: trust and engagement are engineered outcomes, not byproducts of model capability.
Cornell Notes
Persona AI aims to make expressive video agents feel compelling and trustworthy by engineering for human expectations, not just better language. Scott Shumaker highlights that hyperrealistic avatars can backfire by implying deception, so the product favors authenticity while training models to reflect emotions, timing, and conversation dynamics. A specific failure—an agent continuing “normally” after a user mentioned relocating due to the Palisades fire—led to targeted training so the agent can respond with the kind of follow-up a human would ask. Shumaker also argues that frontier model improvements reduce the payoff of heavy latency and model-training bets, while the enduring challenge is reducing how much a human must stay in the loop and expanding autonomy safely. Compressed adoption timelines mean founders should shorten horizons and narrow scope to ship faster.
Why does Persona AI avoid fully hyperrealistic avatars, and how does that connect to trust?
What does the Palisades fire example reveal about “human intuition” in AI conversations?
What makes AI interactions “compelling” in Persona AI’s view?
How does game design thinking influence Persona AI’s product strategy?
What does Shumaker say is worth investing in versus what improves “for free” at the frontier?
How do compressed timelines change how founders should prioritize problems?
Review Questions
- What trust risks arise from hyperrealistic avatars, and what alternative does Persona AI use to maintain authenticity?
- In the Palisades fire example, what specific conversational behavior was missing, and how did that inform training changes?
- How does Shumaker define the enduring hard problem in AI products if latency and model training improve quickly?
Key Points
1. Persona AI prioritizes trust by avoiding fully hyperrealistic avatars that could mislead users, aiming instead for authentic emotional engagement.
2. Compelling agent conversations depend on timing and behavior that balance task completion with the user’s emotional interests.
3. Targeted training can correct “human intuition” failures, such as adding follow-up questions when users share emotionally salient context (e.g., relocating due to the Palisades fire).
4. Game design principles—integrating art, interaction, and technology—inform Persona AI’s approach to expressive video agents and immersion.
5. Frontier progress reduces the payoff of heavy investment in latency plumbing and model training, but product-level autonomy and reducing how much a human must stay in the loop remain hard.
6. AI adoption cycles are compressing, so founders should shorten horizons and avoid large multi-year bets that may become obsolete before launch.
7. Scaling engineering organizations requires delegation with visibility, expectation adjustments as headcount grows, and early hard decision-making.