
The Role of AI in the Future of Work: A Conversation with Jeremy Utley | APQC 2025 Conference

APQC · 6 min read

Based on APQC's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI amplifies existing cognitive biases, so teams must pair AI adoption with metacognitive awareness to avoid faster mediocre decisions.

Briefing

AI’s biggest workplace impact isn’t that it replaces people; it accelerates whatever cognitive habits already exist, then reshapes how knowledge and process work get done. Jeremy Utley argues that technology acts like an amplifier: when teams understand common thinking traps, AI helps them bypass those traps faster; when they don’t, AI can speed up mediocre decisions by locking people onto early, low-quality ideas. A key example is the “Einstellung effect,” the tendency to fixate on an initial solution regardless of quality. With AI, the path to both good and bad ideas gets faster, so the real differentiator becomes metacognitive awareness: knowing that humans gravitate toward quick, easy answers and using AI in ways that counter that pull (for instance, prompting for many ideas while actively resisting premature fixation).

That bias problem connects directly to how organizations should integrate AI into everyday collaboration. Utley describes “knowledge cafe” style sessions—human-led idea-building with rotating participation—then suggests feeding the outputs into AI for expansion, devil’s-advocate critique, or role-based questioning (including asking for missing expert perspectives). The goal isn’t just more brainstorming; it’s creating a feedback loop where communities of practice share use cases and spark new ones. In that environment, AI becomes a tool for widening the range of possibilities rather than narrowing attention to the first acceptable answer.

Utley then ties AI to knowledge management and process management, arguing that generative AI can function as an expanded short-term memory for organizations. Instead of relying on what’s readily accessible in a person’s head or a static wiki, teams can query richer context—helpful when searching for examples, contract language, or prior decisions. He illustrates this with his own experience reading Idea Flow: he had to pause and take notes because he couldn’t reliably recall stories and research later. He also cites Guy Kawasaki’s “Kawasaki GPT,” built from Kawasaki’s books, which can answer questions about his work even when Kawasaki himself doesn’t immediately know the response.

The practical payoff, Utley says, shows up in cycle-time reductions when process knowledge is paired with AI assistance. He offers two examples: a fashion design workflow that reportedly shrank from two months to two minutes for concept-to-sample-to-photo-shoot; and work with the National Park Service, where facilities managers preparing documentation for statements of work and proposals reportedly cut turnaround time from two or three days to about three hours, with one tool prototype built in 45 minutes. Utley estimates the broader impact could save thousands of labor days, and he emphasizes that these gains don’t require technical expertise—just familiarity with the process and basic generative AI skills.

Finally, Utley reframes influence and change management for the AI era. Every individual can access an assistant, expert, coach, and creative partner, so leadership becomes less about authority and more about enabling people to use AI effectively. He warns that “silence” from leaders increases fear and uncertainty, while the real job is clarifying expectations and redefining AI as “augmented I”—work guided and steered by humans with AI as support. His actionable advice is simple: experiment daily (even 15 minutes), ask “Have you tried ChatGPT?” often, and treat use-case discovery as an ongoing practice rather than a one-time rollout.

Cornell Notes

AI’s workplace value hinges on human cognition and organizational habits, not just automation. Utley argues AI amplifies whatever biases teams already have—speeding both strong and weak decisions—so metacognitive awareness becomes essential. He connects this to knowledge management by describing generative AI as an expanded short-term memory that can surface relevant stories, research, contract details, and examples faster than people can recall unaided. He also links AI to process management, citing cases where workflows dropped from months to minutes and from days to hours after process-aware tools were built with minimal technical skill. The influence challenge shifts from top-down authority to enabling individuals to use AI as “augmented I,” with clear expectations and daily experimentation.

Why does AI risk accelerating mediocre decisions, and what is the “Einstellung effect” example meant to show?

Utley describes the “Einstellung effect” (fixating on an early solution regardless of quality). With AI, the speed of idea generation increases, but so does the speed of locking onto the first acceptable answer. If people don’t recognize their own tendency to grab a rapid, easy response, AI can push them toward mediocre ideas faster. The practical takeaway is that teams must pair AI use with awareness and countermeasures, such as generating many options while resisting premature fixation.

How can organizations use AI without losing the benefits of human collaboration and critique?

Utley points to “knowledge cafe” sessions where people rotate, build on a shared problem, and generate many ideas. Then AI can be used to expand the idea set, play devil’s advocate, or adopt roles that stress critique—such as acting like a brutally honest reviewer or asking for missing expert perspectives. The emphasis stays on community-of-practice learning: sharing use cases so participants realize what they hadn’t considered and iterate upward.

What does “augmented I” mean, and why does Utley think redefining AI this way matters?

Instead of treating AI as a faceless, autonomous force, Utley frames it as “augmented I”—the human still steers, guides, mentors, and provides direction, while AI supplies support. He argues that future work will increasingly require people to justify whether they used AI appropriately, similar to how a radiologist can’t credibly claim they relied only on unaided judgment. This reframing shifts expectations from “Did you use AI?” to “How did you use AI to improve the quality of your work while staying accountable?”

How does generative AI improve knowledge management in concrete terms?

Utley argues generative AI can act like a broader, more reliable short-term memory than humans can maintain. When people search for examples—sales tactics, negotiation language, or prior contract details—they’re limited by what they can retrieve. AI can surface more relevant context quickly, including proprietary stories and research that aren’t always top-of-mind. He illustrates this with his own experience needing to pause his audiobook to take notes, and with Guy Kawasaki’s “Kawasaki GPT,” which can answer questions about Kawasaki’s books even when Kawasaki doesn’t immediately know.

What evidence is offered that AI can transform process management, and what’s the common thread?

Utley cites a fashion design workflow reportedly shrinking from two months to two minutes for concept-to-sample-to-photo-shoot. He also describes National Park Service facilities management: preparing documentation for statements of work and proposals reportedly fell from two or three days to about three hours after training, and one prototype tool was built in 45 minutes by a non-technical staff member. The common thread is process knowledge plus AI augmentation: people closest to the workflow use AI to generate drafts, structure documentation, and reduce laborious steps.

What should leaders do to reduce fear and uncertainty during AI adoption?

Utley argues that leader silence is the most dangerous tactic because it leaves people in limbo. Instead, leaders should clarify what can and can’t be done and what’s expected. He also stresses that AI isn’t “taking jobs” in the abstract; rather, humans who are adept with AI will replace humans who aren’t, so the organization’s responsibility is to enable skill-building and experimentation.

Review Questions

  1. How does recognizing cognitive biases change the way teams should prompt and evaluate AI-generated ideas?
  2. What practical steps does Utley recommend for individuals to develop AI use cases, and why does he emphasize experimentation over a fixed playbook?
  3. In Utley’s framework, how do knowledge management and process management reinforce each other when AI is used as “augmented I”?

Key Points

  1. AI amplifies existing cognitive biases, so teams must pair AI adoption with metacognitive awareness to avoid faster mediocre decisions.

  2. Generating many ideas with AI works best when humans actively resist early fixation and quick, easy answers.

  3. Communities of practice (like knowledge cafe sessions) can use AI for expansion, critique, and role-based questioning to improve idea quality.

  4. Generative AI can function as an expanded short-term memory, helping teams retrieve relevant stories, research, and contract examples more reliably than unaided recall.

  5. Process management improves when people closest to the workflow use AI to draft and structure documentation, often without advanced technical skills.

  6. Leader silence increases uncertainty; clear expectations and training reduce fear and accelerate effective adoption.

  7. AI should be framed as “augmented I,” where humans steer and remain accountable for the work product.

Highlights

  • AI can speed up both great and mediocre ideas; without bias awareness, teams may lock onto early solutions faster.
  • “Augmented I” reframes AI as human-guided augmentation rather than an autonomous authority, changing how work quality and accountability are judged.
  • Knowledge cafe-style collaboration can be strengthened by using AI for devil’s-advocate critique and missing-expert prompts.
  • Process workflows can collapse dramatically, from months to minutes and days to hours, when AI is applied to process-aware documentation tasks.
  • Utley argues that leader silence is more dangerous than the technology itself because it leaves people uncertain about expectations.

Mentioned

  • Jeremy Utley
  • Guy Kawasaki
  • Abraham Luchins
  • Edith Luchins
  • Adam Rhymer