The Role of AI in the Future of Work: A Conversation with Jeremy Utley | APQC 2025 Conference
Based on APQC's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
AI amplifies existing cognitive biases, so teams must pair AI adoption with metacognitive awareness to avoid faster mediocre decisions.
Briefing
AI’s biggest workplace impact isn’t that it replaces people—it accelerates whatever cognitive habits already exist, then reshapes how knowledge and process work get done. Jeremy Utley argues that technology acts like an amplifier: when teams understand common thinking traps, AI helps them bypass them faster; when they don’t, AI can speed up mediocre decisions by locking people onto early, low-quality ideas. A key example is the “Einstellung effect,” a tendency to fixate on an initial solution regardless of its quality. With AI, the path to both good and bad ideas becomes faster—so the real differentiator is metacognitive awareness: knowing that humans gravitate toward quick, easy answers and using AI in ways that counter that pull (for instance, prompting for many ideas while actively resisting premature fixation).
That bias problem connects directly to how organizations should integrate AI into everyday collaboration. Utley describes “knowledge cafe” style sessions—human-led idea-building with rotating participation—then suggests feeding the outputs into AI for expansion, devil’s-advocate critique, or role-based questioning (including asking for missing expert perspectives). The goal isn’t just more brainstorming; it’s creating a feedback loop where communities of practice share use cases and spark new ones. In that environment, AI becomes a tool for widening the range of possibilities rather than narrowing attention to the first acceptable answer.
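The diverge-then-critique loop described above can be sketched as a short script. Everything here is an illustrative assumption, not from the talk: `ask_llm` is a hypothetical stand-in for whatever chat-model client a team uses (stubbed with a canned response so the flow runs offline), and the prompts and the default of ten ideas are placeholders.

```python
# Sketch of a "generate many, then critique" loop to counter early fixation.
# ask_llm is a hypothetical stand-in for any chat-model call; swap the stub
# below for a real client in practice.

def ask_llm(prompt: str) -> str:
    # Stub: returns a canned string so the control flow is runnable offline.
    return f"[model response to: {prompt[:40]}...]"

def diverge_then_critique(problem: str, n_ideas: int = 10) -> dict:
    # 1. Diverge: ask for many ideas in one pass to resist settling on the first.
    ideas = ask_llm(
        f"List {n_ideas} distinct approaches to: {problem}. "
        "Vary the underlying mechanism, not just the wording."
    )
    # 2. Critique: a devil's-advocate pass over the whole idea list.
    critique = ask_llm(
        f"As a devil's advocate, find the weakest point of each idea:\n{ideas}"
    )
    # 3. Role-based questioning: surface expert perspectives the group lacks.
    missing = ask_llm(
        f"Which expert perspectives are missing from these ideas?\n{ideas}"
    )
    return {"ideas": ideas, "critique": critique, "missing_voices": missing}

result = diverge_then_critique("reduce onboarding time for new analysts")
print(sorted(result))  # → ['critique', 'ideas', 'missing_voices']
```

The point of the structure, per Utley, is that the critique and missing-perspective steps run after a deliberately wide diverge step, rather than refining the first idea that appears.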
Utley then ties AI to knowledge management and process management, arguing that generative AI can function as an expanded short-term memory for organizations. Instead of relying on what’s readily accessible in a person’s head or a static wiki, teams can query richer context—helpful when searching for examples, contract language, or prior decisions. He illustrates this with his own experience reading Idea Flow: he had to pause and take notes because he couldn’t reliably recall stories and research later. He also cites Guy Kawasaki’s “Kawasaki GPT,” built from Kawasaki’s books, which can answer questions about his work even when Kawasaki himself doesn’t immediately recall the answer.
The practical payoff, Utley says, shows up in cycle-time reductions when process knowledge is paired with AI assistance. He offers two examples: a fashion design workflow that reportedly shrank from two months to two minutes for concept-to-sample-to-photo-shoot; and work with the National Park Service, where facilities managers preparing documentation for statements of work and proposals reportedly cut turnaround time from two or three days to about three hours, with one tool prototype built in 45 minutes. Utley estimates the broader impact could save thousands of labor days, and he emphasizes that these gains don’t require technical expertise—just familiarity with the process and basic generative AI skills.
Finally, Utley reframes influence and change management for the AI era. Every individual can access an assistant, expert, coach, and creative partner, so leadership becomes less about authority and more about enabling people to use AI effectively. He warns that “silence” from leaders increases fear and uncertainty, while the real job is clarifying expectations and redefining AI as “augmented I”—work guided and steered by humans with AI as support. His actionable advice is simple: experiment daily (even 15 minutes), ask “Have you tried ChatGPT?” often, and treat use-case discovery as an ongoing practice rather than a one-time rollout.
Cornell Notes
AI’s workplace value hinges on human cognition and organizational habits, not just automation. Utley argues AI amplifies whatever biases teams already have—speeding both strong and weak decisions—so metacognitive awareness becomes essential. He connects this to knowledge management by describing generative AI as an expanded short-term memory that can surface relevant stories, research, contract details, and examples faster than people can recall unaided. He also links AI to process management, citing cases where workflows dropped from months to minutes and from days to hours after process-aware tools were built with minimal technical skill. The influence challenge shifts from top-down authority to enabling individuals to use AI as “augmented I,” with clear expectations and daily experimentation.
Why does AI risk accelerating mediocre decisions, and what is the “Einstellung effect” example meant to show?
How can organizations use AI without losing the benefits of human collaboration and critique?
What does “augmented I” mean, and why does Utley think redefining AI this way matters?
How does generative AI improve knowledge management in concrete terms?
What evidence is offered that AI can transform process management, and what’s the common thread?
What should leaders do to reduce fear and uncertainty during AI adoption?
Review Questions
- How does recognizing cognitive biases change the way teams should prompt and evaluate AI-generated ideas?
- What practical steps does Utley recommend for individuals to develop AI use cases, and why does he emphasize experimentation over a fixed playbook?
- In Utley’s framework, how do knowledge management and process management reinforce each other when AI is used as “augmented I”?
Key Points
1. AI amplifies existing cognitive biases, so teams must pair AI adoption with metacognitive awareness to avoid faster mediocre decisions.
2. Generating many ideas with AI works best when humans actively resist early fixation and quick, easy answers.
3. Communities of practice (like knowledge cafe sessions) can use AI for expansion, critique, and role-based questioning to improve idea quality.
4. Generative AI can function as an expanded short-term memory, helping teams retrieve relevant stories, research, and contract examples more reliably than unaided recall.
5. Process management improves when people closest to the workflow use AI to draft and structure documentation, often without advanced technical skills.
6. Leader silence increases uncertainty; clear expectations and training reduce fear and accelerate effective adoption.
7. AI should be framed as “augmented I,” where humans steer and remain accountable for the work product.