
Unlocking the Synergy Between Knowledge Management and AI

APQC · 6 min read

Based on APQC's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI investment is rising quickly, but reliable outcomes depend on governed, curated knowledge rather than raw generative capability.

Briefing

The central takeaway is that generative AI delivers reliable, scalable business value only when it’s built on a disciplined knowledge management foundation—governed content, curated sources, and controlled user workflows. NARVARS’ approach pairs its knowledge platform with a generative AI layer designed to answer specific business questions without hallucinating, while change management ensures employees actually adopt the system.

APQC research set the urgency. Organizations report heavy and rising AI investment: about 43% say they’re at least moderately invested (with another 35% at least slightly invested), and 86% expect to increase AI spending over the next three years. At the same time, AI deployment benefits cluster around operational efficiency and productivity, improved search, and faster access to expertise—while adoption still lags for roughly a quarter of respondents, often due to regulation and data privacy concerns.

APQC also highlighted a maturity gap that matters for AI outcomes. In APQC’s self-reported maturity model (levels 1–5), more mature knowledge management programs correlate with more advanced technology deployment: 51% of respondents at higher KM maturity report piloting, implementing, operating, or even optimizing solutions, versus 38% among less mature organizations. Common drivers and barriers—documented processes, competing organizational change, and structured content—frame why AI projects stall when knowledge practices aren’t in place.

NARVARS’ story explains what “KM foundation” means in practice. The company built a knowledge management solution called Sherlock, then added a generative AI capability for a targeted use case: synthesizing market research and insights to support product launch decisions. The team started with a concrete business problem—reducing duplicated effort and spend on market research—rather than chasing broad “AI for everything” ambitions.

A key design principle is governance and trust. Instead of letting users feed arbitrary prompts or unvetted content, NARVARS “rails” the interaction to a specific business workflow and uses retrieval-augmented generation (RAG) over trusted knowledge. Answers go through a self-validation loop: the system checks whether it can substantiate responses from the underlying content; if it can’t, it returns “I don’t have an answer.” When it does answer, it embeds clickable references to the source documents and even the page locations.
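The self-validation loop described above can be sketched in a few lines. This is a toy illustration only: the retrieval and validation heuristics, and every name in it (`Passage`, `retrieve`, `substantiated`), are assumptions for the sketch, not NARVARS’ actual implementation.

```python
# Sketch of a RAG answer flow with a self-validation step and source references.
# All names and heuristics are illustrative, not NARVARS' actual API.

from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str   # source document
    page: int     # page location for the clickable reference
    text: str

def retrieve(question: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.text.lower().split())),
                    reverse=True)
    return scored[:k]

def substantiated(answer: str, passages: list[Passage]) -> bool:
    """Toy validation: require the answer's substantive terms to appear
    in the retrieved content before the answer is released."""
    evidence = " ".join(p.text.lower() for p in passages)
    terms = [w for w in answer.lower().split() if len(w) > 3]
    return bool(terms) and all(t in evidence for t in terms)

def answer_with_validation(question, corpus, generate):
    passages = retrieve(question, corpus)
    draft = generate(question, passages)       # any LLM call stands in here
    if not substantiated(draft, passages):
        return "I don't have an answer.", []   # refuse rather than guess
    refs = [(p.doc_id, p.page) for p in passages]  # document + page references
    return draft, refs
```

The point of the sketch is the control flow, not the heuristics: generation is railed to retrieved content, an unvalidated draft is replaced by an explicit refusal, and every released answer carries document-and-page references.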

To further reduce risk, the solution includes “watchouts,” a coaching layer that flags reliability and contextual considerations so less experienced users know when to adjust how they ask questions. This approach directly targets the failure modes of generic chatbots—especially in regulated environments—where incorrect or unsupported answers can create compliance and safety problems.

NARVARS also quantified impact. Marketing teams can get answers in minutes instead of hours or days, and insights work that previously took six to nine months can shrink to weeks. In one real example, two parts of the organization asked whether patients prefer blister or bottle packaging for a specific drug. One team spent $50,000–$100,000 and three months on primary research; another answered the same question using Sherlock plus deep sites in about three weeks, leveraging knowledge spanning roughly 6,700 patients—enabling leaders to make manufacturing decisions with greater confidence.

Finally, adoption is treated as a core deliverable, not an afterthought. Through awareness, capability building, and a champions network, NARVARS drives usage of Sherlock and deep sites. Change management emphasizes leadership sponsorship, two-way feedback, pilots, and “test and learn” campaigns that use behavioral science principles (like authority and availability bias) to manage resistance and set realistic expectations about what the AI will and won’t do. The result is a human-AI symbiosis model: AI accelerates information processing and synthesis, while people validate, supplement, and apply insights to decisions.

Cornell Notes

Generative AI can’t be safely scaled without a knowledge management foundation that makes content trusted, governed, and easy to retrieve. NARVARS built Sherlock to centralize and curate knowledge, then layered a generative AI capability (deep sites) that answers specific business questions using retrieval from that trusted content. Answers include a self-validation step to reduce hallucinations and provide clickable references to the exact source documents and pages. “Watchouts” coach users on reliability and context, helping less experienced employees use the tool effectively. Adoption is treated as part of the system: leadership sponsorship, pilots, champions, and test-and-learn messaging drive daily usage and manage expectations.

Why do AI initiatives fail when knowledge management isn’t in place?

The discussion points to scalability, sustainability, and compliance. Scalability breaks when prototypes work on a small set of content but can’t expand to thousands of documents. Sustainability breaks when content changes and outdated material remains in the system without governance. Compliance breaks when legal, privacy, and ethics requirements aren’t satisfied—especially when AI is allowed to generate answers from unvetted or inaccessible sources. The fix is a KM foundation: consolidate content into a governed pipeline, curate it, and control access and workflows.

What makes NARVARS’ generative AI answers more trustworthy than a generic chatbot?

The deep sites layer is constrained to a specific use case and fed by Sherlock’s trusted knowledge. It uses a self-validation loop: after generating an answer via RAG, it checks whether the response can be substantiated by the underlying content. If it can’t validate, it returns no answer rather than guessing. When it does answer, it embeds clickable references to the source document and even the specific page the information came from.

How do “watchouts” change the user experience?

Watchouts act like a coach. When a user generates an answer, the system also assesses reliability and contextual considerations, then prompts the user to adjust—such as rephrasing a question or recognizing that some information may be less certain. This matters because users vary in sophistication; the tool helps less experienced employees avoid over-trusting weak or context-mismatched outputs.
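A watchouts layer of this kind can be pictured as a set of checks that run alongside each answer. The heuristics below are invented for illustration (the talk does not specify NARVARS’ actual rules), but they show the shape of the idea: flag conditions, then coach the user instead of silently returning the answer.

```python
# Illustrative "watchouts" coach: flag reliability and context considerations
# before a user acts on an answer. Thresholds and rules are assumptions.

def watchouts(question: str, n_sources: int, newest_source_year: int,
              current_year: int = 2024) -> list[str]:
    flags = []
    if n_sources < 2:
        flags.append("Answer rests on a single source; consider broadening the question.")
    if current_year - newest_source_year > 2:
        flags.append("Supporting content is more than two years old; verify it is still current.")
    if len(question.split()) < 4:
        flags.append("Question is very short; adding context (product, market, timeframe) may improve reliability.")
    return flags
```

The design choice worth noting is that the flags are advisory: the system still answers, but a less experienced user gets an explicit nudge about when to rephrase or treat the output as less certain.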

How did NARVARS choose where to apply AI first?

The team started with a single, quantifiable business problem: duplicated effort and spend on market research and insights. Rather than deploying AI broadly, they aligned KM and AI to a value proposition that could earn buy-in and survive organizational change. This focus also made it easier to measure outcomes like speed to answer and reduced research costs.

What real-world example showed the business impact?

For a specific drug, two parts of the organization asked whether patients prefer blister or bottle packaging. One team conducted primary research, spending $50,000–$100,000 and taking about three months to survey roughly 50 patients. Another team used Sherlock plus deep sites to answer the same question in about three weeks without new external spend, drawing on trusted knowledge covering about 6,700 patients—supporting manufacturing decisions with greater confidence.

What change-management tactics drove adoption?

Adoption relied on awareness (what Sherlock/deep sites are and when to use them), capability (how to use them effectively), and value (why it matters). Tactics included leadership sponsorship, two-way communication, a champions network across countries, pilots to refine how questions should be asked, and behavioral-science-based test-and-learn campaigns (e.g., authority and availability bias). The approach also proactively addressed resistance and clarified limitations to manage expectations.

Review Questions

  1. What three failure modes (scalability, sustainability, compliance) were described as common reasons AI initiatives fail without KM, and how does a governed knowledge pipeline address each?
  2. How does deep sites’ self-validation and reference embedding reduce hallucinations, and what role do watchouts play for different user skill levels?
  3. Why did NARVARS start with a narrow market-research use case instead of deploying AI broadly, and how did that choice affect measurement and adoption?

Key Points

  1. AI investment is rising quickly, but reliable outcomes depend on governed, curated knowledge rather than raw generative capability.
  2. More mature knowledge management programs correlate with more advanced and optimized technology deployment outcomes.
  3. NARVARS’ Sherlock platform provides the trusted knowledge pipeline that deep sites uses to answer questions with substantiation.
  4. deep sites reduces hallucinations through a self-validation loop and by returning clickable references to the exact source documents and pages.
  5. “Watchouts” coach users by flagging reliability and contextual considerations, improving results across different user skill levels.
  6. Adoption requires more than technology: leadership sponsorship, champions, pilots, and test-and-learn messaging drive daily usage and manage resistance.
  7. NARVARS measured value through speed-to-answer, reduced research spend, and faster insight cycles that support real business decisions.

Highlights

AI answers become dependable when they’re grounded in governed knowledge and constrained workflows—not when users are left to query unvetted content.
deep sites uses a self-validation loop and provides clickable citations down to the page level, turning “trust me” outputs into verifiable claims.
NARVARS cut a market-research cycle from months to weeks for a packaging preference question by leveraging trusted knowledge instead of repeating primary research.
Change management treated adoption as a system requirement: leadership authority, champions, pilots, and behavioral-science-based test-and-learn campaigns.

Mentioned

  • APQC
  • Sherlock
  • deep sites
  • Gartner
  • ChatGPT
  • RAG
  • Linda Broxi
  • Ian Joseph
  • Lorena Geronimo
  • Joe Pasari
  • KM
  • AI
  • LLM
  • MSL
  • Q&A