Unlocking the Synergy Between Knowledge Management and AI
Based on APQC's video on YouTube. If you enjoy this content, support the original creators by watching, liking, and subscribing.
AI investment is rising quickly, but reliable outcomes depend on governed, curated knowledge rather than raw generative capability.
Briefing
The central takeaway is that generative AI delivers reliable, scalable business value only when it’s built on a disciplined knowledge management foundation—governed content, curated sources, and controlled user workflows. NARVARS’ approach pairs its knowledge platform with a generative AI layer designed to answer specific business questions without hallucinating, while change management ensures employees actually adopt the system.
APQC research set the urgency. Organizations report heavy and rising AI investment: about 43% say they’re at least moderately invested (with another 35% at least slightly invested), and 86% expect to increase AI spending over the next three years. At the same time, AI deployment benefits cluster around operational efficiency and productivity, improved search, and faster access to expertise, while adoption still lags for roughly a quarter of respondents, often because of regulatory and data-privacy concerns.
APQC also highlighted a maturity gap that matters for AI outcomes. In APQC’s self-reported maturity model (levels 1–5), more mature knowledge management programs correlate with more advanced technology deployment: 51% of respondents at higher KM maturity report piloting, implementing, operating, or even optimizing solutions, versus 38% among less mature organizations. Common drivers and barriers—documented processes, competing organizational change, and structured content—frame why AI projects stall when knowledge practices aren’t in place.
NARVARS’ story explains what “KM foundation” means in practice. The company built a knowledge management solution called Sherlock, then added a generative AI capability for a targeted use case: synthesizing market research and insights to support product launch decisions. The team started with a concrete business problem—reducing duplicated effort and spend on market research—rather than chasing broad “AI for everything” ambitions.
A key design principle is governance and trust. Instead of letting users feed arbitrary prompts or unvetted content, NARVARS “rails” the interaction to a specific business workflow and uses retrieval-augmented generation (RAG) over trusted knowledge. Answers go through a self-validation loop: the system checks whether it can substantiate responses from the underlying content; if it can’t, it returns “I don’t have an answer.” When it does answer, it embeds clickable references to the source documents and even the page locations.
To further reduce risk, the solution includes “watchouts,” a coaching layer that flags reliability and contextual considerations so less experienced users know when to adjust how they ask questions. This approach directly targets the failure modes of generic chatbots—especially in regulated environments—where incorrect or unsupported answers can create compliance and safety problems.
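The governed flow described above—retrieve only from trusted content, self-validate the draft answer, refuse when unsupported, and attach source references—can be sketched in a few lines. This is a minimal illustration under stated assumptions, not NARVARS’ actual implementation: the `Passage` type, the keyword-overlap retriever, and the term-overlap substantiation check are simplified stand-ins for a real RAG pipeline with an LLM-based validator.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    """One chunk of curated, governed content (hypothetical schema)."""
    doc_id: str
    page: int
    text: str

def retrieve(question: str, corpus: list[Passage]) -> list[Passage]:
    """Toy retriever: rank curated passages by keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in corpus]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]

def is_substantiated(draft: str, passages: list[Passage]) -> bool:
    """Self-validation stand-in: every substantive answer term must appear
    somewhere in the retrieved sources (a real system would use an LLM check)."""
    support = " ".join(p.text.lower() for p in passages)
    terms = [t for t in draft.lower().split() if len(t) > 3]
    return bool(terms) and all(t in support for t in terms)

def answer(question: str, corpus: list[Passage], generate) -> dict:
    """Railed workflow: retrieve -> generate -> validate -> refuse or cite."""
    passages = retrieve(question, corpus)
    if not passages:
        return {"answer": "I don't have an answer.", "references": []}
    draft = generate(question, passages)  # LLM call in a real system
    if not is_substantiated(draft, passages):
        # Refuse rather than hallucinate.
        return {"answer": "I don't have an answer.", "references": []}
    # Embed document and page references so users can click through.
    refs = [(p.doc_id, p.page) for p in passages]
    return {"answer": draft, "references": refs}
```

A "watchout" layer would sit on top of this, inspecting the question and retrieved sources (e.g., sparse coverage, dated studies) and attaching coaching notes alongside the answer rather than blocking it.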
NARVARS also quantified impact. Marketing teams can get answers in minutes instead of hours or days, and insights work that previously took six to nine months can shrink to weeks. In one real example, two parts of the organization asked whether patients prefer blister or bottle packaging for a specific drug. One team spent $50,000–$100,000 and three months on primary research; another answered the same question using Sherlock plus deep sites in about three weeks, leveraging knowledge spanning roughly 6,700 patients—enabling leaders to make manufacturing decisions with greater confidence.
Finally, adoption is treated as a core deliverable, not an afterthought. Through awareness, capability building, and a champions network, NARVARS drives usage of Sherlock and deep sites. Change management emphasizes leadership sponsorship, two-way feedback, pilots, and “test and learn” campaigns that use behavioral science principles (like authority and availability bias) to manage resistance and set realistic expectations about what the AI will and won’t do. The result is a human-AI symbiosis model: AI accelerates information processing and synthesis, while people validate, supplement, and apply insights to decisions.
Cornell Notes
Generative AI can’t be safely scaled without a knowledge management foundation that makes content trusted, governed, and easy to retrieve. NARVARS built Sherlock to centralize and curate knowledge, then layered a generative AI capability (deep sites) that answers specific business questions using retrieval from that trusted content. Answers include a self-validation step to reduce hallucinations and provide clickable references to the exact source documents and pages. “Watchouts” coach users on reliability and context, helping less experienced employees use the tool effectively. Adoption is treated as part of the system: leadership sponsorship, pilots, champions, and test-and-learn messaging drive daily usage and manage expectations.
Why do AI initiatives fail when knowledge management isn’t in place?
What makes NARVARS’ generative AI answers more trustworthy than a generic chatbot?
How do “watchouts” change the user experience?
How did NARVARS choose where to apply AI first?
What real-world example showed the business impact?
What change-management tactics drove adoption?
Review Questions
- What three failure modes (scalability, sustainability, compliance) were described as common reasons AI initiatives fail without KM, and how does a governed knowledge pipeline address each?
- How does deep sites’ self-validation and reference embedding reduce hallucinations, and what role do watchouts play for different user skill levels?
- Why did NARVARS start with a narrow market-research use case instead of deploying AI broadly, and how did that choice affect measurement and adoption?
Key Points
1. AI investment is rising quickly, but reliable outcomes depend on governed, curated knowledge rather than raw generative capability.
2. More mature knowledge management programs correlate with more advanced and optimized technology deployment outcomes.
3. NARVARS’ Sherlock platform provides the trusted knowledge pipeline that deep sites uses to answer questions with substantiation.
4. deep sites reduces hallucinations through a self-validation loop and by returning clickable references to the exact source documents and pages.
5. “Watchouts” coach users by flagging reliability and contextual considerations, improving results across different user skill levels.
6. Adoption requires more than technology: leadership sponsorship, champions, pilots, and test-and-learn messaging drive daily usage and manage resistance.
7. NARVARS measured value through speed-to-answer, reduced research spend, and faster insight cycles that support real business decisions.