Capturing tacit knowledge (contd.)
Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Capturing tacit knowledge from experts depends less on asking more questions and more on designing an interview and decision process that produces consistent, usable information. When interviews are used, the knowledge developer has to prevent response biases (such as experts rating everything uniformly high or low, or choosing the “middle” to avoid extremes), maintain consistency across related questions, ensure the expert understands the questions, and keep the interaction non-hostile. Hostility or frustration can make experts defensive and less willing to share, so patience and a positive tone are treated as practical tools for eliciting deeper knowledge. Question design matters too: questions should be standard to the domain, not open-ended “uncharted” prompts that invite non-answers, and they should be short enough to be answerable. Lengthy questions and long interviews (often kept to roughly 30–40 minutes) risk losing the expert’s momentum and interest.
Even with well-run interviews, tacit knowledge often resists clean verbalization. Experts may rely on “shades of grey” reasoning—where outcomes aren’t black-and-white—leading to vague or conditional answers (“it may happen or it may not”). In those cases, the knowledge developer needs to handle uncertainty rather than forcing false specificity. Quality also drops when the problem is too general; the developer must identify the problem domain and ask targeted questions that match what the expert can realistically provide. Compatibility and rapport are repeatedly emphasized: experts who don’t like the knowledge developer or don’t trust the process may withhold information, making interpersonal relationship-building a prerequisite for knowledge capture.
Because interviews can’t capture everything, the transcript shifts to complementary methods for extracting knowledge from practice and group reasoning. On-site observation replaces talk with watching: the knowledge developer goes into the field, records what the expert does, and captures workflow details that the expert may struggle to explain. Brainstorming is presented as an idea-generation engine, with a clear two-stage rhythm—generate many ideas first (with emphasis on frequency, not evaluation), then evaluate and build consensus. For larger groups, computer-aided platforms can connect multiple experts electronically to expand idea volume.
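The two-stage brainstorming rhythm — generate freely first, evaluate afterwards — can be sketched in code. This is a minimal illustration, not a tool from the transcript; the experts, ideas, and vote counts are invented placeholders.

```python
# Two-stage brainstorming sketch: stage 1 pools every idea with no filtering
# (frequency over quality); stage 2 ranks the pool by expert votes to move
# toward consensus. All names and data below are hypothetical.

def generate_stage(contributions):
    """Stage 1: collect all ideas verbatim -- no evaluation, no deduplication."""
    pool = []
    for expert, ideas in contributions.items():
        pool.extend((expert, idea) for idea in ideas)
    return pool

def evaluate_stage(pool, votes):
    """Stage 2: rank pooled ideas by how much support each received."""
    return sorted(pool, key=lambda item: votes.get(item[1], 0), reverse=True)

contributions = {
    "expert_a": ["reuse checklists", "record walkthroughs"],
    "expert_b": ["record walkthroughs", "pair novices with experts"],
}
pool = generate_stage(contributions)
votes = {"record walkthroughs": 3, "reuse checklists": 1}
top_idea = evaluate_stage(pool, votes)[0][1]
print(top_idea)  # the most-supported idea
```

Keeping the two stages as separate functions mirrors the transcript's point: evaluation must not leak into the generation phase.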
Protocol analysis turns expert discussion into scenario-based reasoning. Multiple experts debate a concrete case (the transcript uses the demonetisation of 500 and 1000 rupee notes as an example), weighing advantages and disadvantages across impacts on people, the economy, technology, politics, and the environment. The method then projects alternative futures—such as what happens if government leadership changes—by mapping a sequence of events and outcomes into a structured “protocol” or flow of likely developments.
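The "protocol" described above — a mapped sequence of events and outcomes with alternative futures — is essentially a branching flow. A hedged sketch, in the spirit of the demonetisation example (the specific event names and outcomes are invented for illustration, not taken from the transcript):

```python
# A protocol as a small branching tree: each node is an event, and its
# outcomes map a condition (e.g. a change in leadership) to the next event.
# All events and branch labels here are illustrative assumptions.

protocol = {
    "event": "demonetise 500/1000 rupee notes",
    "outcomes": {
        "leadership unchanged": {"event": "digital payments expand", "outcomes": {}},
        "leadership changes": {"event": "policy partially rolled back", "outcomes": {}},
    },
}

def trace(node, choices):
    """Walk one alternative future through the protocol; return the event sequence."""
    path = [node["event"]]
    for choice in choices:
        node = node["outcomes"][choice]
        path.append(node["event"])
    return path

print(trace(protocol, ["leadership changes"]))
# ['demonetise 500/1000 rupee notes', 'policy partially rolled back']
```

Tracing different choice lists yields the alternative futures the experts debated, one path per scenario.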
Other group methods aim at convergence. Consensus decision-making follows brainstorming: experts test and evaluate options, then standardize decisions into a shared plan (the toothpaste example illustrates agreeing on content, strategy, and features). Delphi uses multistage rounds where the group’s interim results are redistributed until consensus emerges. Concept mapping represents knowledge as a network of nodes and links, requiring careful definition of both concrete and abstract concepts through attributes and example-based cues.
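Concept mapping's node-and-link representation translates directly into a small graph structure. A minimal sketch, assuming each concept is defined by attributes plus an example-based cue as the summary suggests; the class, concepts, and link labels are invented for illustration.

```python
# Toy concept map: nodes carry a definition (attributes + example cue),
# links are labelled directed edges between defined concepts.

class ConceptMap:
    def __init__(self):
        self.nodes = {}   # name -> {"attributes": [...], "example": str}
        self.links = []   # (source, label, target)

    def add_concept(self, name, attributes, example):
        self.nodes[name] = {"attributes": attributes, "example": example}

    def add_link(self, source, label, target):
        # Only link concepts that have already been defined.
        if source in self.nodes and target in self.nodes:
            self.links.append((source, label, target))

    def related(self, name):
        """Concepts directly linked from `name`, with the relationship label."""
        return [(label, t) for s, label, t in self.links if s == name]

cmap = ConceptMap()
cmap.add_concept("tacit knowledge", ["hard to verbalise"], "a chef's sense of timing")
cmap.add_concept("observation", ["field-based"], "shadowing an expert on site")
cmap.add_link("observation", "captures", "tacit knowledge")
print(cmap.related("observation"))  # [('captures', 'tacit knowledge')]
```

Requiring an attribute list and an example for every node is what lets the map handle abstract concepts: the example cue grounds what the attributes alone leave vague.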
Finally, blackboarding (a shared memory space, manual or electronic) and structured participation techniques manage multiple experts simultaneously. In blackboarding, experts contribute knowledge to a common repository under a control mechanism led by the knowledge developer, who must prevent space conflicts and ensure everyone participates. The transcript frames these methods as iterative and visual—allowing experts to learn from each other’s approaches, refine inputs, and converge on solutions when tacit knowledge can’t be captured through interviews alone.
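The blackboard's control mechanism — preventing space conflicts and tracking participation — can be sketched as a guarded shared store. This is a toy model under stated assumptions (the class, slot names, and contributions are hypothetical, not from the transcript).

```python
# Toy blackboard: experts post to named slots in a shared space through a
# control mechanism that rejects overwrites (space conflicts) and records
# which experts have not yet contributed.

class Blackboard:
    def __init__(self, experts):
        self.space = {}               # slot -> (expert, content)
        self.pending = set(experts)   # experts who have not contributed yet

    def post(self, expert, slot, content):
        """Control mechanism: refuse writes to an occupied slot."""
        if slot in self.space:
            return False              # conflict -- the knowledge developer resolves it
        self.space[slot] = (expert, content)
        self.pending.discard(expert)
        return True

    def everyone_participated(self):
        return not self.pending

bb = Blackboard(["expert_a", "expert_b"])
bb.post("expert_a", "symptoms", "intermittent failure under load")
conflict = bb.post("expert_b", "symptoms", "conflicting note")  # rejected
bb.post("expert_b", "diagnosis", "thermal throttling")
print(conflict, bb.everyone_participated())  # False True
```

The rejected write models the developer's role: conflicting contributions are surfaced and resolved rather than silently overwriting earlier expert input.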
Cornell Notes
Tacit knowledge capture works best when interviews are engineered to reduce bias, preserve consistency, and keep communication clear and non-hostile. Experts may still provide “grey” or conditional answers, so the knowledge developer must ask targeted, domain-relevant questions and build rapport to encourage disclosure. When interviews fall short, observation captures workflow details that experts can’t easily verbalize. For group knowledge, brainstorming, protocol analysis, consensus methods, Delphi, concept mapping, and blackboarding provide structured ways to generate, evaluate, and converge on solutions using multiple expert perspectives. Across methods, iterative cycles and careful facilitation (especially control mechanisms in blackboarding) are key to ensuring every expert meaningfully contributes.
What specific interview problems can distort expert knowledge, and how can a knowledge developer prevent them?
Why do “grey areas” make tacit knowledge hard to capture through interviews alone?
How does protocol analysis use scenarios to extract knowledge that might change over time?
What is the difference between brainstorming and consensus decision-making in the transcript’s framework?
How does concept mapping represent both concrete and abstract knowledge?
What role does the knowledge developer play in blackboarding, and why is a control mechanism necessary?
Review Questions
- Which interview design choices in the transcript are meant to reduce response bias and improve reliability, and how would you apply them to a new expert interview?
- When an expert gives conditional answers (“may happen or may not”), what follow-up strategy does the transcript imply for extracting usable tacit knowledge?
- Compare concept mapping and blackboarding: what does each method require to define knowledge relationships, and how do they handle abstract concepts or multiple experts?
Key Points
1. Prevent response bias by avoiding question formats that encourage uniform or “middle” ratings across items.
2. Maintain consistency by checking whether an expert’s answers align across closely related questions.
3. Keep interviews short (about 30–40 minutes) and questions concise by splitting long prompts into smaller parts.
4. Build rapport and avoid hostile or frustrated behavior; experts become defensive when the interaction feels adversarial.
5. When experts use “grey” reasoning, treat conditional answers as meaningful and ask targeted questions that clarify assumptions and boundaries.
6. Use on-site observation to capture workflow knowledge that experts struggle to verbalize.
7. In blackboarding, rely on a control mechanism to manage shared space and ensure every expert contributes.