
Capturing tacit knowledge (contd.)

Knowledge Management · 6 min read

Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Prevent response bias by avoiding question formats that encourage uniform or “middle” ratings across items.

Briefing

Capturing tacit knowledge from experts depends less on asking more questions and more on designing an interview and decision process that produces consistent, usable information. When interviews are used, the knowledge developer has to prevent response biases (such as experts rating everything uniformly high or low, or choosing the “middle” to avoid extremes), maintain consistency across related questions, ensure the expert understands the questions, and keep the interaction non-hostile. Hostility or frustration can make experts defensive and less willing to share, so patience and a positive tone are treated as practical tools for eliciting deeper knowledge. Question design matters too: questions should be standard to the domain, not open-ended “uncharted” prompts that invite non-answers, and they should be short enough to be answerable. Lengthy questions and long interviews (often kept to roughly 30–40 minutes) risk losing the expert’s momentum and interest.

Even with well-run interviews, tacit knowledge often resists clean verbalization. Experts may rely on “shades of grey” reasoning—where outcomes aren’t black-and-white—leading to vague or conditional answers (“it may happen or it may not”). In those cases, the knowledge developer needs to handle uncertainty rather than forcing false specificity. Quality also drops when the problem is too general; the developer must identify the problem domain and ask targeted questions that match what the expert can realistically provide. Compatibility and rapport are repeatedly emphasized: experts who don’t like the knowledge developer or don’t trust the process may withhold information, making interpersonal relationship-building a prerequisite for knowledge capture.

Because interviews can’t capture everything, the transcript shifts to complementary methods for extracting knowledge from practice and group reasoning. On-site observation replaces talk with watching: the knowledge developer goes into the field, records what the expert does, and captures workflow details that the expert may struggle to explain. Brainstorming is presented as an idea-generation engine, with a clear two-stage rhythm—generate many ideas first (with emphasis on frequency, not evaluation), then evaluate and build consensus. For larger groups, computer-aided platforms can connect multiple experts electronically to expand idea volume.

Protocol analysis turns expert discussion into scenario-based reasoning. Multiple experts debate a concrete case (the transcript uses the demonetisation of 500 and 1000 rupee notes as an example), weighing advantages and disadvantages across impacts on people, the economy, technology, politics, and the environment. The method then projects alternative futures—such as what happens if government leadership changes—by mapping a sequence of events and outcomes into a structured “protocol” or flow of likely developments.

Other group methods aim at convergence. Consensus decision-making follows brainstorming: experts test and evaluate options, then standardize decisions into a shared plan (the toothpaste example illustrates agreeing on content, strategy, and features). Delphi uses multistage rounds where the group’s interim results are redistributed until consensus emerges. Concept mapping represents knowledge as a network of nodes and links, requiring careful definition of both concrete and abstract concepts through attributes and example-based cues.
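The Delphi loop described above can be sketched as a simple iteration: collect estimates, redistribute the group's interim result, and stop once the answers converge. This is a minimal illustration, not a described implementation; the expert model, thresholds, and names are all assumptions.

```python
import statistics

def delphi_rounds(experts, question, max_rounds=5, tolerance=0.5):
    """Run multistage Delphi rounds: collect each expert's estimate,
    share the group's interim result back, and repeat until the spread
    of estimates falls within tolerance (consensus)."""
    feedback = None
    for round_no in range(1, max_rounds + 1):
        estimates = [expert(question, feedback) for expert in experts]
        median = statistics.median(estimates)
        spread = max(estimates) - min(estimates)
        if spread <= tolerance:
            return round_no, median       # consensus reached
        feedback = median                 # redistribute interim result
    return max_rounds, median             # stop without full consensus

def make_expert(initial):
    """Toy expert: each round, moves halfway toward the group median."""
    state = {"estimate": initial}
    def expert(question, feedback):
        if feedback is not None:
            state["estimate"] = (state["estimate"] + feedback) / 2
        return state["estimate"]
    return expert
```

With three toy experts starting at 2, 4, and 10, the loop narrows the spread each round until the group settles near the median.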

Finally, blackboarding (a shared memory space, manual or electronic) and structured participation techniques manage multiple experts simultaneously. In blackboarding, experts contribute knowledge to a common repository under a control mechanism led by the knowledge developer, who must prevent space conflicts and ensure everyone participates. The transcript frames these methods as iterative and visual—allowing experts to learn from each other’s approaches, refine inputs, and converge on solutions when tacit knowledge can’t be captured through interviews alone.

Cornell Notes

Tacit knowledge capture works best when interviews are engineered to reduce bias, preserve consistency, and keep communication clear and non-hostile. Experts may still provide “grey” or conditional answers, so the knowledge developer must ask targeted, domain-relevant questions and build rapport to encourage disclosure. When interviews fall short, observation captures workflow details that experts can’t easily verbalize. For group knowledge, brainstorming, protocol analysis, consensus methods, Delphi, concept mapping, and blackboarding provide structured ways to generate, evaluate, and converge on solutions using multiple expert perspectives. Across methods, iterative cycles and careful facilitation (especially control mechanisms in blackboarding) are key to ensuring every expert meaningfully contributes.

What specific interview problems can distort expert knowledge, and how can a knowledge developer prevent them?

The transcript highlights response biases (e.g., experts giving uniformly high/low ratings or choosing the middle option), which can be avoided by designing questions that don’t encourage one-size-fits-all answers. Consistency is another issue: an expert who says “yes” for one question should not contradict that stance on closely related questions, so related prompts must be checked for reliability. Communication difficulties also matter—experts must understand what’s being asked—so wording and clarity are treated as essential. Finally, a hostile attitude can shut experts down; frustration or irritation makes experts defensive and less likely to share, so patience and positivity are required.
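As a rough illustration, the bias and consistency checks above could be run mechanically over an expert's recorded answers. The thresholds, flag names, and question IDs below are hypothetical assumptions, not from the transcript:

```python
import statistics

def flag_response_bias(ratings, low=1, high=5):
    """Flag common response biases in one expert's ratings across
    related items. Thresholds here are illustrative assumptions."""
    flags = []
    if len(set(ratings)) == 1:
        flags.append("uniform")            # same answer everywhere
    midpoint = (low + high) / 2
    if all(r == midpoint for r in ratings):
        flags.append("midpoint-only")      # always picks the middle
    if statistics.pstdev(ratings) < 0.5 and "uniform" not in flags:
        flags.append("low-variation")      # barely differentiates items
    return flags

def check_consistency(answers, related_pairs):
    """Return pairs of closely related yes/no questions whose
    answers contradict each other."""
    return [(a, b) for a, b in related_pairs if answers[a] != answers[b]]
```

A run of identical or near-identical ratings gets flagged for follow-up, and contradictory answers on related questions surface as pairs to re-ask.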

Why do “grey areas” make tacit knowledge hard to capture through interviews alone?

When experts reason with shades of grey, they can’t provide concrete, explicit answers. The transcript’s example describes situations where solutions are conditional (“it may happen or it may not happen”), leading to responses like “this is only what I can say” and a refusal to go further. In such cases, the knowledge developer can’t force binary answers; instead, they must work with uncertainty and ask questions that clarify assumptions, boundaries, and likely outcomes rather than demanding absolutes.

How does protocol analysis use scenarios to extract knowledge that might change over time?

Protocol analysis uses multiple experts to debate a case and then build future scenarios. The transcript’s demonetisation example has experts argue advantages and disadvantages, then extend the discussion into impacts on people, processes, technology, politics, economy, and the environment. It also considers that if the government changes the next day, the scenario shifts—so the method maps likely outcomes under different conditions. The result is a structured “protocol” or flow of events and outcomes, often represented as a flowchart.
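The resulting "protocol" of events and outcomes can be represented as a small branching structure that is walked under different conditions. The events and conditions below are illustrative placeholders loosely based on the transcript's example, not its actual content:

```python
# A scenario protocol as a branching flow: each event maps a
# condition to the next likely development (illustrative data).
protocol = {
    "demonetisation announced": {
        "government unchanged": "cash shortage, then digital adoption",
        "government changes": "policy rollback scenario",
    },
    "policy rollback scenario": {
        "notes reissued": "return to cash economy",
    },
}

def walk_protocol(protocol, start, conditions):
    """Follow the flow of events under a given sequence of conditions,
    returning the path of developments traversed."""
    path, event = [start], start
    for condition in conditions:
        branches = protocol.get(event, {})
        if condition not in branches:
            break                          # no mapped outcome; stop here
        event = branches[condition]
        path.append(event)
    return path
```

Walking the same protocol under "government unchanged" versus "government changes" yields different event paths, which is exactly the alternative-futures projection the method aims at.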

What is the difference between brainstorming and consensus decision-making in the transcript’s framework?

Brainstorming is organized around idea generation first: participants produce as many ideas as possible, with emphasis on frequency rather than evaluation. Only after ideas are generated does evaluation begin, followed by sorting and consensus-building. Consensus decision-making then focuses on standardizing the chosen approach—testing and evaluating options (content, features, marketing strategy in the toothpaste example) and arriving at a shared, standardized plan, even if it takes longer.

How does concept mapping represent both concrete and abstract knowledge?

Concept mapping represents knowledge as a network of nodes and links. Concrete concepts (like “chair”) can be defined through visible attributes such as top, back, arms, legs, and material. Abstract concepts (like “honesty”) lack visual images, so the transcript says to define them using attributes inferred from examples—e.g., transparency, objectivity, and accountability. The quality of the map depends on identifying the right nodes and linking attributes so relationships can be understood and used for problem-solving.
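A minimal sketch of such a node-and-link structure, using the transcript's chair and honesty examples; the link labels and extra target nodes are assumptions added for illustration:

```python
# A concept map as nodes (with attributes) and labeled links.
concept_map = {
    "nodes": {
        # concrete concept: attributes are visible features
        "chair": {"attributes": ["top", "back", "arms", "legs", "material"]},
        # abstract concept: attributes inferred from examples, not images
        "honesty": {"attributes": ["transparency", "objectivity", "accountability"]},
        "furniture": {"attributes": []},
        "virtue": {"attributes": []},
    },
    "links": [
        ("chair", "is-a", "furniture"),
        ("honesty", "is-a", "virtue"),
    ],
}

def related(cmap, concept):
    """List (relation, target) pairs linked from a concept node."""
    return [(rel, dst) for src, rel, dst in cmap["links"] if src == concept]
```

The same query works for concrete and abstract nodes; what differs is only how their attributes were obtained (observation versus example-based inference).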

What role does the knowledge developer play in blackboarding, and why is a control mechanism necessary?

Blackboarding uses a shared global memory structure—manual or electronic—where experts contribute knowledge to solve a problem. The knowledge developer acts as moderator and must control the flow and organization of information so every expert can participate. A control mechanism is necessary because limited space on the blackboard can cause conflicts: one expert might occupy too much space, leaving others unable to contribute. Proper control ensures balanced participation, efficient storage and visualization, and iterative refinement as experts learn from each other’s contributions.
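The control mechanism can be sketched as a cap on each expert's share of the shared space, so one contributor cannot crowd out the others. The class, capacity numbers, and method names below are illustrative assumptions, not a described implementation:

```python
class Blackboard:
    """A shared memory space with a simple control mechanism:
    total capacity is limited, and each expert gets at most a
    fixed share of it (balanced participation)."""
    def __init__(self, capacity=12, per_expert_limit=4):
        self.capacity = capacity
        self.per_expert_limit = per_expert_limit
        self.entries = []                  # (expert, contribution) pairs

    def contribute(self, expert, contribution):
        """Accept a contribution unless the board is full or the
        expert has already used their allotted share."""
        if len(self.entries) >= self.capacity:
            return False                   # board full: space conflict
        used = sum(1 for e, _ in self.entries if e == expert)
        if used >= self.per_expert_limit:
            return False                   # control: cap this expert
        self.entries.append((expert, contribution))
        return True

    def view(self):
        """Everyone sees the shared state and can refine their input."""
        return list(self.entries)
```

The moderator role maps onto the `contribute` checks: rejected contributions signal a space conflict or an expert exceeding their share, prompting the developer to rebalance the discussion.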

Review Questions

  1. Which interview design choices in the transcript are meant to reduce response bias and improve reliability, and how would you apply them to a new expert interview?
  2. When an expert gives conditional answers (“may happen or may not”), what follow-up strategy does the transcript imply for extracting usable tacit knowledge?
  3. Compare concept mapping and blackboarding: what does each method require to define knowledge relationships, and how do they handle abstract concepts or multiple experts?

Key Points

  1. Prevent response bias by avoiding question formats that encourage uniform or “middle” ratings across items.
  2. Maintain consistency by checking whether an expert’s answers align across closely related questions.
  3. Keep interviews short (about 30–40 minutes) and questions concise by splitting long prompts into smaller parts.
  4. Build rapport and avoid hostile or frustrated behavior; experts become defensive when the interaction feels adversarial.
  5. When experts use “grey” reasoning, treat conditional answers as meaningful and ask targeted questions that clarify assumptions and boundaries.
  6. Use on-site observation to capture workflow knowledge that experts struggle to verbalize.
  7. In blackboarding, rely on a control mechanism to manage shared space and ensure every expert contributes.

Highlights

  • A non-hostile, patient interview style is treated as a knowledge-capture tactic: irritation can make experts defensive and reduce disclosure.
  • Tacit knowledge often appears as “shades of grey,” producing conditional answers that require careful handling rather than forcing binary responses.
  • Protocol analysis converts expert debate into scenario-based event flows, including how outcomes shift if leadership or conditions change.
  • Concept mapping demands explicit node-and-link definitions; abstract concepts require example-based attributes to make relationships usable.
  • Blackboarding works only with facilitation: the knowledge developer must control information flow so limited shared space doesn’t silence some experts.

Topics

  • Expert Interviews
  • Response Bias
  • Protocol Analysis
  • Concept Mapping
  • Blackboarding