System development: system testing and deployment (cont'd)

Knowledge Management · 5 min read

Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Treat testing as a dual proof: knowledge validity (complete, correct, accurate, non-duplicative, not outdated) and system functionality (end users can actually use it).

Briefing

User acceptance testing and evaluation criteria come first, but the real through-line is how a knowledge management system earns trust before it ever goes live—and how it stays useful after deployment. Testing is treated as a gatekeeping phase: the system must prove both that the knowledge content is valid (complete, correct, accurate, non-duplicative, not outdated) and that the system is functional for day-to-day work. That dual focus matters because a knowledge base can look complete on paper while still failing in practice if end users can’t retrieve it, understand it, or apply it effectively.

During the testing phase, teams must answer practical questions about what to evaluate and how. Evaluation can happen in the lab or in the field, and it must cover both knowledge content (completeness, correctness, accuracy, validity) and functionality (whether end users can find and use the knowledge effectively). Stakeholders are not limited to end users: functional heads, suppliers and vendors, knowledge developers, and domain experts all play roles. Experts validate the codified form of tacit knowledge—checking whether what lives in people’s heads has been captured accurately in explicit form—while users confirm whether the knowledge matches what they actually need.

Testing also requires criteria set in advance and feedback captured in recorded form from multiple groups: knowledge developers, experts, end users, and system maintainers. Training and readiness are part of the evaluation too; the system’s value depends on whether people can use it correctly. To strengthen confidence, the process uses triangulation—bringing together experts, developers, and users to confirm that knowledge is both verified and operationally usable.
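To make the triangulation step concrete, here is a minimal Python sketch of how recorded feedback from the required groups could gate approval of a knowledge item. The `Feedback` structure, the `triangulated` check, and the group names are illustrative assumptions, not details from the transcript.

```python
from dataclasses import dataclass

# Groups whose sign-off the triangulation step requires (assumed names).
REQUIRED_GROUPS = {"expert", "developer", "end_user"}

@dataclass
class Feedback:
    """One recorded feedback entry against a criterion set in advance."""
    group: str      # e.g. "expert", "developer", "end_user", "maintainer"
    criterion: str  # e.g. "accuracy", "completeness", "usability"
    passed: bool

def triangulated(feedback: list[Feedback]) -> bool:
    """Approve a knowledge item only when every required group has
    recorded feedback and none of that feedback failed."""
    groups_heard = {f.group for f in feedback}
    if not REQUIRED_GROUPS <= groups_heard:
        return False  # a required group has not weighed in yet
    return all(f.passed for f in feedback if f.group in REQUIRED_GROUPS)

# Example: expert validated accuracy, developer retrieval, user usability.
item_feedback = [
    Feedback("expert", "accuracy", True),
    Feedback("developer", "retrieval", True),
    Feedback("end_user", "usability", True),
]
print(triangulated(item_feedback))  # -> True
```

The design point is that a missing group blocks approval just as a failed criterion does, mirroring the idea that knowledge must be both verified and operationally usable.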

The transcript emphasizes error management using “type 1 and type 2 errors” logic: avoid accepting incorrect knowledge as valid (a false positive) and avoid rejecting knowledge that should have been included (a false negative). Minimizing both kinds of mistakes is framed as essential so the knowledge base supports inclusion/exclusion decisions that won’t later undermine quality.

After testing, deployment shifts to selecting the right knowledge-based problems—identifying who the users are, what problems the knowledge base will solve, and how the system’s repository, retrieval, and use will fit into real workflows. Ease of understanding, integration across information sources, and maintenance (security, privacy, uptime, and technical reliability) are treated as deployment prerequisites.

Organizational factors determine whether the system survives contact with employees. Leadership support is repeatedly highlighted, including dedicated roles such as a chief knowledge officer reporting to the CEO, plus funding, training quality, and ongoing upgrades. Resistance must be reduced through culture-building, “knowledge champions” who advocate and communicate benefits, and reward systems that encourage sharing rather than hoarding. The transcript also warns about political friction, union concerns, and behavioral blockers like knowledge hoarders and “troublemakers.”

Finally, post-implementation review closes the loop: teams assess whether people actually use the system, whether decision quality improves, whether costs of knowledge processing stay justified, and whether accuracy and timeliness remain strong. Internal and external factors—people, organizational climate, and technology infrastructure—are presented as the framework that shapes overall knowledge-based system quality. A concrete payoff example is given: Schlumberger reportedly saved up to $1 billion annually through knowledge management, underscoring the cost-benefit logic behind deployment decisions.

Cornell Notes

Testing is the credibility checkpoint for a knowledge management system: knowledge must be valid (complete, correct, accurate, non-duplicative, not outdated) and the system must be functional for end users. Evaluation happens in lab and field settings, with clear criteria set in advance and feedback recorded from knowledge developers, experts, end users, and maintainers. The process uses triangulation and training readiness checks, while minimizing type 1 and type 2 errors to avoid false acceptance of bad knowledge or false rejection of useful knowledge. Deployment then focuses on matching the knowledge base to real user problems, ensuring ease of understanding, integration, and ongoing maintenance. After launch, post-implementation review measures usage, decision quality, cost versus benefit, and whether accuracy and timeliness keep improving.

What two broad criteria determine whether a knowledge management system is ready to be put into operational use?

Readiness hinges on (1) knowledge validity—completeness, correctness, accuracy, and whether the content is valid rather than duplicated, outdated, or redundant—and (2) functionality—whether end users can find and use the knowledge effectively for their work.

Why does the transcript treat stakeholder selection as part of testing, not just a project-management detail?

Testing quality depends on who validates what. End users confirm usefulness in real tasks. Domain experts validate that tacit knowledge has been codified correctly into explicit form. Knowledge developers and other stakeholders (functional heads, suppliers/vendors, system maintainers) contribute feedback on structure, retrieval, and operational fit, ensuring the knowledge base is both verified and usable.

What questions must be answered during the testing phase?

Teams must decide what to evaluate (functionality vs. completeness/correctness/accuracy of data), where to evaluate it (lab vs. field), and who performs logical and user acceptance testing. They also need advance criteria for evaluation and a plan for collecting recorded feedback from stakeholders.

How do type 1 and type 2 errors map onto knowledge base decisions?

Type 1 error corresponds to accepting a hypothesis (treating knowledge as valid) even though it is false—meaning incorrect knowledge gets included. Type 2 error corresponds to rejecting a hypothesis (excluding knowledge) even though it is true—meaning useful knowledge is left out. The testing phase aims to minimize both so inclusion/exclusion decisions don’t later harm decision quality.
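As a concrete illustration of that mapping, the following sketch tallies both error types by comparing each inclusion decision against ground-truth validity (e.g., as established by domain experts). The function name and sample decisions are hypothetical.

```python
def error_counts(decisions):
    """decisions: iterable of (included, actually_valid) boolean pairs."""
    type_1 = sum(1 for inc, valid in decisions if inc and not valid)
    type_2 = sum(1 for inc, valid in decisions if not inc and valid)
    return type_1, type_2  # (false positives, false negatives)

decisions = [
    (True, True),    # correctly included
    (True, False),   # type 1: incorrect knowledge accepted as valid
    (False, True),   # type 2: useful knowledge wrongly excluded
    (False, False),  # correctly excluded
]
print(error_counts(decisions))  # -> (1, 1)
```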

What deployment prerequisites are emphasized beyond simply “going live”?

Deployment requires identifying the knowledge-based problems the system will solve, selecting users and use cases, ensuring ease of understanding (where knowledge is stored, how it’s retrieved, how it’s applied), supporting knowledge transfer skills, integrating knowledge from multiple sources, and maintaining the system (security, privacy/confidentiality, virus-free operation, and technical reliability).
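One way to picture these prerequisites is as a simple go/no-go gate evaluated before launch. This checklist is a hedged sketch: the prerequisite names mirror the list above, but the structure and flag values are assumptions.

```python
# Go/no-go gate over the deployment prerequisites named above.
PREREQUISITES = {
    "problems_and_users_identified": True,  # knowledge-based problems selected
    "ease_of_understanding": True,          # storage, retrieval, application clear
    "knowledge_transfer_skills": True,      # users trained to apply the knowledge
    "integration_across_sources": True,     # knowledge merged from multiple sources
    "security_and_privacy": True,           # confidentiality, virus-free operation
    "technical_reliability": False,         # uptime and maintenance plan in place
}

blockers = [name for name, met in PREREQUISITES.items() if not met]
if blockers:
    print("Not ready to deploy; unmet prerequisites:", blockers)
else:
    print("All deployment prerequisites met.")
```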

What does post-implementation review measure to judge whether the system is actually delivering value?

It checks whether people use the system, whether decision quality improves, and whether outcomes improve through better processes (e.g., recruitment decisions using knowledge maps). It also revisits accuracy and timeliness of decisions, monitors the cost of knowledge processing versus benefits, and keeps users informed so attitudes remain favorable during ongoing upgrades.
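A minimal scorecard along these lines might combine usage and cost-benefit checks into a single review result. The metric names, the 50% usage threshold, and the sample figures below are illustrative assumptions rather than values from the transcript.

```python
def post_implementation_review(active_users, total_users,
                               processing_cost, estimated_benefit):
    """Summarize whether the deployed system is delivering value."""
    usage_rate = active_users / total_users
    net_value = estimated_benefit - processing_cost  # cost vs. benefit
    return {
        "usage_rate": round(usage_rate, 2),
        "net_value": net_value,
        "justified": net_value > 0 and usage_rate >= 0.5,  # assumed threshold
    }

print(post_implementation_review(active_users=420, total_users=600,
                                 processing_cost=120_000,
                                 estimated_benefit=500_000))
# -> {'usage_rate': 0.7, 'net_value': 380000, 'justified': True}
```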

Review Questions

  1. Which stakeholder groups validate which aspects of knowledge quality, and how does that reduce the risk of both false inclusion and false exclusion?
  2. How should evaluation criteria differ when the focus is knowledge validity (completeness/correctness/accuracy) versus system functionality (end-user usefulness)?
  3. What organizational mechanisms—leadership support, champions, rewards, and culture—are described as necessary to reduce resistance after deployment?

Key Points

  1. Treat testing as a dual proof: knowledge validity (complete, correct, accurate, non-duplicative, not outdated) and system functionality (end users can actually use it).
  2. Set evaluation criteria in advance and collect recorded feedback from multiple stakeholder groups, including experts, developers, end users, and maintainers.
  3. Use lab and field evaluation to confirm both codified knowledge quality and real-world usability before operational rollout.
  4. Minimize type 1 and type 2 errors to avoid including incorrect knowledge or excluding knowledge that should be part of the system.
  5. During deployment, match the knowledge base to specific user problems and ensure ease of understanding, integration across sources, and ongoing maintenance (security, privacy, reliability).
  6. Reduce resistance through leadership support, training, knowledge champions, and reward systems that encourage sharing rather than hoarding.
  7. After launch, run post-implementation reviews that measure usage, decision quality improvements, cost-benefit balance, and ongoing accuracy/timeliness.

Highlights

Testing requires proving both content quality and operational usefulness: completeness/correctness/accuracy on one side, end-user functionality on the other.
Triangulation—experts, knowledge developers, and users validating together—helps confirm that tacit knowledge has been codified properly and can be applied.
Type 1/type 2 error thinking is used to frame knowledge inclusion/exclusion mistakes as false positives and false negatives.
Deployment success depends on problem selection and organizational readiness: ease of understanding, integration, maintenance, and leadership support.
Post-implementation review focuses on real outcomes—usage, decision quality, cost versus benefit, and whether accuracy and timeliness remain strong.

Topics

  • User Acceptance Testing
  • Knowledge Validity
  • Deployment Readiness
  • Knowledge Champions
  • Post-Implementation Review

Mentioned

  • Schlumberger