System development: system testing and deployment
Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Testing and deployment become the make-or-break phase after a knowledge management system has already captured knowledge from databases and codified it into rules, decision trees, and other structured forms. At this stage, the system must prove it is useful in day-to-day operations—especially through user acceptance testing and training—because a poorly validated knowledge base can fail in storage, retrieval, or practical use, turning the entire effort into wasted cost.
Before any “go-live,” the system needs validation against concrete parameters. The knowledge base must be created and organized in a structured, classified format, codified properly, and actually integrated into the system. A usable user interface matters, but content relevance matters more: employees and other stakeholders must find the information applicable to their work. The system also has to align with business needs, closing the knowledge gap identified earlier and linking knowledge strategy to the organization’s business strategy.
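A minimal sketch of how such go/no-go parameters might be captured as an explicit checklist that a review team ticks off before deployment; all class and field names here are invented for illustration, not taken from the source:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationChecklist:
    """Hypothetical pre-deployment checklist for a KM system.

    Structural checks cover how knowledge is stored and wired in;
    substantive checks cover whether the content is actually useful.
    """
    structural: dict = field(default_factory=lambda: {
        "knowledge_base_created": False,      # knowledge captured and stored
        "classified_and_organized": False,    # structured, classified format
        "properly_codified": False,           # rules / decision trees encoded
        "integrated_into_system": False,      # wired into the live system
        "usable_interface": False,            # end users can reach the content
    })
    substantive: dict = field(default_factory=lambda: {
        "relevant_to_employees": False,       # applicable to daily work
        "relevant_to_stakeholders": False,    # vendors, suppliers, etc.
        "closes_identified_knowledge_gap": False,
        "aligned_with_business_strategy": False,
    })

    def ready_for_go_live(self) -> bool:
        """Go-live only when every structural and substantive box is ticked."""
        return all(self.structural.values()) and all(self.substantive.values())

checklist = ValidationChecklist()
checklist.structural["knowledge_base_created"] = True
print(checklist.ready_for_go_live())  # False until every check passes
```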
User acceptance testing checks system behavior in a realistic environment. The ERP example from IIT Kharagpur illustrates why: an integrated enterprise resource planning system was built to replace fragmented, non-integrated systems, consolidating databases for faculty, staff, students, and even vendors and suppliers to improve efficiency and reduce cost. Early problems, such as information lag, missing or incorrect inputs, or failures to deliver the right information, surfaced during acceptance testing. Feedback from each stakeholder group determined whether the system truly worked for them. If it did, rollout could expand unit by unit; if it didn’t, the system could not be considered “right.”
Testing criteria must account for the special difficulty of codifying tacit knowledge, which is subjective and prone to incomplete capture. If key process details are left out, the knowledge remains hard to apply and effectively stays subjective. Reliable specifications are also required; without them, testing becomes arbitrary. Consistency and correctness are central too: the system should not work one day and fail the next. Failures can stem from technical errors (faults in how knowledge is stored and retrieved) or input errors (bad or incorrect data), so both must be checked.
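One way to keep those two failure sources separable during testing is to validate every input against its specification before exercising storage and retrieval, so a failure can be attributed to one side or the other. The sketch below illustrates that idea under a made-up record schema and an in-memory store; none of the names come from the source:

```python
REQUIRED_FIELDS = {"id": str, "topic": str, "content": str}  # hypothetical spec

def classify_failure(record: dict, store: dict) -> str:
    """Attribute a round-trip failure to an input error or a technical error."""
    # Input check: does the record satisfy its specification?
    for field_name, field_type in REQUIRED_FIELDS.items():
        if field_name not in record or not isinstance(record[field_name], field_type):
            return f"input error: bad or missing field '{field_name}'"
    # Technical check: store the record, retrieve it, and compare.
    store[record["id"]] = dict(record)
    retrieved = store.get(record["id"])
    if retrieved != record:
        return "technical error: storage/retrieval mismatch"
    return "ok"

store: dict = {}
print(classify_failure({"id": "r1", "topic": "leave policy", "content": "..."}, store))  # ok
print(classify_failure({"id": "r2", "topic": 42, "content": "..."}, store))              # input error
```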
A pilot-first approach is repeatedly emphasized: deploy to one unit, validate, then scale. Skipping testing for short-term speed risks long-term operational failure and leaves it unclear whether developers, experts, or the technology are at fault. Interface design is treated as layered: end users need simple access, while intermediary roles (system analysts, developers, programmers) must be able to troubleshoot when issues arise, such as connectivity problems.
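The layering idea might be sketched as follows; this is purely illustrative, with invented classes and methods. End users get a simple ask-and-answer surface, while an intermediary layer adds the diagnostics needed to troubleshoot problems such as lost connectivity:

```python
from typing import Optional

class KnowledgeStore:
    """Hypothetical backing store with a simple health indicator."""
    def __init__(self):
        self.entries = {"leave policy": "Apply via the HR portal."}
        self.connected = True

    def fetch(self, topic: str) -> Optional[str]:
        if not self.connected:
            raise ConnectionError("knowledge store unreachable")
        return self.entries.get(topic)

class EndUserInterface:
    """Simple layer: end users just ask and get an answer or an apology."""
    def __init__(self, store: KnowledgeStore):
        self._store = store

    def ask(self, topic: str) -> str:
        try:
            return self._store.fetch(topic) or "No entry found for that topic."
        except ConnectionError:
            return "The system is temporarily unavailable. Please try again."

class AnalystInterface(EndUserInterface):
    """Intermediary layer: adds diagnostics for troubleshooting."""
    def diagnostics(self) -> dict:
        return {"connected": self._store.connected,
                "entry_count": len(self._store.entries)}

store = KnowledgeStore()
print(EndUserInterface(store).ask("leave policy"))
print(AnalystInterface(store).diagnostics())
```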
Because knowledge management is cyclical (capture, codify, test, deploy, then return to the cycle), verification must be continuous. The system should be complete, confidence-building (including respect for privacy boundaries), correct, consistent, and non-redundant. Regular updates prevent obsolete knowledge from accumulating (“garbage in, garbage out”). Logical testing also targets anomalies and errors: circular rules, redundancy, subsumption errors, inconsistent outputs, unusable knowledge, and other faults that break the logic of decision rules. Finally, acceptance testing requires a dedicated team, predefined evaluation criteria, test cases across departments, documented results, training review, and careful attention to error types, including type 1 errors (false positives) and type 2 errors (false negatives), before the system can be judged to meet the technical and operational requirements for deployment.
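As a rough illustration of what logical testing can automate, the sketch below checks a toy rule base for circular rules (a cycle in the fact-dependency graph), exact duplicates, and subsumption (a rule with fewer conditions reaching the same conclusion as a stricter one). The rule representation is an assumption for this sketch, not the video’s:

```python
from itertools import combinations

# Hypothetical rule base: each rule is (set of condition facts, concluded fact).
rules = [
    ({"a"}, "b"),
    ({"b"}, "a"),        # together with the first rule: circular
    ({"x", "y"}, "z"),
    ({"x"}, "z"),        # subsumes the rule above (fewer conditions, same conclusion)
    ({"x", "y"}, "z"),   # exact duplicate: redundant
]

def find_circular(rules):
    """Detect cycles in the fact-dependency graph via depth-first search."""
    graph = {}
    for conds, concl in rules:
        for c in conds:
            graph.setdefault(c, set()).add(concl)
    def has_cycle(node, path):
        if node in path:
            return True
        return any(has_cycle(n, path | {node}) for n in graph.get(node, ()))
    return [n for n in graph if has_cycle(n, set())]

def find_redundant_and_subsumed(rules):
    """Flag duplicate rules and rules subsumed by a more general one."""
    issues = []
    for (c1, k1), (c2, k2) in combinations(rules, 2):
        if k1 != k2:
            continue
        if c1 == c2:
            issues.append(f"redundant: duplicate rules concluding '{k1}'")
        elif c1 < c2:
            issues.append(f"subsumption: {sorted(c1)} -> {k1} subsumes {sorted(c2)} -> {k2}")
        elif c2 < c1:
            issues.append(f"subsumption: {sorted(c2)} -> {k2} subsumes {sorted(c1)} -> {k1}")
    return issues

print(find_circular(rules))                 # facts involved in a cycle, e.g. ['a', 'b']
print(find_redundant_and_subsumed(rules))
```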
Cornell Notes
After codification, a knowledge management system must be validated through testing and deployment—primarily via user acceptance testing and training—so it works reliably for real stakeholders. Validation checks both structure (organized knowledge base, proper codification, integration, user interface) and substance (content relevance to employees and other stakeholders). Acceptance testing runs in realistic conditions, often starting with a pilot unit, collecting feedback, and then scaling only if outcomes meet predefined criteria for completeness, correctness, consistency, confidence, and usability. Logical testing targets rule-level problems such as circular rules, redundancy, subsumption errors, and inconsistent outputs, while operational testing checks technical reliability and smooth storage/retrieval. Because knowledge management is cyclical, testing and updates must continue to prevent obsolete or redundant knowledge from degrading performance.
What concrete checks determine whether a knowledge management system is “built right” before acceptance testing begins?
Why does user acceptance testing require realistic conditions and stakeholder feedback?
How do technical errors and input errors differ, and why does that distinction matter during testing?
What logical testing problems are specifically targeted in knowledge-based systems?
Why is pilot testing and phased deployment emphasized?
What does “confidence” in user acceptance testing include beyond simply believing the system works?
Review Questions
- Which validation parameters distinguish structural readiness (organization, codification, integration) from content readiness (relevance to stakeholders) in a knowledge management system?
- How would you design a testing plan to separate technical errors from input errors when outputs become inconsistent?
- What logical error types—such as circular rules, subsumption errors, or inconsistent knowledge—would you look for when a decision tree produces contradictory results?
Key Points
1. Validate the knowledge base structurally (organized classification, proper codification, system integration) and substantively (content relevance to employees and stakeholders).
2. Run user acceptance testing in realistic conditions with direct feedback from all stakeholder groups, not just internal developers.
3. Start with pilot testing in one unit, then scale deployment only after the system meets predefined criteria for completeness, correctness, consistency, and usability (see the sketch after this list).
4. Treat consistency as a reliability requirement: the system should produce stable results over time, and failures must be traced to either technical errors or input errors.
5. Design user interfaces in layers so end users can access information easily while intermediary technical roles can troubleshoot issues like connectivity or access problems.
6. Continuously update the knowledge base to prevent redundancy and obsolescence, following the “garbage in, garbage out” principle.
7. Use logical testing to detect rule-level anomalies (circular rules, redundancy, subsumption errors, inconsistent outputs) and operational testing to confirm technical and operational requirements before acceptance.
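A minimal sketch of the acceptance gate from key point 3, assuming test cases with expected outputs are collected per department. Mapping type 1 and type 2 errors onto wrong answers and missing answers is one common convention, not something the source specifies, and all names below are invented:

```python
# Hypothetical acceptance test cases per department with expected outputs.
test_cases = [
    {"dept": "HR",      "query": "leave policy",  "expected": "portal", "actual": "portal"},
    {"dept": "Finance", "query": "expense limit", "expected": "500",    "actual": "700"},  # wrong answer
    {"dept": "Stores",  "query": "vendor list",   "expected": "list",   "actual": None},   # no answer
]

def evaluate(cases, max_type1=0, max_type2=0):
    """Gate scale-up on false positives (type 1) and false negatives (type 2).

    Here a type 1 error is counted when the system confidently returns a
    wrong answer, and a type 2 error when it returns nothing although an
    answer was expected.
    """
    type1 = sum(1 for c in cases if c["actual"] is not None and c["actual"] != c["expected"])
    type2 = sum(1 for c in cases if c["actual"] is None and c["expected"] is not None)
    passed = type1 <= max_type1 and type2 <= max_type2
    return {"type1": type1, "type2": type2, "scale_up": passed}

print(evaluate(test_cases))  # {'type1': 1, 'type2': 1, 'scale_up': False}
```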