
System development: system testing and deployment

Knowledge Management · 5 min read

Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

Validate the knowledge base structurally (organized classification, proper codification, system integration) and substantively (content relevance to employees and stakeholders).

Briefing

Testing and deployment become the make-or-break phase after a knowledge management system has already captured knowledge from databases and codified it into rules, decision trees, and other structured forms. At this stage, the system must prove it is useful in day-to-day operations—especially through user acceptance testing and training—because a poorly validated knowledge base can fail in storage, retrieval, or practical use, turning the entire effort into wasted cost.

Before any “go-live,” the system needs validation against concrete parameters. The knowledge base must be created and organized in a structured, classified format, codified properly, and actually integrated into the system. A usable user interface matters, but content relevance matters more: employees and other stakeholders must find the information applicable to their work. The system also has to align with business needs, closing the knowledge gap identified earlier and linking knowledge strategy to the organization’s business strategy.

User acceptance testing checks system behavior in a realistic environment. The ERP example from IIT Kharagpur illustrates why: an integrated enterprise resource planning system was built to replace fragmented, non-integrated systems, consolidating databases for faculty, staff, students, and even vendors and suppliers to improve efficiency and reduce cost. Early problems, such as information lag, missing or incorrect inputs, and failures to deliver the right information, surfaced during acceptance testing. Feedback from each stakeholder group determined whether the system truly worked for them. If it did, rollout could expand unit by unit; if it didn't, the system could not be considered "right."

Testing criteria must account for the special difficulty of codifying tacit knowledge, which is subjective and easily captured incompletely. If key process details are left out, the knowledge remains hard to apply and effectively stays subjective. Reliable specifications are also required; otherwise, testing becomes arbitrary. Consistency and correctness are central too: the system should not work one day and fail the next. Failures can stem from technical errors (storage and retrieval arrangements) or input errors (bad or incorrect data), so both must be checked.

A pilot-first approach is repeatedly emphasized: deploy to one unit, validate, then scale. Skipping testing for short-term speed risks long-term operational failure and unclear blame—developers, experts, or technology. Interface design is treated as layered: end users need simple access, while intermediary roles (system analysts, developers, programmers) must be able to troubleshoot when issues arise, such as connectivity problems.

Because knowledge management is cyclical—capture, codify, test, deploy, then return to the cycle—verification must be continuous. The system should be complete, confidence-building (including privacy boundaries), correct, consistent, and non-redundant. Regular updates prevent obsolete knowledge from accumulating ("garbage in, garbage out"). Logical testing also targets anomalies and errors: circular rules, redundancy, subsumption errors, inconsistent outputs, unusable knowledge, and other faults that break the logic of decision rules. Finally, acceptance testing requires a dedicated team, predefined evaluation criteria, test cases across departments, documented results, training review, and careful attention to error types (including type 1 and type 2 errors) before the system can be certified as meeting its technical and operational requirements for deployment.
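
The closing mention of type 1 and type 2 errors can be made concrete with a small sketch. Everything below (the TestCase shape, the case IDs, and the convention that the hypothesis under test is "this piece of knowledge is valid") is an illustrative assumption rather than anything specified in the source:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    system_accepted: bool   # the system accepted/applied the knowledge
    actually_valid: bool    # ground truth from the domain expert

def classify(case: TestCase) -> str:
    # Assumed convention: the hypothesis under test is
    # "this piece of knowledge is valid".
    if case.system_accepted and not case.actually_valid:
        return "type 2 error (false accept)"
    if not case.system_accepted and case.actually_valid:
        return "type 1 error (false reject)"
    return "correct"

cases = [
    TestCase("HR-001", system_accepted=True, actually_valid=True),
    TestCase("HR-002", system_accepted=True, actually_valid=False),
    TestCase("FIN-003", system_accepted=False, actually_valid=True),
]
for c in cases:
    print(c.case_id, "->", classify(c))
```

Counting the two error kinds separately matters because the fixes differ: false accepts point at weak evaluation criteria, while false rejects point at overly strict test cases or gaps in training.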

Cornell Notes

After codification, a knowledge management system must be validated through testing and deployment—primarily via user acceptance testing and training—so it works reliably for real stakeholders. Validation checks both structure (organized knowledge base, proper codification, integration, user interface) and substance (content relevance to employees and other stakeholders). Acceptance testing runs in realistic conditions, often starting with a pilot unit, collecting feedback, and then scaling only if outcomes meet predefined criteria for completeness, correctness, consistency, confidence, and usability. Logical testing targets rule-level problems such as circular rules, redundancy, subsumption errors, and inconsistent outputs, while operational testing checks technical reliability and smooth storage/retrieval. Because knowledge management is cyclical, testing and updates must continue to prevent obsolete or redundant knowledge from degrading performance.

What concrete checks determine whether a knowledge management system is “built right” before acceptance testing begins?

The knowledge base must be created and organized in a structured, classified format, codified properly, and actually put into the system. Storage and retrieval depend on correct integration, and a good user interface is required so stakeholders can access what they need. The most decisive criterion is content relevance: the knowledge base must match what employees and other stakeholders consider useful for their work. If these foundations are weak, failures can show up later as storage/retrieval problems or poor practical use.
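
As a minimal sketch, these prerequisites could be encoded as an automated pre-flight check. The field names and the 0.8 relevance threshold below are hypothetical choices for illustration, not part of any specific KM product:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeBaseStatus:
    classified: bool        # organized into a structured classification
    codified: bool          # rules / decision trees actually encoded
    integrated: bool        # wired into storage and retrieval
    relevance_score: float  # stakeholder-rated usefulness, 0.0 to 1.0

def ready_for_acceptance_testing(kb: KnowledgeBaseStatus,
                                 min_relevance: float = 0.8) -> list[str]:
    """Return the list of unmet prerequisites (empty means ready)."""
    gaps = []
    if not kb.classified:
        gaps.append("knowledge base not organized/classified")
    if not kb.codified:
        gaps.append("knowledge not properly codified")
    if not kb.integrated:
        gaps.append("knowledge base not integrated into the system")
    if kb.relevance_score < min_relevance:
        gaps.append("content relevance below stakeholder threshold")
    return gaps

print(ready_for_acceptance_testing(
    KnowledgeBaseStatus(classified=True, codified=True,
                        integrated=False, relevance_score=0.6)))
```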

Why does user acceptance testing require realistic conditions and stakeholder feedback?

User acceptance testing verifies system behavior in the environment where it will operate, not in isolation. Feedback must come from the full set of stakeholders—faculty, staff, students, and even external parties like vendors/suppliers—because each group experiences different information needs. The IIT Kharagpur ERP example highlights early failures such as information lag or incorrect inputs; only stakeholder feedback could confirm whether the system delivered the right information reliably enough to justify broader deployment.

How do technical errors and input errors differ, and why does that distinction matter during testing?

Technical errors relate to how the deployed system handles storage and retrieval—whether the system’s technical arrangement supports correct access and application. Input errors come from the data side: incorrect or missing information fed into the knowledge base. Testing must isolate which category is causing inconsistent or wrong outputs; otherwise, fixes may target the wrong layer (technology vs. data quality).
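
One way to make this isolation concrete is to replay a failing case against a known-good copy of the data before blaming the technology. The retrieve() helper and the "golden" dataset below are hypothetical stand-ins sketching the idea:

```python
def retrieve(kb: dict, key: str):
    """Stand-in for the system's storage/retrieval layer."""
    return kb.get(key)

def diagnose(kb: dict, key: str, expected, golden_kb: dict) -> str:
    # Step 1: run the same retrieval against a known-good (golden)
    # copy of the data. A failure here points at the technical layer.
    if retrieve(golden_kb, key) != expected:
        return "technical error: storage/retrieval layer"
    # Step 2: the mechanism works on good data, so compare the live
    # entry against the golden one; a mismatch is an input error.
    if kb.get(key) != golden_kb.get(key):
        return "input error: bad or missing data fed into the knowledge base"
    return "no fault reproduced for this case"

golden = {"leave-policy": "30 days"}
live = {"leave-policy": "3 days"}  # mistyped entry in the live KB
print(diagnose(live, "leave-policy", expected="30 days", golden_kb=golden))
```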

What logical testing problems are specifically targeted in knowledge-based systems?

Logical testing looks for rule-level anomalies and data-rule faults, including circular rules (rule chains whose conclusions loop back into their own conditions), redundancy (duplicate or obsolete knowledge), subsumption errors (assuming one rule implies another when they don't truly match), and inconsistent knowledge (the same inputs producing different results). It also flags unusable knowledge: content that depends on conditions that never succeed or fail in practice, leaving users no meaningful success criteria.
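
These anomalies are mechanical enough to check programmatically. Below is a minimal sketch, assuming each rule is encoded as a (frozenset of condition atoms, conclusion atom) pair; the encoding and the sample rules are illustrative, not a standard rule format:

```python
def reachable(graph, start):
    """All atoms derivable (transitively) from `start`."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def check_rules(rules):
    # Build a derivation graph: condition atom -> conclusions it can feed.
    graph = {}
    for conds, concl in rules:
        for c in conds:
            graph.setdefault(c, set()).add(concl)
    report = []
    for i, (conds, concl) in enumerate(rules):
        # Circular rule: the conclusion can eventually re-derive a condition.
        if conds & reachable(graph, concl):
            report.append((i, "circular"))
        for j, (c2, k2) in enumerate(rules):
            if i == j:
                continue
            if conds == c2 and concl == k2:
                report.append((i, f"redundant duplicate of rule {j}"))
            elif c2 < conds and concl == k2:
                # A broader rule j reaches the same conclusion from fewer
                # conditions, so the more specific rule i is subsumed.
                report.append((i, f"subsumed by rule {j}"))
            elif conds == c2 and concl != k2:
                # Same inputs, different outputs: flag for expert review.
                report.append((i, f"inconsistent with rule {j}"))
    return report

rules = [
    (frozenset({"a", "b"}), "c"),
    (frozenset({"a"}), "c"),       # subsumes rule 0
    (frozenset({"a", "b"}), "d"),  # same inputs as rule 0, different output
    (frozenset({"c"}), "a"),       # closes a loop back to rule 0
]
print(check_rules(rules))
```

On the sample rules this reports rule 0 as circular (via rule 3), subsumed by rule 1, and inconsistent with rule 2, which is exactly the kind of report a test team would hand back to the domain experts.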

Why is pilot testing and phased deployment emphasized?

Pilot testing starts with one unit or setup to confirm the system works under real usage conditions. If it performs well, deployment expands to additional units; if it fails, the cycle returns to knowledge capture and codification to locate the fault. This approach reduces the risk of organization-wide failure and avoids wasting resources when the system’s reliability, usability, or logic is still uncertain.
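
A phased gate of this kind can be sketched in a few lines. The unit ordering and the 95% pass-rate threshold below are invented for illustration:

```python
ROLLOUT_ORDER = ["pilot-unit", "faculty", "staff", "students", "vendors"]

def next_action(unit: str, pass_rate: float, threshold: float = 0.95) -> str:
    if pass_rate < threshold:
        # Failure sends the effort back around the KM cycle.
        return f"halt at {unit}: return to capture/codification to locate the fault"
    idx = ROLLOUT_ORDER.index(unit)
    if idx + 1 < len(ROLLOUT_ORDER):
        return f"expand deployment to {ROLLOUT_ORDER[idx + 1]}"
    return "full deployment complete"

print(next_action("pilot-unit", pass_rate=0.97))  # expand to faculty
print(next_action("faculty", pass_rate=0.80))     # halt, recapture
```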

What does “confidence” in user acceptance testing include beyond simply believing the system works?

Confidence includes training so users know how to operate the system and trust it in practice. It also includes privacy expectations: users should access only the data permitted for their role, with sensitive information not exposed to others except at higher/intermediary levels. Confidence is therefore both operational (training and usability) and governance-related (secrecy and access boundaries).
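
The access-boundary half of confidence can be sketched as a simple role-to-category policy. The roles and data categories below are assumptions for illustration:

```python
# Each role maps to the data categories it may see; intermediary
# technical roles sit at a higher level with broader access.
ACCESS_POLICY = {
    "student": {"own-grades", "course-material"},
    "faculty": {"own-grades", "course-material", "class-rosters"},
    "analyst": {"own-grades", "course-material", "class-rosters",
                "system-logs"},
}

def can_access(role: str, category: str) -> bool:
    return category in ACCESS_POLICY.get(role, set())

assert can_access("faculty", "class-rosters")
assert can_access("analyst", "system-logs")
assert not can_access("student", "system-logs")  # kept at a higher level
```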

Review Questions

  1. Which validation parameters distinguish structural readiness (organization, codification, integration) from content readiness (relevance to stakeholders) in a knowledge management system?
  2. How would you design a testing plan to separate technical errors from input errors when outputs become inconsistent?
  3. What logical error types—such as circular rules, subsumption errors, or inconsistent knowledge—would you look for when a decision tree produces contradictory results?

Key Points

  1. Validate the knowledge base structurally (organized classification, proper codification, system integration) and substantively (content relevance to employees and stakeholders).
  2. Run user acceptance testing in realistic conditions with direct feedback from all stakeholder groups, not just internal developers.
  3. Start with pilot testing in one unit, then scale deployment only after the system meets predefined criteria for completeness, correctness, consistency, and usability.
  4. Treat consistency as a reliability requirement: the system should produce stable results over time, and failures must be traced to either technical errors or input errors.
  5. Design user interfaces in layers so end users can access information easily while intermediary technical roles can troubleshoot issues like connectivity or access problems.
  6. Continuously update the knowledge base to prevent redundancy and obsolescence, following the “garbage in, garbage out” principle.
  7. Use logical testing to detect rule-level anomalies (circular rules, redundancy, subsumption errors, inconsistent outputs) and operational testing to confirm technical and operational requirements before acceptance.

Highlights

  • User acceptance testing is the checkpoint that determines whether a codified knowledge system actually works for each stakeholder group—faculty, staff, students, and even vendors/suppliers in the ERP example.
  • Testing must distinguish technical errors (storage/retrieval arrangements) from input errors (incorrect data fed into the system), since both can produce wrong or inconsistent outputs.
  • Because knowledge management is cyclical, verification and updates must continue to prevent obsolete or redundant knowledge from degrading reliability and usefulness.
  • Logical testing targets specific rule failures—circular rules, subsumption errors, and inconsistent knowledge—rather than relying on vague “it seems fine” judgments.

Topics

  • Knowledge Management System
  • User Acceptance Testing
  • Pilot Deployment
  • Logical Testing
  • ERP Integration

Mentioned

  • ERP