Developing the KM system
Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Design KM as a layered architecture so that the interface, security, filtering, application, transport, integration, and storage layers work together rather than in isolation.
Briefing
Building a knowledge management (KM) system means assembling multiple layers so users can reliably reach content, security stays intact, and information is filtered into something useful—not just stored. The interface layer sits at the front line, acting as the main contact point between people and KM content. It supports both tacit knowledge (personal know-how passed between individuals) and explicit knowledge (documented information stored in repositories). Transfers of tacit knowledge are described as internalization—knowledge moves from one person to another so the recipient can apply it—while explicit knowledge transfer is framed as externalization, where documented material in the system is used by others.
A key complication is that repositories often hold knowledge without context. Code or a program may be available, but the “when, why, and how it should be used” may not be captured. The transcript contrasts this context gap with tacit transfers, which can carry richer situational meaning through conversation, whiteboards, or other informal mechanisms. Even when knowledge is formally transferred, it can fall somewhere between “no context” and “full context,” depending on how it’s packaged.
On the technical side, the interface layer relies on web access—web browsers act as the access channel for KM content deposited in repositories. Performance optimization matters: network conditions, server capacity, and media handling (text, graphics, video) must be tuned so users can retrieve content without access failures. The system also needs cross-platform consistency. HTML is highlighted as a universal format so content created on Windows, Android-based platforms, or Apple-based platforms remains viewable across environments.
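As a toy illustration of the cross-platform point, repository content can be wrapped in plain HTML so that any browser on Windows, Android, or Apple platforms renders it the same way. The `to_html` helper and sample content below are assumptions for illustration, not something from the transcript.

```python
# Sketch: wrap repository content in a minimal, platform-neutral HTML page.
# escape() prevents content characters like "&" or "<" from breaking markup.
from html import escape

def to_html(title: str, body: str) -> str:
    """Render a piece of KM content as a minimal standalone HTML page."""
    return (
        "<!DOCTYPE html>\n"
        f"<html><head><title>{escape(title)}</title></head>\n"
        f"<body><h1>{escape(title)}</h1><p>{escape(body)}</p></body></html>"
    )
```

Because the output is plain HTML, no platform-specific viewer is needed; the browser itself is the access channel.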
Next comes the access and authentication layer, which controls who can see or modify what. Access privileges are assigned by organizational level, with examples ranging from read-only access for lower-level users to editing and modification rights for senior managers. Security mechanisms include firewalls between extranet and intranet to reduce virus and intrusion risk, plus backups and mirrored storage so hacked or failed repositories can be restored. Virtual private networks (VPNs) are presented as secure point-to-point communication lines for privileged or copyrighted organizational information.
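The privilege scheme described above can be sketched as a small role-to-permission table. The role names and permission flags here are illustrative assumptions, not taken from the transcript, which only contrasts read-only access for lower levels with editing rights for senior managers.

```python
# Sketch of role-based access privileges for a KM repository.
from enum import Flag, auto

class Permission(Flag):
    READ = auto()
    EDIT = auto()
    DELETE = auto()

# Hypothetical mapping from organizational level to privileges.
ROLE_PRIVILEGES = {
    "staff": Permission.READ,                                        # read-only
    "manager": Permission.READ | Permission.EDIT,                    # can modify
    "senior_manager": Permission.READ | Permission.EDIT | Permission.DELETE,
}

def is_allowed(role: str, needed: Permission) -> bool:
    """Check whether a role's granted privileges include the needed permission."""
    granted = ROLE_PRIVILEGES.get(role, Permission(0))  # unknown roles get nothing
    return needed in granted
```

Unknown roles fall through to an empty permission set, which mirrors the layer's default-deny stance.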
The transcript then lists common network and security standards and techniques: Lightweight Directory Access Protocol (LDAP), "analyzer and layer plan" (as named in the transcript, likely a transcription garble), Point-to-Point Tunneling Protocol (PPTP), secure email using certificate-based encryption, and virtual cards (i.e., vCards) for contact storage and presentation. It also points to biometrics—face, voice, and fingerprint recognition—as an authentication method to prevent unauthorized access.
Once users are authenticated, the collaborative filtering and intelligence layer determines what they actually see. It uses text attributes and knowledge elements—via automated or manual procedures—to filter out irrelevant results and surface relevant content quickly. The system’s knowledge structures can be static (video, sound, and other media that don’t change) or dynamic (especially text documents that can be updated). Navigation quality matters: static structures can trap users if links don’t lead to further information, while dynamic, collaborative authoring aims to keep pathways open.
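One minimal way to picture attribute-based filtering is to score each document by how many of the user's query terms match its text attributes, then drop anything below a threshold. The documents, attribute names, and scoring rule below are invented for illustration; they are not the system's actual algorithm.

```python
# Sketch: filter and rank documents by overlap between query terms
# and each document's text attributes (knowledge elements).

def filter_relevant(documents, query_terms, min_matches=1):
    """Return document titles sorted by number of matching attribute terms."""
    scored = []
    for doc in documents:
        attrs = {term.lower() for term in doc["attributes"]}
        score = len(attrs & {t.lower() for t in query_terms})
        if score >= min_matches:               # filter out irrelevant results
            scored.append((score, doc["title"]))
    return [title for score, title in sorted(scored, reverse=True)]

docs = [
    {"title": "VPN setup guide", "attributes": ["security", "network", "vpn"]},
    {"title": "Holiday schedule", "attributes": ["hr", "calendar"]},
    {"title": "Firewall policy", "attributes": ["security", "firewall"]},
]
```

A query for security-related VPN material would rank the VPN guide first and exclude the holiday schedule entirely, which is the layer's job: surfacing relevant content quickly rather than returning everything.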
To broaden what’s available without storing everything locally, the transcript introduces virtual folders—accessing external repositories through navigation, metadata search, or subscription-based databases. It also emphasizes automatic full-text indexing for fast retrieval, and meta tagging to attach document context such as author, modification dates, reviewers, and file size. Client-server architecture is contrasted with agent computing: mobile agents reduce network load by moving computation closer to data, enabling real-time, autonomous, protocol-following behavior.
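Automatic full-text indexing plus meta tagging can be sketched with a small inverted index. The `Repository` class below is an illustrative assumption; the metadata fields (author, modification date, file size) echo the transcript's examples, but the sample documents are invented.

```python
# Sketch: automatic full-text indexing (inverted index) with meta tags.
from collections import defaultdict

class Repository:
    def __init__(self):
        self.docs = {}                 # doc_id -> {"text": ..., "meta": ...}
        self.index = defaultdict(set)  # word -> set of doc_ids containing it

    def add(self, doc_id, text, meta):
        self.docs[doc_id] = {"text": text, "meta": meta}
        for word in text.lower().split():   # automatic full-text indexing
            self.index[word].add(doc_id)

    def search(self, word):
        """Fast retrieval: a single lookup in the inverted index."""
        return sorted(self.index.get(word.lower(), set()))

repo = Repository()
repo.add("d1", "VPN tunnel configuration for remote access",
         {"author": "jsmith", "modified": "2021-03-02", "size_kb": 48})
repo.add("d2", "Firewall rules for the intranet",
         {"author": "alee", "modified": "2021-04-10", "size_kb": 12})
```

Searching never scans document text at query time; the index was built once at add time, which is why retrieval stays fast as the repository grows.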
Finally, the remaining layers connect the system end-to-end: the application layer provides tools like directories, yellow pages, collaborative tools, and video conferencing; the transport layer handles network communication and streaming support; middleware and legacy integration bridge older backend systems with new KM formats; and the repository layer anchors everything in databases, archives, legacy documents, and media. The overall takeaway is that KM effectiveness depends on tight integration across all layers—from user interface to storage—so knowledge is accessible, secure, and meaningfully filtered rather than merely collected.
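The end-to-end flow above can be caricatured as a request passing through the layers in order, so a failure at any layer stops it before it reaches the repository. The layer names follow the transcript, but the wiring, role names, and in-memory "repository" are assumptions made purely for illustration.

```python
# Toy sketch: a KM request flows through layers in sequence.

def interface_layer(request):
    request["format"] = "html"            # serve in a cross-platform format
    return request

def auth_layer(request):
    if request.get("role") not in ("staff", "manager"):
        raise PermissionError("unauthenticated or unprivileged role")
    return request

def filtering_layer(request):
    # Keep only repository entries relevant to the requested topic.
    request["results"] = [d for d in request["repository"]
                          if request["topic"] in d]
    return request

def handle(request):
    for layer in (interface_layer, auth_layer, filtering_layer):
        request = layer(request)          # each layer must pass for the next to run
    return request["results"]
```

The point of the linear chain is the takeaway itself: a weak layer (say, authentication) blocks or corrupts everything downstream, so the layers only deliver value when they are integrated.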
Cornell Notes
A KM system works best when it’s built as an integrated set of layers rather than a single database or portal. The interface layer is the user’s entry point and supports both tacit knowledge transfer (internalization between people) and explicit knowledge use (externalization from documented repositories). Access and authentication then restricts permissions by role, using tools like firewalls, backups, VPNs, and biometrics to protect privileged information. After authentication, collaborative filtering and intelligence uses attributes, meta tagging, and indexing to surface relevant content quickly, while virtual folders and subscriptions extend access beyond local storage. The remaining layers—application, transport, middleware/legacy integration, and repository—connect tools, networks, old systems, and the underlying archives so knowledge can be retrieved and used reliably.
Why does the interface layer matter more than just “where users click”?
What problem does the access and authentication layer solve, and how is it implemented?
How does collaborative filtering and intelligence turn “stored knowledge” into “relevant results”?
What’s the difference between static and dynamic knowledge structures, and why does navigation quality matter?
Why introduce virtual folders instead of forcing every organization to store everything locally?
How do mobile agents reduce network load compared with a basic client-server setup?
Review Questions
- What trade-off does the transcript highlight between knowledge stored in repositories and knowledge that includes context, and how does that affect tacit vs explicit transfer?
- How do access privileges and authentication mechanisms work together to protect KM content, and what examples of each are given?
- In what ways do static and dynamic knowledge structures change user navigation and the ability to update information?
Key Points
1. Design KM as a layered architecture so interface, security, filtering, applications, transport, integration, and storage work together rather than in isolation.
2. Treat tacit knowledge transfer (internalization between people) differently from explicit knowledge use (externalization from documented repositories).
3. Plan for context loss in repositories: content may exist without the situational context needed for correct application.
4. Implement role-based access controls (read/edit/delete) and protect systems with firewalls, backups, mirrored storage, and VPNs.
5. Use collaborative filtering and intelligence—via text attributes, meta tagging, and full-text indexing—to surface relevant information quickly.
6. Support cross-platform access by standardizing content formats, with HTML called out as a universal web language.
7. Bridge old and new systems through middleware/legacy integration so existing backend data can feed the KM repository layer.