
Developing the KM system

6 min read

Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Design KM as a layered architecture so interface, security, filtering, applications, transport, integration, and storage work together rather than in isolation.

Briefing

Building a knowledge management (KM) system means assembling multiple layers so users can reliably reach content, security stays intact, and information is filtered into something useful—not just stored. The interface layer sits at the front line, acting as the main contact point between people and KM content. It supports both tacit knowledge (personal know-how passed between individuals) and explicit knowledge (documented information stored in repositories). Transfers of tacit knowledge are described as internalization—knowledge moves from one person to another so the recipient can apply it—while explicit knowledge transfer is framed as externalization, where documented material in the system is used by others.

A key complication is that repositories often hold knowledge without context. Code or a program may be available, but the “when, why, and how it should be used” may not be captured. The transcript contrasts this context gap with tacit transfers, which can carry richer situational meaning through conversation, whiteboards, or other informal mechanisms. Even when knowledge is formally transferred, it can fall somewhere between “no context” and “full context,” depending on how it’s packaged.

On the technical side, the interface layer relies on web access—web browsers act as the access channel for KM content deposited in repositories. Performance optimization matters: network conditions, server capacity, and media handling (text, graphics, video) must be tuned so users can retrieve content without access failures. The system also needs cross-platform consistency. HTML is highlighted as a universal format so content created on Windows, Android-based platforms, or Apple-based platforms remains viewable across environments.

Next comes the access and authentication layer, which controls who can see or modify what. Access privileges are assigned by organizational level, with examples ranging from read-only access for lower-level users to editing and modification rights for senior managers. Security mechanisms include firewalls between extranet and intranet to reduce virus and intrusion risk, plus backups and mirrored storage so hacked or failed repositories can be restored. Virtual private networks (VPNs) are presented as secure point-to-point communication lines for privileged or copyrighted organizational information.

The transcript then lists common network and security standards and techniques: Lightweight Directory Access Protocol (LDAP), an "analyzer and layer plan" (as named in the transcript), Point-to-Point Tunneling Protocol (PPTP), secure email using certificate-based encryption, and virtual cards (vCards) for contact storage and presentation. It also points to biometrics—face, voice, and fingerprint recognition—as an authentication method to prevent unauthorized access.

Once users are authenticated, the collaborative filtering and intelligence layer determines what they actually see. It uses text attributes and knowledge elements—via automated or manual procedures—to filter out irrelevant results and surface relevant content quickly. The system’s knowledge structures can be static (video, sound, and other media that don’t change) or dynamic (especially text documents that can be updated). Navigation quality matters: static structures can trap users if links don’t lead to further information, while dynamic, collaborative authoring aims to keep pathways open.

To broaden what’s available without storing everything locally, the transcript introduces virtual folders—accessing external repositories through navigation, metadata search, or subscription-based databases. It also emphasizes automatic full-text indexing for fast retrieval, and meta tagging to attach document context such as author, modification dates, reviewers, and file size. Client-server architecture is contrasted with agent computing: mobile agents reduce network load by moving computation closer to data, enabling real-time, autonomous, protocol-following behavior.
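As a rough sketch of how automatic full-text indexing and meta tagging might fit together, the snippet below builds an inverted index over a pair of hypothetical documents and keeps each document's meta tags (author, modification date, size) alongside the text. All document names, fields, and contents are illustrative assumptions, not taken from the transcript.

```python
from collections import defaultdict

# Hypothetical documents with meta tags — the kind of context
# (author, dates, size) the transcript says repositories should attach.
documents = {
    "doc1": {"text": "backup and mirrored storage restore hacked repositories",
             "meta": {"author": "Ops Team", "modified": "2024-03-01", "size_kb": 12}},
    "doc2": {"text": "collaborative filtering surfaces relevant content quickly",
             "meta": {"author": "KM Group", "modified": "2024-05-14", "size_kb": 8}},
}

def build_index(docs):
    """Automatic full-text indexing: map each word to the documents containing it."""
    index = defaultdict(set)
    for doc_id, doc in docs.items():
        for word in doc["text"].lower().split():
            index[word].add(doc_id)
    return index

index = build_index(documents)
hits = sorted(index["filtering"])      # fast lookup instead of scanning every document
for doc_id in hits:
    print(doc_id, documents[doc_id]["meta"])
```

A real KM system would also stem words, rank results, and persist the index, but the lookup pattern is the same: one dictionary probe per query term rather than a scan of the repository.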

Finally, the remaining layers connect the system end-to-end: the application layer provides tools like directories, yellow pages, collaborative tools, and video conferencing; the transport layer handles network communication and streaming support; middleware and legacy integration bridge older backend systems with new KM formats; and the repository layer anchors everything in databases, archives, legacy documents, and media. The overall takeaway is that KM effectiveness depends on tight integration across all layers—from user interface to storage—so knowledge is accessible, secure, and meaningfully filtered rather than merely collected.

Cornell Notes

A KM system works best when it’s built as an integrated set of layers rather than a single database or portal. The interface layer is the user’s entry point and supports both tacit knowledge transfer (internalization between people) and explicit knowledge use (externalization from documented repositories). Access and authentication then restricts permissions by role, using tools like firewalls, backups, VPNs, and biometrics to protect privileged information. After authentication, collaborative filtering and intelligence uses attributes, meta tagging, and indexing to surface relevant content quickly, while virtual folders and subscriptions extend access beyond local storage. The remaining layers—application, transport, middleware/legacy integration, and repository—connect tools, networks, old systems, and the underlying archives so knowledge can be retrieved and used reliably.

Why does the interface layer matter more than just “where users click”?

It’s the primary contact point between people and KM content, and it’s where tacit and explicit knowledge meet. The transcript frames tacit transfer as internalization (knowledge moving from one person to another so the recipient can apply it) and explicit transfer as externalization (documented knowledge in the system being used by others). It also highlights a practical issue: repositories often store knowledge without context, so the interface layer must support access paths that preserve or supplement context—especially when knowledge is passed through informal mechanisms like conversation or whiteboards.

What problem does the access and authentication layer solve, and how is it implemented?

It prevents unauthorized access and controls what users can do with KM content. Permissions are assigned by organizational level (e.g., lower-level users may read only part of the data, while top management can access different datasets and can edit or modify). Security implementation includes firewalls between extranet and intranet, backups and mirrored storage for recovery, and VPNs for secure communication. The transcript also names certificate-based encryption for secure email and biometrics (face/voice/fingerprint recognition) to block access by the wrong person.
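A minimal sketch of the level-based permissions described above: the role names and permission sets are hypothetical, since the transcript only distinguishes read-only lower levels from senior managers who can also edit and modify.

```python
# Assumed role-to-permission mapping; the exact roles are invented
# for illustration, only the read-vs-edit split comes from the source.
PERMISSIONS = {
    "staff": {"read"},
    "manager": {"read", "edit"},
    "senior_manager": {"read", "edit", "modify"},
}

def can(role: str, action: str) -> bool:
    """Return True if the role's permission set includes the action."""
    return action in PERMISSIONS.get(role, set())

print(can("staff", "edit"))           # False: lower levels are read-only
print(can("senior_manager", "edit"))  # True: senior managers may edit
```

Keeping the mapping in one table makes the policy auditable: adding a level or revoking a right is a data change, not a code change.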

How does collaborative filtering and intelligence turn “stored knowledge” into “relevant results”?

It filters content using text attributes and knowledge elements, using automated or manual procedures to remove irrelevant information. The goal is fast retrieval through multiple search mechanisms, so users get what matches their task rather than everything in the repository. It’s also tied to how knowledge is structured: static structures (like video or sound) don’t change, while dynamic structures (especially text) can be updated, affecting how navigation and updates work.
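One way to sketch this attribute-based filtering is to score each item by how many of the user's task keywords overlap its attributes and drop anything with no overlap; the item titles and attribute sets below are invented for illustration.

```python
# Hypothetical repository items, each tagged with text attributes.
items = [
    {"title": "VPN setup guide", "attributes": {"security", "network", "vpn"}},
    {"title": "Holiday party photos", "attributes": {"social", "media"}},
    {"title": "Firewall policy", "attributes": {"security", "firewall"}},
]

def filter_relevant(items, task_keywords):
    """Keep only items whose attributes overlap the task keywords,
    ranked by how many keywords match (irrelevant items are removed)."""
    scored = [(len(item["attributes"] & task_keywords), item) for item in items]
    return [item for score, item in sorted(scored, key=lambda s: -s[0]) if score > 0]

results = filter_relevant(items, {"security", "network"})
print([item["title"] for item in results])
```

The zero-score cutoff is the "filter out irrelevant results" step; the sort is the "surface relevant content quickly" step.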

What’s the difference between static and dynamic knowledge structures, and why does navigation quality matter?

Static structures keep the nature of the document unchanged (the transcript notes video and sound as static), so users typically navigate via hyperlinks but can get stuck if links don’t lead to deeper material. Dynamic structures allow changes—text files can be updated—supporting collaborative authoring and more continuous pathways between related items. The transcript stresses avoiding “dead ends” where users reach a point with no further links or information.

Why introduce virtual folders instead of forcing every organization to store everything locally?

Virtual folders let users access external repositories as needed through navigation, metadata search, or subscription-based databases. This avoids the risk of becoming “useless” when local repositories are limited, and it reduces the effort of adding and maintaining every dataset internally. The transcript’s example is that subscription databases (e.g., an XCode database mentioned) may not provide data freely; users must search metadata and then access the relevant content.
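The two-step pattern described here—search metadata first, then fetch only what is needed—can be sketched as follows. The catalog entries, field names, and the `fetch` stub are assumptions; a real system would call the subscription service instead.

```python
# Hypothetical metadata catalog for an external, subscription-based repository.
EXTERNAL_CATALOG = {
    "rep-001": {"title": "Network security standards", "keywords": ["vpn", "firewall"]},
    "rep-002": {"title": "Indexing best practices", "keywords": ["search", "metadata"]},
}

def search_metadata(keyword):
    """Step 1: search metadata only — no document content is transferred."""
    return [doc_id for doc_id, meta in EXTERNAL_CATALOG.items()
            if keyword in meta["keywords"]]

def fetch(doc_id):
    """Step 2: retrieve just the document the user needs (stubbed here)."""
    return f"<contents of {EXTERNAL_CATALOG[doc_id]['title']}>"

matches = search_metadata("vpn")
print([fetch(doc_id) for doc_id in matches])
```

Nothing is stored locally except the lightweight catalog, which is the point of the virtual-folder approach.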

How do mobile agents reduce network load compared with a basic client-server setup?

In a client-server model, network load is primarily between the client and the server. With agent computing, the transcript describes shifting load into the space between the agent server and the client, reducing direct network activity. Mobile agents can move across multiple hosts, transferring code as needed, and they act in real time with reduced network traffic. Because they’re automated, they follow protocols and can operate asynchronously and autonomously without constant user effort.
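A toy arithmetic comparison of the two models, under an assumed cost model: client-server ships every record to the client, while a mobile agent ships a small piece of code to the data host and returns only the aggregate. All byte counts are illustrative, not from the transcript.

```python
# Assumed sizes (illustrative only).
NUM_RECORDS = 10_000
RECORD_SIZE = 1_000          # bytes per record held on the server
AGENT_CODE_SIZE = 5_000      # bytes to move the agent's code to the host
RESULT_SIZE = 100            # bytes for the aggregated answer sent back

# Client-server: every record crosses the network before the client aggregates.
client_server_bytes = NUM_RECORDS * RECORD_SIZE

# Mobile agent: code travels to the data, computes locally, returns a summary.
agent_bytes = AGENT_CODE_SIZE + RESULT_SIZE

print(client_server_bytes, agent_bytes)
```

Under these assumptions the agent moves four orders of magnitude less data; the advantage shrinks as result sizes grow or agent code gets heavier, which is why agents suit aggregate queries over large remote datasets.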

Review Questions

  1. What trade-off does the transcript highlight between knowledge stored in repositories and knowledge that includes context, and how does that affect tacit vs explicit transfer?
  2. How do access privileges and authentication mechanisms work together to protect KM content, and what examples of each are given?
  3. In what ways do static and dynamic knowledge structures change user navigation and the ability to update information?

Key Points

  1. Design KM as a layered architecture so interface, security, filtering, applications, transport, integration, and storage work together rather than in isolation.

  2. Treat tacit knowledge transfer (internalization between people) differently from explicit knowledge use (externalization from documented repositories).

  3. Plan for context loss in repositories: content may exist without the situational context needed for correct application.

  4. Implement role-based access controls (read/edit/delete) and protect systems with firewalls, backups, mirrored storage, and VPNs.

  5. Use collaborative filtering and intelligence—via text attributes, meta tagging, and full-text indexing—to surface relevant information quickly.

  6. Support cross-platform access by standardizing content formats, with HTML called out as a universal web language.

  7. Bridge old and new systems through middleware/legacy integration so existing backend data can feed the KM repository layer.

Highlights

Repositories can store knowledge without context, leaving users with information that may not explain when or how it should be applied.
Access control is role-based, ranging from read-only access for lower-level users to editing and modification rights for senior managers.
Collaborative filtering uses attributes and knowledge elements to filter out irrelevant results before users see content.
Virtual folders extend KM reach through metadata search and subscription-based external repositories without requiring local storage of everything.
Mobile agents reduce network load by moving computation across hosts and operating autonomously in real time.

Topics

  • KM Layered Architecture
  • Interface and Context
  • Access Control and VPN
  • Collaborative Filtering
  • Mobile Agents and Indexing

Mentioned

  • KM
  • VPN
  • LDAP
  • PPTP
  • HTML