Metrics
Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Benchmarking should be tied to specific goals and objectives; comparing to averages is not the same as benchmarking for improvement.
Briefing
Benchmarking turns goals into measurable standards—then tests whether performance beats those targets, not just whether it looks “average.” The core idea is that organizations should set benchmarks that matter: quality of service, customer satisfaction, product lifecycle outcomes, and cost/benefit from investments in IT and knowledge management systems. Benchmarks can be standardized, but the comparison should be intentional—against internal goals, industry norms, or direct competitors—because “average performance” is not the same thing as a benchmark designed to drive improvement or a higher level of performance.
Benchmarking can be done in several directions. Internally, firms compare units and divisions against each other’s targets, while avoiding internal barriers that would distort results. Externally, organizations may benchmark against competitors, but legal and practical constraints often require third-party consultants to validate comparisons. In that competitive framing, benchmarking also becomes a way to assess knowledge strength—what knowledge assets competitors have and whether an organization can compete using its own knowledge base.
Industry benchmarks provide another reference point. The transcript uses IT attrition as an example: if the industry standard sits around 20–23%, an organization might set a stricter benchmark such as “no more than 15%,” then build systems to achieve it and evaluate whether the target is met. Across industries, benchmarking can still yield insights—especially when some firms participate and others don’t—so the organization can judge where it stands relative to both peers and non-peers, while weighing the cost of participation against expected benefits.
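The attrition example above can be sketched as a small comparison. This is an illustrative sketch, not from the transcript: the `assess_attrition` function and its category labels are invented here; only the 20–23% industry range and 15% internal target come from the source.

```python
# Illustrative sketch: comparing a measured IT attrition rate against the
# industry norm (20-23%, per the transcript) and a stricter internal
# benchmark ("no more than 15%"). Function and labels are hypothetical.
INDUSTRY_RANGE = (0.20, 0.23)   # industry standard cited in the transcript
INTERNAL_TARGET = 0.15          # stricter internal benchmark

def assess_attrition(measured: float) -> str:
    """Classify performance relative to the internal target and industry norm."""
    if measured <= INTERNAL_TARGET:
        return "meets internal benchmark"
    if measured <= INDUSTRY_RANGE[0]:
        return "beats industry norm, misses internal benchmark"
    return "at or above industry norm"

print(assess_attrition(0.14))  # meets internal benchmark
print(assess_attrition(0.18))  # beats industry norm, misses internal benchmark
```

The point of the comparison is the one the transcript makes: matching the industry average is not the same as meeting a benchmark deliberately set below it.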
A key lesson is that benchmarks must create competitive advantage by being hard to copy or substitute. If competitors can easily match the benchmark, the playing field levels out and the benchmark loses value. The transcript illustrates this with early ATM deployment: once the technology spread, other banks could compete using similar systems, eliminating the initial edge. The same logic applies to knowledge management: benchmarks should target critical, valuable activities where competitors struggle to imitate, and where internal systems can’t simply be replicated into parity.
The process of benchmarking is described as five stages. First, decide what to benchmark—processes, products, or services—and define the scope and rationale (including whether the target is rare and difficult to substitute). Second, assemble the team, allocate resources, and include the right customers (internal employees for HR, external customers for service delivery), while securing top management support and budget. Third, set the benchmark targets and partners, factoring in time, budget, and information availability. Fourth, collect and analyze data—such as competitor pricing for logistics or supply-chain services—then assess whether performance is ahead or behind. Finally, use feedback to judge whether the benchmark effort worked; if not, revisit targets and partners and repeat.
Beyond benchmarking, the transcript outlines two additional measurement approaches for knowledge management effectiveness. Quality Function Deployment (linked to the “House of Quality”) maps desired outcomes—knowledge creation, knowledge sharing, faster problem solving, cost and quality improvements, and customer satisfaction—against performance results, using ranked priorities and correlation between goals and outcomes. Balanced Scorecard, developed by Kaplan and Norton, adds four perspectives—financial, customer, internal business processes, and learning/growth—then translates knowledge management vision into measurable objectives, targets, and initiatives. The transcript also references Skandia’s intellectual capital “Navigator,” which tracks human, structural, customer, and organizational capital, emphasizing whether intellectual capital is appreciating and adding value.
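The QFD idea of ranked priorities weighted by goal–outcome correlation can be sketched as a weighted score. This is a hypothetical illustration: the outcome names come from the transcript, but the priority weights, the 0–1 correlation scale, and the `qfd_score` function are invented for the sketch.

```python
# Hypothetical QFD-style scoring: ranked outcome priorities weighted by how
# strongly each achieved result correlates with its goal. Weights are invented.
PRIORITIES = {                      # higher weight = higher-ranked outcome
    "knowledge creation": 5,
    "knowledge sharing": 4,
    "faster problem solving": 3,
    "cost/quality improvement": 2,
    "customer satisfaction": 1,
}

def qfd_score(correlations: dict) -> float:
    """Weighted average of goal/outcome correlations (each in 0..1)."""
    total = sum(PRIORITIES.values())
    return sum(PRIORITIES[k] * correlations.get(k, 0.0) for k in PRIORITIES) / total
```

A score near 1.0 means achieved results correlate strongly with the highest-ranked goals; a low score flags effort spent on outcomes the organization did not prioritize.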
The closing recommendations stress practical discipline: establish a baseline, use both qualitative and quantitative measures, avoid too many or uncontrollable metrics, reward participants appropriately, and keep claims conservative while setting achievable goals. The overall message is that knowledge management success should be measured with standards that drive real outcomes—cost, quality, customer value, and intellectual capital—not vague activity counts.
Cornell Notes
Benchmarking converts knowledge management goals into measurable standards and tests whether performance beats targets rather than merely matching averages. It can be internal (comparing units), industry-based (using norms like IT attrition rates), or competitor-based (often requiring third-party validation due to legal constraints). The transcript emphasizes choosing benchmarks that are critical and hard to copy, since easily imitated benchmarks erase competitive advantage. For measuring knowledge management effectiveness beyond benchmarking, it outlines Quality Function Deployment (“House of Quality”) to link prioritized outcomes (knowledge creation, sharing, faster problem solving, cost/quality, customer satisfaction) to performance results, and Balanced Scorecard to track financial, customer, internal process, and learning/growth perspectives. Both approaches aim to connect knowledge initiatives to measurable business outcomes and intellectual capital value.
How is benchmarking different from simply comparing to an average performance level?
Why does the transcript warn that benchmarks must be difficult to copy or substitute?
What are the five stages of the benchmarking process described?
How does Quality Function Deployment (“House of Quality”) measure knowledge management effectiveness?
What does Balanced Scorecard add to knowledge management measurement?
Review Questions
- What criteria should guide the choice of a benchmarking target so it creates competitive advantage rather than parity?
- In the five-stage benchmarking cycle, what triggers repeating earlier stages, and what changes when repeating?
- How do Quality Function Deployment and Balanced Scorecard differ in what they measure and how they connect knowledge initiatives to business outcomes?
Key Points
1. Benchmarking should be tied to specific goals and objectives; comparing to averages is not the same as benchmarking for improvement.
2. Choose benchmarks that are critical to the organization and hard for competitors to copy or substitute; otherwise the advantage fades.
3. Benchmarking can be internal, industry-based, or competitor-based; competitor benchmarking may require third-party consultants to handle legal and practical issues.
4. The benchmarking workflow follows five stages: select what to benchmark, form the team and secure support, set targets and partners, collect and analyze data, then use feedback to decide whether to repeat.
5. Quality Function Deployment (“House of Quality”) measures knowledge management by linking prioritized outcomes (knowledge creation/sharing, problem-solving speed, cost/quality, customer satisfaction) to achieved performance.
6. Balanced Scorecard measures knowledge management through four linked perspectives—financial, customer, internal processes, and learning/growth—so knowledge initiatives map to measurable outcomes.
7. Knowledge management assessment should use a baseline, a limited set of controllable metrics, conservative claims, and appropriate rewards for participants.