
Metrics

6 min read

Based on Knowledge Management's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Benchmarking should be tied to specific goals and objectives; comparing to averages is not the same as benchmarking for improvement.

Briefing

Benchmarking turns goals into measurable standards—then tests whether performance beats those targets, not just whether it looks “average.” The core idea is that organizations should set benchmarks that matter: quality of service, customer satisfaction, product lifecycle outcomes, and cost/benefit from investments in IT and knowledge management systems. Benchmarks can be standardized, but the comparison should be intentional—against internal goals, industry norms, or direct competitors—because “average performance” is not the same thing as a benchmark designed to drive improvement or a higher level of performance.

Benchmarking can be done in several directions. Internally, firms compare units and divisions against each other’s targets, while avoiding internal barriers that would distort results. Externally, organizations may benchmark against competitors, but legal and practical constraints often require third-party consultants to validate comparisons. In that competitive framing, benchmarking also becomes a way to assess knowledge strength—what knowledge assets competitors have and whether an organization can compete using its own knowledge base.

Industry benchmarks provide another reference point. The transcript uses IT attrition as an example: if the industry standard sits around 20–23%, an organization might set a stricter benchmark such as “no more than 15%,” then build systems to achieve it and evaluate whether the target is met. Across industries, benchmarking can still yield insights—especially when some firms participate and others don’t—so the organization can judge where it stands relative to both peers and non-peers, while weighing the cost of participation against expected benefits.
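The attrition example above can be made concrete with a small check. This is a hedged sketch: the headcount figures and the 15% internal target below are illustrative assumptions, not numbers from the transcript (only the 20–23% industry range and the stricter-than-industry idea come from the source).

```python
# Sketch: testing an internal attrition benchmark against an industry norm.
# All headcount numbers below are invented for illustration.

def attrition_rate(departures: int, avg_headcount: int) -> float:
    """Annual attrition as a percentage of average headcount."""
    return 100.0 * departures / avg_headcount

INDUSTRY_RANGE = (20.0, 23.0)   # rough industry norm cited in the transcript
INTERNAL_TARGET = 15.0          # stricter internal benchmark ("no more than 15%")

rate = attrition_rate(departures=48, avg_headcount=400)
meets_target = rate <= INTERNAL_TARGET
beats_industry = rate < INDUSTRY_RANGE[0]
print(f"attrition={rate:.1f}% target_met={meets_target} beats_industry={beats_industry}")
```

The point of the comparison is the same as in the text: the benchmark is a deliberately chosen target, not simply the industry average.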

A key lesson is that benchmarks must create competitive advantage by being hard to copy or substitute. If competitors can easily match the benchmark, the playing field levels out and the benchmark loses value. The transcript illustrates this with early ATM deployment: once the technology spread, other banks could compete using similar systems, eliminating the initial edge. The same logic applies to knowledge management: benchmarks should target critical, valuable activities where competitors struggle to imitate, and where internal systems can’t simply be replicated into parity.

The process of benchmarking is described as five stages. First, decide what to benchmark—process, product, or services—and define the scope and rationale (including whether the target is rare and difficult to substitute). Second, assemble the team, allocate resources, and include the right customers (internal employees for HR, external customers for service delivery), while securing top management support and budget. Third, set the benchmark targets and partners, factoring in time, budget, and information availability. Fourth, collect and analyze data—such as competitor pricing for logistics or supply-chain services—then assess whether performance is ahead or behind. Finally, use feedback to judge whether the benchmark effort worked; if not, revisit targets and partners and repeat.

Beyond benchmarking, the transcript outlines two additional measurement approaches for knowledge management effectiveness. Quality Function Deployment (linked to the “House of Quality”) maps desired outcomes—knowledge creation, knowledge sharing, faster problem solving, cost and quality improvements, and customer satisfaction—against performance results, using ranked priorities and correlation between goals and outcomes. Balanced Scorecard, developed by Kaplan and Norton, adds four perspectives—financial, customer, internal business processes, and learning/growth—then translates knowledge management vision into measurable objectives, targets, and initiatives. The transcript also references Skandia’s intellectual capital “Navigator,” which tracks human, structural/organizational, customer, and organizational capital, emphasizing whether intellectual capital is appreciating and adding value.

The closing recommendations stress practical discipline: establish a baseline, use both qualitative and quantitative measures, avoid too many or uncontrollable metrics, reward participants appropriately, and keep claims conservative while setting achievable goals. The overall message is that knowledge management success should be measured with standards that drive real outcomes—cost, quality, customer value, and intellectual capital—not vague activity counts.

Cornell Notes

Benchmarking converts knowledge management goals into measurable standards and tests whether performance beats targets rather than merely matching averages. It can be internal (comparing units), industry-based (using norms like IT attrition rates), or competitor-based (often requiring third-party validation due to legal constraints). The transcript emphasizes choosing benchmarks that are critical and hard to copy, since easily imitated benchmarks erase competitive advantage. For measuring knowledge management effectiveness beyond benchmarking, it outlines Quality Function Deployment (“House of Quality”) to link prioritized outcomes (knowledge creation, sharing, faster problem solving, cost/quality, customer satisfaction) to performance results, and Balanced Scorecard to track financial, customer, internal process, and learning/growth perspectives. Both approaches aim to connect knowledge initiatives to measurable business outcomes and intellectual capital value.

How is benchmarking different from simply comparing to an average performance level?

Benchmarking is framed as a standardized comparison against a chosen target tied to goals and objectives. The transcript distinguishes average performance from benchmarking: benchmarking can mean competing against the average, but it can also mean moving to a higher level where standards are set above normal industry or internal levels. The point is to use benchmarks as a target for improvement, not just a reference point.

Why does the transcript warn that benchmarks must be difficult to copy or substitute?

If competitors can match the benchmark, the advantage disappears and the market becomes a “level playing field.” The ATM example shows how an early, rare capability created advantage until technology spread and other banks adopted similar systems. The same logic applies to knowledge management: benchmarks should focus on critical activities where rivals can’t easily imitate the underlying knowledge assets or processes.

What are the five stages of the benchmarking process described?

Stage 1: decide what to benchmark (process, product, or services) and define scope and rationale. Stage 2: determine who is involved—assemble a team, allocate resources, include relevant customers for feedback, and secure top management commitment and budget. Stage 3: set benchmark targets and partners, ensuring information availability within time and budget constraints. Stage 4: collect and analyze data, compare performance against competitors or targets, and assess whether the organization is ahead or behind. Stage 5: gather feedback and judge success; if unsuccessful, revisit targets/partners and repeat.

How does Quality Function Deployment (“House of Quality”) measure knowledge management effectiveness?

It starts by listing desired outcomes on one side—knowledge creation, knowledge sharing conversations, faster problem solving, improved quality and reduced costs, and customer satisfaction. Outcomes are prioritized by ranking which matters most. Performance results are then compared to targets, and the correlation between desired outcomes and achieved results indicates whether the knowledge management system is producing the intended effects (positive correlation is treated as a good sign; negative correlation as a warning).

What does Balanced Scorecard add to knowledge management measurement?

It uses four perspectives—financial, customer, internal business processes, and learning/growth—to translate knowledge management vision into objectives, metrics, targets, and initiatives. The transcript stresses that these perspectives are interrelated: learning and growth supports internal process improvements, which then drive customer outcomes and ultimately financial results. It also frames knowledge management measurement in terms of intellectual capital and market value, not only book value.

Review Questions

  1. What criteria should guide the choice of a benchmarking target so it creates competitive advantage rather than parity?
  2. In the five-stage benchmarking cycle, what triggers repeating earlier stages, and what changes when repeating?
  3. How do Quality Function Deployment and Balanced Scorecard differ in what they measure and how they connect knowledge initiatives to business outcomes?

Key Points

  1. Benchmarking should be tied to specific goals and objectives; comparing to averages is not the same as benchmarking for improvement.

  2. Choose benchmarks that are critical to the organization and hard for competitors to copy or substitute, otherwise the advantage fades.

  3. Benchmarking can be internal, industry-based, or competitor-based; competitor benchmarking may require third-party consultants to handle legal and practical issues.

  4. The benchmarking workflow follows five stages: select what to benchmark, form the team and secure support, set targets/partners, collect and analyze data, then use feedback to decide whether to repeat.

  5. Quality Function Deployment (“House of Quality”) measures knowledge management by linking prioritized outcomes (knowledge creation/sharing, problem-solving speed, cost/quality, customer satisfaction) to achieved performance.

  6. Balanced Scorecard measures knowledge management through four linked perspectives—financial, customer, internal processes, and learning/growth—so knowledge initiatives map to measurable outcomes.

  7. Knowledge management assessment should use a baseline, a limited set of controllable metrics, conservative claims, and appropriate rewards for participants.

Highlights

Benchmarking only creates real leverage when the standard is valuable and difficult to imitate; otherwise competitors catch up and the playing field levels.
The five-stage benchmarking cycle ends with feedback that can force a reset of targets and partners if results don’t match expectations.
Quality Function Deployment ranks knowledge outcomes (creation, sharing, faster problem solving, cost/quality, customer satisfaction) and checks whether achieved results correlate with desired outcomes.
Balanced Scorecard translates knowledge management strategy into measurable objectives across financial, customer, process, and learning perspectives, emphasizing the link to intellectual capital and market value.

Topics

  • Benchmarking Targets
  • Knowledge Management Metrics
  • Quality Function Deployment
  • Balanced Scorecard
  • Intellectual Capital

Mentioned

  • Netscape
  • Buckman Labs
  • Platinum Technology
  • Microsoft
  • 3M
  • Apple
  • Disney
  • Toyota
  • WalMart
  • McDonald
  • Motorola
  • General Electric
  • Airtran
  • Southwest Airlines
  • Apollo
  • Indigo
  • KPMG
  • Skandia
  • Kaplan
  • Norton
  • Hauser
  • Clausing
  • Robert
  • Coplanar
  • KM
  • IT
  • ATM
  • FASB
  • GAAP