Karissa Van Baulen - The Importance of Using Analytics and Feedback for your Documentation

Write the Docs · 6 min read

Based on Write the Docs's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start dashboard design with user-centered questions (likes/dislikes, missing information, layout intuition, and where users drop into support), not with graphs.

Briefing

Documentation and knowledge bases improve fastest when teams treat user analytics and in-page feedback as a single system—using numbers to find where users struggle, then using feedback to learn why. Karissa Van Baulen frames the core problem as a common trap: dashboards built around vanity metrics like page views or even benchmark scores can produce “a number” without actionable insight. The fix is to start with questions about user needs—what users like or dislike, where they drop off, which layouts feel intuitive, and whether content actually prevents support tickets—then design analytics and feedback around those questions.

A key theme is that documentation funnels behave differently from e-commerce. Instead of pushing users toward a purchase, teams aim for “negative conversion”: getting users to the right answer quickly so they don’t contact support. That shift changes how success metrics are interpreted. Van Baulen groups the work into four analytics/feedback categories: user feedback (to capture reasons), drop-off points (to locate where users fail), session visualization (to observe behavior), and self-service score (to measure whether users find answers without tickets).

For user feedback, she distinguishes between surveys and always-on feedback widgets. Surveys typically appear after a user indicates an article wasn’t helpful (e.g., clicking “no”), then ask what’s missing and can route users into follow-up via email. Feedback widgets live on the page and let users rate with emojis or stars and optionally leave comments. The practical payoff is catching small issues that reviews miss—like spacing problems in headings and paragraphs—because incoming feedback acts like an extra layer of editing.

To turn feedback into dashboard-ready metrics, she recommends an “approval rating” calculated as positive responses divided by total responses (positive and negative can come from widget ratings, surveys, or ticket-like “did you like this page?” buttons). Color thresholds help teams prioritize: high approval can be deprioritized, while low approval flags pages needing review. She pairs this with page view prioritization so the most urgent work sits at the intersection of high traffic and low approval.

Drop-off points extend the same logic along the support journey. Van Baulen describes a four-step funnel over a seven-day window: landing on a documentation page, reaching the contact form, filling out a ticket, and submitting it. When drop-off spikes, teams should investigate whether users are failing to get answers—and they should use feedback tools at those moments to learn the cause. She also introduces deflection rate (equated to drop-off rate in her setup) as a way to quantify how effectively content prevents tickets, which can be used to justify documentation investment to stakeholders.

Session visualization—via recordings and heat maps—adds the “watch and learn” layer. Instead of relying solely on analytics, teams can run weekly review sessions where they filter recordings to specific hypotheses, observe reading and interaction patterns, and generate a concrete backlog of improvements.

Finally, she addresses self-service score and why benchmarking alone can mislead. Self-service score is calculated as knowledge base sessions divided by tickets submitted, and it should be framed as speeding up the user experience, not reducing support. When benchmarks don’t clarify what to change, she recommends cohorting by source (e.g., overall vs. Google traffic vs. internal tool traffic) and by page or time-on-page to pinpoint where experiments should start. In Q&A, she adds practical guidance on handling negative feedback, filtering out cases driven by product expectations rather than documentation, and how lone writers can begin with a smaller set of tools and iterate.

Cornell Notes

Karissa Van Baulen argues that documentation teams should combine user analytics with in-page feedback to get both “where” and “why.” Approval rating turns positive/negative feedback into a dashboard metric (positive responses ÷ total responses), helping prioritize pages that are both heavily viewed and poorly received. Drop-off points and deflection rate identify where users abandon the path to contacting support, guiding where to deploy feedback and what to fix. Session recordings and heat maps let teams observe real reading and interaction patterns, feeding a weekly improvement loop. Self-service score measures whether users find answers without tickets, but benchmarking becomes useful only when teams cohort by source (e.g., Google vs. internal tool) and by specific sections/pages.

Why does “benchmarking” alone often fail to produce actionable documentation improvements?

Van Baulen says benchmark scores like self-service score can become “just a number” if teams don’t learn the underlying reasons. A high or low benchmark doesn’t automatically indicate what to change in layout, content, or navigation. The remedy is to pair benchmarks with user feedback (to capture reasons) and with behavioral analytics like drop-off points and session recordings (to locate where users struggle).

How is approval rating calculated, and how does it help prioritize work?

Approval rating is computed as total positive responses on a page divided by total responses (positive ÷ total). Positive/negative responses can come from feedback widgets, surveys, or “did you like this page?” buttons. Van Baulen color-codes thresholds (e.g., green for strong approval; yellow/red for pages needing attention) and then uses page views to decide which pages to tackle first—especially when high traffic overlaps with low approval.
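A minimal sketch of that arithmetic, with invented data: the sample pages, the response counts, and the 0.85/0.70 color cutoffs are illustrative assumptions, not figures from the talk.

```python
# Approval rating per page, color-banded, then ranked so that low-approval,
# high-traffic pages surface first. All numbers here are invented.

pages = [
    # (page, positive responses, total responses, page views)
    ("getting-started", 120, 150, 9_800),
    ("api-authentication", 40, 110, 12_400),
    ("billing-faq", 18, 20, 300),
]

def approval_rating(positive, total):
    """Positive responses divided by total responses."""
    return positive / total if total else 0.0

def triage_color(rating):
    """Map a rating onto color bands; the cutoffs are illustrative."""
    if rating >= 0.85:
        return "green"
    if rating >= 0.70:
        return "yellow"
    return "red"

# Sort by lowest approval first, breaking ties toward higher traffic.
ranked = sorted(pages, key=lambda p: (approval_rating(p[1], p[2]), -p[3]))
for name, positive, total, views in ranked:
    rating = approval_rating(positive, total)
    print(f"{name}: {rating:.0%} approval ({triage_color(rating)}), {views:,} views")
```

With these sample numbers, api-authentication lands at the top of the queue: it is both heavily viewed and poorly rated, which is exactly the intersection the talk says to work on first.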

What makes documentation funnels different from e-commerce funnels, and how does that change metrics?

Documentation funnels aim for “negative conversion”: users should reach the right answer and avoid contacting support. Instead of maximizing conversion to a purchase, teams track movement along the support path (contact form → ticket → submission). More users continuing through to a submitted ticket is treated as a problem, and deflection rate quantifies how effectively content prevents tickets.
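A minimal sketch of the four-step funnel with per-step continuation and drop-off; the session counts are invented, and the step names paraphrase the funnel described above rather than reproduce the talk's exact labels.

```python
# Four-step documentation-to-support funnel over a seven-day window.
# Counts are invented for demonstration.

funnel = [
    ("viewed documentation page", 10_000),
    ("reached contact form", 900),
    ("filled out ticket", 420),
    ("submitted ticket", 380),
]

for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    continued = count / prev_count
    print(f"{prev_name} -> {name}: "
          f"{continued:.1%} continue, {1 - continued:.1%} drop off")
```

Read through this lens, drop-off early in the funnel may be the desired outcome (the page answered the question), while drop-off late in the funnel can signal friction; the talk's advice is to place feedback tools at those points to learn which it is.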

What is deflection rate in this framework, and why does it matter to stakeholders?

Deflection rate is tied to drop-off rate in Van Baulen’s setup and is calculated using page views of the target page (after the drop-off) divided by page views of the referring page (before the drop-off), then expressed as a conversion-like percentage. It matters because it translates documentation performance into support workload reduction. She uses it to estimate monthly savings (e.g., fewer tickets) to secure buy-in and funding for improvements.
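A rough sketch of that arithmetic under stated assumptions: the answer above describes the ratio of target-page views to referring-page views, and this sketch treats the complement of that ratio as the deflection (drop-off) rate. The page-view counts, the per-ticket cost, and the savings framing are invented for illustration.

```python
# Deflection rate from two adjacent funnel steps, plus a rough
# stakeholder-facing savings estimate. All values are invented.

referring_page_views = 10_000   # docs page before the drop-off point
target_page_views = 900         # next step in the funnel, e.g. the contact form

continuation = target_page_views / referring_page_views   # the conversion-like percentage
deflection_rate = 1 - continuation                         # sessions that never continued into support

# Deliberately generous assumption: every deflected session is a ticket avoided,
# priced at an assumed average handling cost.
cost_per_ticket = 15.00
tickets_avoided = referring_page_views - target_page_views

print(f"Deflection rate: {deflection_rate:.1%}")
print(f"Estimated monthly savings: ${tickets_avoided * cost_per_ticket:,.0f}")
```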

How do session recordings and heat maps complement analytics?

Recordings and heat maps show actual user behavior—where users read, how their mouse movement tracks reading, and which elements they interact with. Van Baulen recommends a practical “watch and learn” routine: weekly sessions where the team filters recordings to two topics, observes sessions naturally (without scripted tasks), and produces a backlog of specific page changes.

How can self-service score become more actionable than a single benchmark number?

Van Baulen recommends cohorting. Instead of only using an overall self-service score, teams compare self-service for different sources (e.g., Google traffic vs. knowledge base-wide vs. internal tool traffic). She also suggests drilling down by page/section and even by user behavior like time on an article. This turns “we’re above/below industry” into “we should run experiments for this source or section.”
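A minimal sketch of cohorting the self-service score (knowledge base sessions divided by submitted tickets); the cohort names follow the examples above, but the session and ticket counts are invented.

```python
# Self-service score computed overall and per traffic-source cohort.
# Counts are invented for demonstration.

cohorts = {
    "overall":       {"kb_sessions": 50_000, "tickets": 1_250},
    "google":        {"kb_sessions": 30_000, "tickets": 500},
    "internal tool": {"kb_sessions": 8_000,  "tickets": 400},
}

def self_service_score(kb_sessions, tickets):
    """Knowledge base sessions divided by submitted tickets."""
    return kb_sessions / tickets if tickets else float("inf")

for name, counts in cohorts.items():
    score = self_service_score(counts["kb_sessions"], counts["tickets"])
    print(f"{name}: {score:.1f} sessions per ticket")
```

In this invented data, the internal-tool cohort lags the overall score, which is the kind of signal that turns “we’re above/below industry” into “start layout, content, or navigation experiments here.”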

Review Questions

  1. If approval rating is low on a low-traffic page, what additional metric would you check to decide whether it should be prioritized?
  2. Describe the four-step documentation funnel Van Baulen uses and explain what “drop-off” means in this context.
  3. Why might two teams with the same self-service score still need very different documentation changes?

Key Points

  1. Start dashboard design with user-centered questions (likes/dislikes, missing information, layout intuition, and where users drop into support), not with graphs.

  2. Use in-page feedback (surveys triggered by “not helpful” and always-on widgets) to capture “why,” then convert responses into an approval rating metric.

  3. Prioritize pages by combining approval rating thresholds with page view volume so the most urgent fixes align with both impact and dissatisfaction.

  4. Treat documentation funnels as “negative conversion” and use drop-off points and deflection rate to measure how well content prevents tickets.

  5. Add session visualization (recordings/heat maps) to observe reading and interaction patterns, then run a recurring watch-and-learn session to generate concrete edits.

  6. Make self-service score actionable by cohorting by traffic source (e.g., Google vs. overall) and by specific pages/sections rather than relying on a single benchmark number.

  7. Filter negative feedback carefully to separate documentation problems from product expectations, using tools and workflows that allow excluding irrelevant causes.

Highlights

  • Approval rating turns scattered feedback into a clear metric: positive responses divided by total responses, then color-coded to drive triage.
  • Drop-off points in a documentation funnel are measured against the path to support (contact form → ticket → submission), aligning analytics with the goal of preventing tickets.
  • Weekly “watch and learn” sessions use recordings to turn hypotheses into a backlog of page-level improvements without heavy research overhead.
  • Self-service score should be framed as speeding up user success—not reducing support—and becomes useful when broken into cohorts like Google-origin sessions.

Topics

  • User Feedback
  • Approval Rating
  • Drop-Off Points
  • Deflection Rate
  • Self-Service Score
  • Session Recordings
  • Documentation Analytics

Mentioned

  • Karissa Van Baulen