Karissa Van Baulen - The Importance of Using Analytics and Feedback for your Documentation
Based on Write the Docs' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Briefing
Documentation and knowledge bases improve fastest when teams treat user analytics and in-page feedback as a single system—using numbers to find where users struggle, then using feedback to learn why. Karissa Van Baulen frames the core problem as a common trap: dashboards built around vanity metrics like page views or even benchmark scores can produce “a number” without actionable insight. The fix is to start with questions about user needs—what users like or dislike, where they drop off, which layouts feel intuitive, and whether content actually prevents support tickets—then design analytics and feedback around those questions.
A key theme is that documentation funnels behave differently from e-commerce. Instead of pushing users toward a purchase, teams aim for “negative conversion”: getting users to the right answer quickly so they don’t contact support. That shift changes how success metrics are interpreted. Van Baulen groups the work into four analytics/feedback categories: user feedback (to capture reasons), drop-off points (to locate where users fail), session visualization (to observe behavior), and self-service score (to measure whether users find answers without tickets).
For user feedback, she distinguishes between surveys and always-on feedback widgets. Surveys typically appear after a user indicates an article wasn’t helpful (e.g., clicking “no”), then ask what’s missing and can route users into follow-up via email. Feedback widgets live on the page and let users rate with emojis or stars and optionally leave comments. The practical payoff is catching small issues that reviews miss—like spacing problems in headings and paragraphs—because incoming feedback acts like an extra layer of editing.
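Before feedback can feed a dashboard, the two streams need to be merged into a single positive/negative signal. A minimal sketch of that normalization step follows (not from the talk; the field names, five-star scale, and the four-star positivity threshold are all illustrative assumptions):

```python
# Minimal sketch: merge always-on widget ratings and post-"no" survey
# answers into one positive/negative stream. Field names, the 5-star
# scale, and the >= 4 threshold are assumptions for illustration.

def widget_is_positive(stars: int, max_stars: int = 5) -> bool:
    """Treat the top two ratings (4-5 stars) as positive; the cutoff is an assumption."""
    return stars >= max_stars - 1

feedback = [
    {"source": "widget", "stars": 5},
    {"source": "widget", "stars": 2, "comment": "heading spacing looks off"},
    {"source": "survey", "helpful": False, "missing": "setup steps"},
]

signals = [
    widget_is_positive(f["stars"]) if f["source"] == "widget" else f["helpful"]
    for f in feedback
]
print(signals)  # [True, False, False]
```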
To turn feedback into dashboard-ready metrics, she recommends an “approval rating” calculated as positive responses divided by total responses (positive and negative can come from widget ratings, surveys, or ticket-like “did you like this page?” buttons). Color thresholds help teams prioritize: high approval can be deprioritized, while low approval flags pages needing review. She pairs this with page view prioritization so the most urgent work sits at the intersection of high traffic and low approval.
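As a minimal sketch of that calculation and prioritization (the color thresholds, page data, and the views-times-dissatisfaction sort key are illustrative assumptions, not values from the talk):

```python
# Approval rating = positive responses / total responses, paired with
# page views so high-traffic, low-approval pages surface first.

def approval_rating(positive: int, negative: int) -> float | None:
    total = positive + negative
    return positive / total if total else None  # no feedback yet

def status(rating: float) -> str:
    # Threshold values are assumptions; tune them to your own baseline.
    if rating >= 0.80:
        return "green"   # can be deprioritized
    if rating >= 0.50:
        return "yellow"  # worth a look
    return "red"         # flag for review

pages = [
    {"url": "/docs/install", "views": 12_000, "pos": 40, "neg": 60},
    {"url": "/docs/faq",     "views": 300,    "pos": 5,  "neg": 15},
    {"url": "/docs/api",     "views": 9_000,  "pos": 90, "neg": 10},
]

for p in pages:
    p["approval"] = approval_rating(p["pos"], p["neg"])

# Urgent work sits at the intersection of high traffic and low approval.
pages.sort(key=lambda p: (1 - p["approval"]) * p["views"], reverse=True)
for p in pages:
    print(p["url"], f"{p['approval']:.0%}", status(p["approval"]), p["views"])
```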
Drop-off points extend the same logic along the support journey. Van Baulen describes a four-step funnel over a seven-day window: landing on a documentation page, reaching the contact form, filling out a ticket, and submitting it. When drop-off spikes, teams should investigate whether users are failing to get answers—and they should use feedback tools at those moments to learn the cause. She also introduces deflection rate (equated to drop-off rate in her setup) as a way to quantify how effectively content prevents tickets, which can be used to justify documentation investment to stakeholders.
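A rough sketch of the four-step funnel math, assuming the deflection rate is read as the share of documentation visitors in the window who never submit a ticket (the counts below are made up for illustration):

```python
# Four-step funnel over a seven-day window, per the talk; the counts
# are illustrative assumptions.
funnel = [
    ("landed on documentation page", 10_000),
    ("reached contact form",          1_200),
    ("filled out ticket",               700),
    ("submitted ticket",                650),
]

# Drop-off between each consecutive step.
for (step, count), (_, nxt) in zip(funnel, funnel[1:]):
    print(f"{step} -> next step: {1 - nxt / count:.1%} drop-off")

# Deflection rate (equated to drop-off rate in this setup): share of
# doc visitors who never submit a ticket.
deflection = 1 - funnel[-1][1] / funnel[0][1]
print(f"deflection rate: {deflection:.1%}")
```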
Session visualization—via recordings and heat maps—adds the “watch and learn” layer. Instead of relying solely on analytics, teams can run weekly review sessions where they filter recordings to specific hypotheses, observe reading and interaction patterns, and generate a concrete backlog of improvements.
Finally, she addresses self-service score and why benchmarking alone can mislead. Self-service score is calculated as knowledge base sessions divided by tickets submitted, and it should be framed as speeding up the user experience, not reducing support. When benchmarks don’t clarify what to change, she recommends cohorting by source (e.g., overall vs. Google traffic vs. internal tool traffic) and by page or time-on-page to pinpoint where experiments should start. In Q&A, she adds practical guidance on handling negative feedback, filtering out cases driven by product expectations rather than documentation, and how lone writers can begin with a smaller set of tools and iterate.
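A minimal sketch of the self-service score with source cohorting, under the stated definition (knowledge base sessions divided by tickets submitted); the cohort labels and numbers are illustrative assumptions:

```python
# Self-service score per cohort: sessions per submitted ticket, so a
# higher score means more users are finding answers on their own.

def self_service_score(kb_sessions: int, tickets: int) -> float:
    return kb_sessions / tickets if tickets else float("inf")

cohorts = {
    "overall":       {"sessions": 50_000, "tickets": 2_000},
    "google":        {"sessions": 30_000, "tickets":   800},
    "internal_tool": {"sessions":  5_000, "tickets":   600},
}

for name, c in cohorts.items():
    print(f"{name}: {self_service_score(c['sessions'], c['tickets']):.1f} sessions per ticket")

# A cohort scoring well below "overall" marks where experiments should start.
```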
Cornell Notes
Karissa Van Baulen argues that documentation teams should combine user analytics with in-page feedback to get both “where” and “why.” Approval rating turns positive/negative feedback into a dashboard metric (positive responses ÷ total responses), helping prioritize pages that are both heavily viewed and poorly received. Drop-off points and deflection rate identify where users abandon the path to contacting support, guiding where to deploy feedback and what to fix. Session recordings and heat maps let teams observe real reading and interaction patterns, feeding a weekly improvement loop. Self-service score measures whether users find answers without tickets, but benchmarking becomes useful only when teams cohort by source (e.g., Google vs. internal tool) and by specific sections/pages.
- Why does “benchmarking” alone often fail to produce actionable documentation improvements?
- How is approval rating calculated, and how does it help prioritize work?
- What makes documentation funnels different from e-commerce funnels, and how does that change metrics?
- What is deflection rate in this framework, and why does it matter to stakeholders?
- How do session recordings and heat maps complement analytics?
- How can self-service score become more actionable than a single benchmark number?
Review Questions
- If approval rating is low on a low-traffic page, what additional metric would you check to decide whether it should be prioritized?
- Describe the four-step documentation funnel Van Baulen uses and explain what “drop-off” means in this context.
- Why might two teams with the same self-service score still need very different documentation changes?
Key Points
1. Start dashboard design with user-centered questions (likes/dislikes, missing information, layout intuition, and where users drop into support), not with graphs.
2. Use in-page feedback (surveys triggered by “not helpful” and always-on widgets) to capture “why,” then convert responses into an approval rating metric.
3. Prioritize pages by combining approval rating thresholds with page view volume so the most urgent fixes align with both impact and dissatisfaction.
4. Treat documentation funnels as “negative conversion” and use drop-off points and deflection rate to measure how well content prevents tickets.
5. Add session visualization (recordings/heat maps) to observe reading and interaction patterns, then run a recurring watch-and-learn session to generate concrete edits.
6. Make self-service score actionable by cohorting by traffic source (e.g., Google vs. overall) and by specific pages/sections rather than relying on a single benchmark number.
7. Filter negative feedback carefully to separate documentation problems from product expectations, using tools and workflows that allow excluding irrelevant causes.