Rachel Rigdon - Quest for the Holy Grail: Turning User Feedback into Meaningful Change

Write the Docs
5 min read

Based on Write the Docs' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.

TL;DR

User feedback only creates meaningful change when it’s paired with evaluation, clear ownership, and a closed-loop process—not just a place to comment.

Briefing

User feedback becomes genuinely valuable only when it’s tied to a system for evaluation, ownership, and follow-through. SailPoint’s documentation team spent years failing at that link, then rebuilt it around a community platform and a disciplined workflow. The payoff was measurable: over roughly 11 months, 7,500 documentation pages generated nearly 700 comments and 208 Jira tickets, and many issues escalated beyond documentation into product and organizational fixes. The central lesson is that collecting feedback is the easy part; turning it into meaningful change requires maturity, subject-matter expertise, and a reliable process that acknowledges users, takes action, and closes the loop.

SailPoint’s early attempts illustrate why feedback programs often stall. Over about 6.5 years, the team tried community commenting on a platform, integrations with ServiceNow and customer-facing teams, incentive programs for support, and lower-tech options like forms, surveys, interviews, and a shared email inbox. Most approaches produced generic, unactionable input (“docs are confusing” without specifics), created ownership confusion when comments couldn’t be categorized, and suffered from weak notifications and triage, leaving users waiting. Cross-team partnerships also proved difficult because the teams’ goals were not always aligned.

The successful program launched after the team concluded it needed organizational readiness rather than just tools. SailPoint emphasized four pillars: (1) evaluation (including the nuance of whether a comment is about documentation, the product, or even third-party integrations), (2) acknowledgement, (3) action, and (4) closing the loop. That evaluation work turned out to be more complex than expected: feedback often arrived as questions about accuracy, validity, or how features worked, and writers needed enough subject-matter expertise to route and respond correctly.

The program’s mechanics were designed to prevent the earlier failure modes. Each published documentation page had a corresponding Discourse topic containing an excerpt of the content, so comments stayed tightly coupled to the exact text being discussed. Topics were automatically categorized and tagged by feature type to route notifications to the correct doc team and writer. When a user commented, targeted notifications alerted the right owner, who then created a Jira ticket directly from Discourse when the request met a high bar for action. If the issue wasn’t documentation, comments were redirected to the appropriate community category so other users, especially “ambassadors” from SailPoint’s developer relations community, could help.
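
To make the pairing concrete, here is a minimal Python sketch of creating one Discourse topic per published doc page through Discourse’s REST API. The base URL, credentials, category, and tag values are hypothetical stand-ins; the talk describes the behavior, not SailPoint’s actual implementation.

    import requests

    DISCOURSE_URL = "https://community.example.com"  # hypothetical forum base URL
    HEADERS = {
        "Api-Key": "...",          # Discourse admin API key (placeholder)
        "Api-Username": "system",  # user the topics are posted as
    }

    def create_feedback_topic(page_title: str, page_url: str, excerpt: str,
                              category_id: int, feature_tags: list[str]) -> int:
        """Create a Discourse topic carrying an excerpt of a doc page, so
        comments stay anchored to the exact text being discussed."""
        resp = requests.post(
            f"{DISCOURSE_URL}/posts.json",
            headers=HEADERS,
            json={
                "title": page_title,
                # The excerpt plus a backlink keeps feedback context-specific.
                "raw": f"{excerpt}\n\nFull page: {page_url}",
                "category": category_id,  # routes to the owning doc category
                "tags": feature_tags,     # e.g. ["search"] or ["configuration"]
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["topic_id"]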

The results went beyond engagement metrics. About 19% of comments came from ambassadors, while another 19% came from users whose first community activity was posting feedback on docs, evidence that documentation can drive community participation. The team also reported compliance and product value: some feedback led to changes that reduced reliance on “workarounds,” and other comments surfaced product or organizational gaps that writers escalated to PMs and engineering. An internal evaluation after about eight months found that evaluation is an “art” requiring time and support, and that closing the loop is where documentation’s broader impact becomes visible.

In Q&A, SailPoint described triage as an evaluation-driven process: writers assess whether the issue belongs in docs, the product, or another category, then pull in PMs and developers only when needed. They also stressed that community-first routing helps avoid turning writers into de facto support. The overarching guidance: prioritize maturity, build partnerships with shared goals, design for evaluation and routing, and keep the loop tight so users see that their feedback leads to real outcomes.

Cornell Notes

SailPoint’s documentation team learned that user feedback only drives meaningful change when it’s paired with a workflow for evaluation, ownership, and follow-through. After years of collecting feedback through multiple channels that produced generic comments and unclear triage, the team launched a Discourse-based program tied directly to documentation pages. Each doc page generated a Discourse topic with an excerpt, and automated categorization/tagging routed notifications to the right writers. Writers evaluated comments (often nuanced questions about accuracy, validity, or product vs. integration issues), created Jira tickets when warranted, and redirected non-doc issues to the community first. The program produced hundreds of comments and over 200 tickets, and it also surfaced product and organizational gaps, showing documentation’s value beyond writing.

Why did SailPoint’s earlier feedback efforts fail to produce actionable outcomes?

Most methods generated generic, unactionable feedback (e.g., “docs are confusing” without specifying what’s confusing or what to do next). Ownership was also unclear when comments couldn’t be tagged or categorized, making it hard to route issues to the right writer with the right subject-matter expertise. Several implementations suffered from poor notifications and triage, so comments could sit for too long. Cross-team integrations with customer-facing teams were difficult because goals weren’t always aligned, and both sides had competing priorities.

What changed when SailPoint launched the successful program, and what were the “four pillars”?

The team concluded it needed organizational readiness and maturity, not just a new channel. The workflow was built around four pillars: evaluation (including deciding whether feedback is about docs, the product, or third-party integrations), acknowledgement (making it easy to tell users they were heard), action (taking the appropriate intervention such as updating docs or redirecting), and closing the loop (communicating outcomes). Evaluation was treated as a supported, time-consuming craft rather than a quick step.
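
As a rough mental model (not anything shown in the talk), the four pillars read as sequential stages of a small workflow, sketched here in Python:

    from enum import Enum, auto

    class FeedbackStage(Enum):
        """The four pillars as sequential stages; an illustrative sketch only."""
        EVALUATION = auto()        # docs issue, product issue, or third-party integration?
        ACKNOWLEDGEMENT = auto()   # tell the user they were heard
        ACTION = auto()            # update docs, open a ticket, or redirect
        CLOSING_THE_LOOP = auto()  # communicate the outcome back to the user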

How did the Discourse setup keep feedback tightly connected to specific documentation content?

For each published doc page, SailPoint created a corresponding Discourse topic containing an excerpt of the content. Users could access the discussion only through a doc link or direct URL, which prevented flooding the community with unrelated topics and reduced vague feedback. Comments appeared right where the content existed, keeping the feedback context-specific.

How did SailPoint route feedback to the right team and avoid the “everybody’s looking, nobody’s looking” problem?

Discourse topics were automatically categorized and tagged by feature type. That tagging determined which doc team and writer received notifications—for example, different writers owned different feature areas (such as search vs. configuration hubs). This automation reduced manual triage and ensured the right subject-matter expertise handled each comment.
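
In code terms, this routing amounts to a lookup from feature tags to owners. A minimal Python sketch, assuming a hypothetical mapping (the real one lives in SailPoint’s Discourse and notification configuration and is not shown in the talk):

    # Hypothetical tag -> owner map; the talk names feature areas only as
    # examples (e.g., search vs. configuration hubs), so these are stand-ins.
    FEATURE_OWNERS = {
        "search": "writer-a@example.com",
        "configuration": "writer-b@example.com",
    }
    DEFAULT_OWNER = "docs-team@example.com"  # fallback so nothing sits unwatched

    def owner_for(tags: list[str]) -> str:
        """Resolve the single writer to notify for a new comment, avoiding
        the "everybody's looking, nobody's looking" failure mode."""
        for tag in tags:
            if tag in FEATURE_OWNERS:
                return FEATURE_OWNERS[tag]
        return DEFAULT_OWNER

The fallback matters: a default recipient keeps uncategorizable comments from recreating the ownership gap that stalled the earlier attempts.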

What did SailPoint do when feedback wasn’t actually about documentation?

Writers evaluated whether the comment belonged in docs. If it didn’t, they redirected users to the appropriate community category so other users—especially ambassadors—could help. Writers aimed to avoid becoming a proxy for support. If community help didn’t resolve the issue, the team could then determine whether a support ticket was needed.

What did the program’s metrics and outcomes suggest about documentation’s broader impact?

Over about 11 months, 7,500 doc pages produced nearly 700 comments and resulted in 208 Jira tickets (not every comment became a ticket, because the evaluation bar was high). The team also saw that ambassadors contributed about 19% of comments, while another 19% came from users whose first community engagement was triggered by a doc page, suggesting docs can drive engagement into the community. Some feedback led to product or organizational changes, including escalations to PMs and engineering.
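
For scale, those figures imply roughly one comment per eleven pages and about a 30% comment-to-ticket conversion. A quick arithmetic check (treating “nearly 700” as 700):

    pages, comments, tickets = 7_500, 700, 208  # figures quoted above

    print(f"comments per page: {comments / pages:.3f}")         # ~0.093 (about 1 per 11 pages)
    print(f"comment-to-ticket rate: {tickets / comments:.0%}")  # ~30%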

Review Questions

  1. What specific failure modes (e.g., ownership, notifications, feedback quality) did SailPoint identify in earlier feedback collection attempts, and how did the Discourse workflow address them?
  2. Describe the evaluation step in SailPoint’s process. What makes evaluation “an art,” and why does subject-matter expertise matter?
  3. How did SailPoint prevent writers from turning into support staff while still ensuring users received help and that the loop was closed?

Key Points

  1. User feedback only creates meaningful change when it’s paired with evaluation, clear ownership, and a closed-loop process, not just a place to comment.

  2. Generic feedback and unclear routing were major failure modes in SailPoint’s earlier attempts, driven by weak categorization, notifications, and triage.

  3. SailPoint’s successful workflow centered on four pillars: evaluation, acknowledgement, action, and closing the loop.

  4. Tightly coupling Discourse topics to specific documentation excerpts reduced vague comments and made feedback context-specific.

  5. Automated categorization and tagging ensured targeted notifications to the correct doc team and writer, solving “everybody’s looking, nobody’s looking.”

  6. Redirect non-documentation issues to the community first to avoid turning documentation teams into de facto support; escalate to PMs and developers only when needed.

  7. Program success depended on organizational maturity and subject-matter expertise to handle nuanced feedback, including product and third-party integration gaps.

Highlights

SailPoint’s biggest lesson wasn’t how to collect feedback; it was how to evaluate it well enough to decide whether it belongs in docs, the product, or elsewhere, then act and close the loop.
Each documentation page generated a Discourse topic with an excerpt, keeping comments anchored to the exact text and preventing generic, unactionable feedback.
Automated categorization/tagging routed notifications to the right writer by feature type, eliminating ownership ambiguity.
Across 7,500 doc pages, nearly 700 comments produced 208 Jira tickets, showing that disciplined evaluation can turn discussion into concrete work.
Feedback didn’t stay inside documentation: some comments surfaced product and organizational gaps that were escalated to PMs and engineering for fixes.

Topics

Mentioned

  • Rachel Rigdon
  • Jordan Violet
  • Derek Putnham
  • Henrik
  • Joshua
  • Angelo
  • Graham
  • Yan G
  • Garrett
  • Eric
  • Rebecca
  • Stacy
  • Bab
  • Conway
  • SLA
  • PM
  • devs
  • Jira