Rachel Rigdon - Quest for the Holy Grail: Turning User Feedback into Meaningful Change
Based on Write the Docs's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
User feedback only creates meaningful change when it’s paired with evaluation, clear ownership, and a closed-loop process—not just a place to comment.
Briefing
User feedback becomes genuinely valuable only when it’s tied to a system for evaluation, ownership, and follow-through. Sailpoint’s documentation team spent years failing to make that connection, then rebuilt it around a community platform and a disciplined workflow. The payoff was measurable: over roughly 11 months, commenting across 7,500 documentation pages produced nearly 700 comments and 208 Jira tickets, with many issues escalating beyond documentation into product and organizational fixes. The central lesson is that “collecting feedback” is the easy part; turning it into meaningful change requires maturity, subject-matter expertise, and a reliable loop that acknowledges users, takes action, and closes the loop.
Sailpoint’s early attempts illustrate why feedback programs often stall. Over about 6.5 years, the team tried community commenting on a platform, integrations with ServiceNow and customer-facing teams, incentive programs for support, and lower-tech options like forms, surveys, interviews, and a shared email inbox. Most approaches produced generic, unactionable input (“docs are confusing” without specifics), created ownership confusion when comments couldn’t be categorized, and suffered from weak notifications and triage, leaving users waiting. Cross-team partnerships also proved difficult because the teams’ goals were not always aligned.
The successful program launched after the team concluded it needed organizational readiness rather than just tools. Sailpoint emphasized four pillars: (1) evaluation (including the nuance of whether a comment is about documentation, the product, or even third-party integrations), (2) acknowledgement, (3) action, and (4) closing the loop. That evaluation work turned out to be more complex than expected: feedback often arrived as questions about accuracy, validity, or how features worked, and writers needed enough subject-matter expertise to route and respond correctly.
The program’s mechanics were designed to prevent the earlier failure modes. Each published documentation page had a corresponding Discourse topic containing an excerpt of the content, so comments stayed tightly coupled to the exact text being discussed. Topics were automatically categorized and tagged by feature type to route notifications to the correct doc team and writer. When a user commented, targeted notifications alerted the right owner, who then created a Jira ticket directly from Discourse when the request met a high bar for action. If the issue wasn’t documentation, comments were redirected to the appropriate community category so other users—especially “ambassadors” from Sailpoint’s developer relations community—could help.
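The routing described above can be sketched in code. This is a hypothetical illustration, not SailPoint-published code: the tag names, owner assignments, and the "high bar" flag are all invented for the example, and the real workflow runs inside Discourse's categorization and notification features rather than a standalone script.

```python
# Hypothetical sketch of the feedback-routing workflow: tag-based ownership,
# a Jira ticket only when the request meets the bar for action, and
# community-first redirection for non-documentation issues.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

# Map feature tags (set automatically on each Discourse topic) to the
# doc team or writer who owns that area. Illustrative values.
TAG_OWNERS = {
    "connectors": "writer-a",
    "identity-security": "writer-b",
}

@dataclass
class Comment:
    topic_tags: list = field(default_factory=list)
    is_doc_issue: bool = True      # result of the evaluation step
    actionable: bool = False       # meets the "high bar" for a Jira ticket

def route(comment: Comment) -> str:
    """Return the action taken for a single Discourse comment."""
    if not comment.is_doc_issue:
        # Product or integration questions go to the community first
        # (e.g. ambassadors), so writers don't become de facto support.
        return "redirect-to-community"
    # Targeted notification: find the owning writer via the topic's tags,
    # falling back to a shared triage queue when no tag matches.
    owner = next(
        (TAG_OWNERS[t] for t in comment.topic_tags if t in TAG_OWNERS),
        "docs-triage",
    )
    if comment.actionable:
        return f"jira-ticket:{owner}"
    return f"notify:{owner}"
```

The key design point the sketch tries to capture is that every comment ends in exactly one owned outcome (a ticket, a notification, or a community redirect), which is what prevents the "everybody's looking, nobody's looking" failure mode.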
The results went beyond engagement metrics. About 19% of comments came from ambassadors, while another 19% came from users whose first community activity was posting feedback on docs—evidence that documentation can drive community participation. The team also reported compliance and product value: some feedback led to changes that reduced reliance on “workarounds,” and other comments surfaced product or organizational gaps that writers escalated to PMs and engineering. A retrospective after about eight months found that evaluation is an “art” requiring time and support, and that closing the loop is where documentation’s broader impact becomes visible.
In Q&A, Sailpoint described triage as an evaluation-driven process: writers assess whether the issue belongs in docs, the product, or another category, then pull in PMs and developers only when needed. They also stressed that community-first routing helps avoid turning writers into de facto support. The overarching guidance: prioritize maturity, build partnerships with shared goals, design for evaluation and routing, and keep the loop tight so users see that their feedback leads to real outcomes.
Cornell Notes
Sailpoint’s documentation team learned that user feedback only drives meaningful change when it’s paired with a workflow for evaluation, ownership, and follow-through. After years of collecting feedback through multiple channels that produced generic comments and unclear triage, the team launched a Discourse-based program tied directly to documentation pages. Each doc page generated a Discourse topic with an excerpt, and automated categorization/tagging routed notifications to the right writers. Writers evaluated comments (often nuanced questions about accuracy, validity, or product vs. integration issues), created Jira tickets when warranted, and redirected non-doc issues to the community first. The program produced hundreds of comments and over 200 tickets, and it also surfaced product and organizational gaps—showing documentation’s value beyond writing.
Why did Sailpoint’s earlier feedback efforts fail to produce actionable outcomes?
What changed when Sailpoint launched the successful program, and what were the “four pillars”?
How did the Discourse setup keep feedback tightly connected to specific documentation content?
How did Sailpoint route feedback to the right team and avoid the “everybody’s looking, nobody’s looking” problem?
What did Sailpoint do when feedback wasn’t actually about documentation?
What did the program’s metrics and outcomes suggest about documentation’s broader impact?
Review Questions
- What specific failure modes (e.g., ownership, notifications, feedback quality) did Sailpoint identify in earlier feedback collection attempts, and how did the Discourse workflow address them?
- Describe the evaluation step in Sailpoint’s process. What makes evaluation “an art,” and why does subject-matter expertise matter?
- How did Sailpoint prevent writers from turning into support staff while still ensuring users received help and that the loop was closed?
Key Points
1. User feedback only creates meaningful change when it’s paired with evaluation, clear ownership, and a closed-loop process—not just a place to comment.
2. Generic feedback and unclear routing were major failure modes in Sailpoint’s earlier attempts, driven by weak categorization, notifications, and triage.
3. Sailpoint’s successful workflow centered on four pillars: evaluation, acknowledgement, action, and closing the loop.
4. Tightly coupling Discourse topics to specific documentation excerpts reduced vague comments and made feedback context-specific.
5. Automated categorization and tagging ensured targeted notifications to the correct doc team and writer, solving “everybody’s looking, nobody’s looking.”
6. Redirect non-documentation issues to the community first to avoid turning documentation teams into de facto support; escalate to PMs/devs only when needed.
7. Program success depended on organizational maturity and subject-matter expertise to handle nuanced feedback, including product and third-party integration gaps.