Alex Garnett - Docs AI Tooling is Better (and Better for Us) than You Think
Based on the Write the Docs talk on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Use AI in documentation primarily through retrieval-augmented generation (RAG) so answers are grounded in your own corpus rather than model memory.
Briefing
Docs teams don’t need to choose between “AI everywhere” and “AI is poison.” Alex Garnett argues that the most practical, high-value use of AI in documentation is retrieval-augmented generation (RAG): connecting a model to a company’s own curated corpus so answers come from real sources, with citations and measurable uncertainty. That approach matters because it reduces hallucinations, preserves trust, and turns AI from a writing shortcut into a system for improving how users find and understand information.
Garnett starts by acknowledging the emotional squeeze around AI adoption—boosterism on one side, doomerism on the other—and the way it can wear down writers who are trying to do careful, source-driven work. He agrees AI is useful for small technical tasks (regexes, table conversions, query strings), but warns that many commercial systems still hallucinate, including citing nonexistent plugins. For documentation work, he says the key is protecting the parts of the job people genuinely value—often the intellectual work of context, synthesis, and helping readers connect ideas—while using AI to reduce the reactive, coverage-chasing burden.
From there, he pivots to RAG. Instead of relying on a model’s pretraining alone, RAG points it at a targeted set of materials—company docs, support content, community discussions, courses—so responses reflect the right context. Training at the scale of foundation models is expensive, but the “last step” of retrieval and grounding can be done by smaller vendors, which is why RAG is increasingly sold as an enterprise layer.
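To make the retrieve-then-generate mechanics concrete, here is a minimal, self-contained sketch in Python. The corpus, the scoring function, and the prompt layout are all illustrative assumptions, not a description of any specific vendor's pipeline; a production system would use embedding-based retrieval rather than keyword overlap.

```python
# Minimal RAG sketch: retrieve relevant doc passages, then ground the
# model's prompt in them. Corpus, scoring, and prompt layout are
# illustrative assumptions, not any vendor's implementation.

CORPUS = {
    "docs/install.md": "Install the CLI with pip. Python 3.9+ is required.",
    "docs/tables.md": "Connection limits per plan are listed in the table below.",
    "community/faq.md": "Community answer: the CLI also works inside Docker.",
}

def score(query: str, passage: str) -> int:
    """Toy relevance score: count query terms that appear in the passage."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in passage.lower())

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the top-k (source, passage) pairs for the query."""
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that tells the model to answer only from the
    retrieved sources and to cite them by number."""
    sources = retrieve(query)
    numbered = "\n".join(f"[{i+1}] ({src}) {text}" for i, (src, text) in enumerate(sources))
    return (
        "Answer using ONLY the sources below. Cite source numbers. "
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How do I install the CLI?"))
```

The point of the sketch is the "last step" Garnett describes: the model never answers from memory alone; every response is assembled from material the team controls.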
To make the case concrete, Garnett highlights kapa.ai, which he describes as embedding a “reliable LLM-powered chatbot” inside documentation. Users can ask questions via an “Ask AI” button, and the system answers by assembling sources from the documentation and other connected channels. Garnett emphasizes three operational principles that make this feel trustworthy: (1) the system flags uncertainty instead of pretending to know, (2) answers are citation-driven so users can trace where information came from, and (3) the chatbot is built to integrate multiple corpora.
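A sketch of what those three principles might look like as an answer contract: flag low-confidence retrievals instead of guessing, attach citations, and accept sources from any connected corpus. The field names and threshold are hypothetical, not kapa.ai's actual API.

```python
from dataclasses import dataclass

# Hypothetical answer contract reflecting the three principles above:
# uncertainty flags, citation-driven answers, and multiple corpora.

@dataclass
class Answer:
    text: str
    citations: list[str]   # links back to docs/community/course sources
    uncertain: bool        # surfaced to the user AND logged for analytics

CONFIDENCE_THRESHOLD = 0.55  # illustrative cutoff, tuned per deployment

def answer_or_flag(question: str, retrieval_score: float, sources: list[str]) -> Answer:
    """Refuse to guess when retrieval confidence is low; otherwise answer
    with traceable citations. (The LLM call itself is stubbed out here.)"""
    if retrieval_score < CONFIDENCE_THRESHOLD or not sources:
        return Answer(
            text="I'm not certain the documentation covers this.",
            citations=sources,
            uncertain=True,
        )
    return Answer(
        text=f"Grounded answer to: {question}",  # stand-in for generation
        citations=sources,
        uncertain=False,
    )
```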
He also frames docs AI as a feedback loop for editorial work. kapa.ai’s analytics, such as the share of questions marked “uncertain” and breakdowns of what users ask, help teams identify gaps and create Jira tickets to update documentation. Garnett says this can even reveal accessibility issues in writing style, such as when docs state what works (“you can use X with Y”) but never state what doesn’t (“X is not supported with Y”), forcing readers (and the AI) to guess.
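The feedback loop might be wired up along these lines: aggregate the questions the bot flagged as uncertain, group them by topic, and draft a ticket for any repeated gap. The log shape and ticket payload are assumptions for illustration; a real integration would go through Jira's REST API.

```python
from collections import Counter

# Sketch of turning "uncertain" chatbot logs into editorial work items.
# Log shape and ticket payload are illustrative assumptions.

question_log = [
    {"question": "Can I use X with Y?", "topic": "compatibility", "uncertain": True},
    {"question": "Does X support Y?",   "topic": "compatibility", "uncertain": True},
    {"question": "How do I install?",   "topic": "install",       "uncertain": False},
]

def docs_gap_tickets(log: list[dict], min_hits: int = 2) -> list[dict]:
    """Group uncertain questions by topic and draft a ticket for any
    topic users repeatedly hit without getting a grounded answer."""
    gaps = Counter(entry["topic"] for entry in log if entry["uncertain"])
    return [
        {
            "summary": f"Docs gap: repeated uncertain answers about '{topic}'",
            "description": f"{hits} user questions on '{topic}' were flagged "
                           "uncertain. State explicitly what is and is not "
                           "supported rather than implying it.",
        }
        for topic, hits in gaps.items() if hits >= min_hits
    ]

for ticket in docs_gap_tickets(question_log):
    print(ticket["summary"])
```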
Finally, he argues that good docs AI should not require teams to strip out tables or simplify content for the model’s sake. Instead, the AI should adapt to existing documentation best practices. He closes by stressing that adoption should be grounded in standards: insist on citations, evaluate accuracy, and support vendors whose systems align with professional credibility, because the broader ecosystem still includes mediocre or bad actors, especially around licensing and provenance.
Cornell Notes
Alex Garnett recommends using AI in documentation through retrieval-augmented generation (RAG), where a model is grounded in a company’s own corpus rather than answering from memory. This approach aims to reduce hallucinations and increase trust by pairing responses with valid citations and explicit uncertainty flags. He argues that the most valuable docs work (context, synthesis, and helping readers connect ideas) should be protected, while AI helps with the reactive parts of documentation. Using kapa.ai as an example, he describes how analytics (e.g., questions marked uncertain) can drive concrete editorial updates via Jira tickets. The result is a measurable feedback loop that improves docs accessibility and completeness without forcing teams to abandon their existing formatting and best practices.
Why does Garnett treat “writing the docs with AI” as less important than other uses?
What is retrieval-augmented generation (RAG), and why does it matter for docs?
What makes kapa.ai’s approach feel more reliable than generic chatbots?
How do AI analytics translate into actual documentation improvements?
What accessibility lesson does Garnett draw from how docs are written for AI?
Does Garnett think docs must be reformatted or simplified for AI to work?
Review Questions
- What specific features (beyond “it answers questions”) does Garnett say are necessary for docs AI to be trustworthy?
- How would you design a feedback loop using uncertainty metrics to prioritize documentation updates?
- Why does Garnett argue that context and synthesis matter more than copy-editing when evaluating docs AI?
Key Points
1. Use AI in documentation primarily through retrieval-augmented generation (RAG) so answers are grounded in your own corpus rather than model memory.
2. Treat uncertainty as a feature: systems should flag when they’re not confident, and teams should use that signal to improve docs.
3. Require valid citations and traceability so users can verify answers and follow links back to authoritative sources.
4. Protect the highest-value docs work (context, synthesis, and helping readers connect ideas) while using AI to reduce reactive coverage work.
5. Integrate multiple corpora (docs, community discussions, courses, support-adjacent content) to expand the “routes” users can take to reach knowledge.
6. Turn chatbot analytics into an editorial workflow (e.g., Jira tickets) so AI becomes a feedback loop, not a layer that silently guesses.
7. Don’t degrade documentation best practices (like tables) for AI convenience; make the AI adapt to the docs, not the other way around.