
Alex Garnett - Docs AI Tooling is Better (and Better for Us) than You Think

Write the Docs

Based on Write the Docs's video on YouTube.

TL;DR

Use AI in documentation primarily through retrieval-augmented generation (RAG) so answers are grounded in your own corpus rather than model memory.

Briefing

Docs teams don’t need to choose between “AI everywhere” and “AI is poison.” Alex Garnett argues that the most practical, high-value use of AI in documentation is retrieval-augmented generation (RAG): connecting a model to a company’s own curated corpus so answers come from real sources, with citations and measurable uncertainty. That approach matters because it reduces hallucinations, preserves trust, and turns AI from a writing shortcut into a system for improving how users find and understand information.

Garnett starts by acknowledging the emotional squeeze around AI adoption—boosterism on one side, doomerism on the other—and the way it can wear down writers who are trying to do careful, source-driven work. He agrees AI is useful for small technical tasks (regexes, table conversions, query strings), but warns that many commercial systems still hallucinate, including citing nonexistent plugins. For documentation work, he says the key is protecting the parts of the job people genuinely value—often the intellectual work of context, synthesis, and helping readers connect ideas—while using AI to reduce the reactive, coverage-chasing burden.

From there, he pivots to RAG. Instead of relying on a model’s pretraining alone, RAG points it at a targeted set of materials—company docs, support content, community discussions, courses—so responses reflect the right context. Training at the scale of foundation models is expensive, but the “last step” of retrieval and grounding can be done by smaller vendors, which is why RAG is increasingly sold as an enterprise layer.

To make the case concrete, Garnett highlights Kappa, which he describes as embedding a “reliable LLM-powered chatbot” inside documentation. Users can ask questions via an “Ask AI” button, and the system answers by assembling sources from the documentation and other connected channels. Garnett emphasizes three operational principles that make this feel trustworthy: (1) the system flags uncertainty instead of pretending to know, (2) answers are citation-driven so users can trace where information came from, and (3) the chatbot is built to integrate multiple corpora.

He also frames docs AI as a feedback loop for editorial work. Kappa’s analytics—like the share of questions marked “uncertain,” plus breakdowns of what users ask—help teams identify gaps and create Jira tickets to update documentation. Garnett says this can even reveal accessibility issues in writing style, such as when docs imply compatibility (“you can use X with Y”) without explicitly stating what doesn’t work (“X is not supported with Y”), forcing readers (and the AI) to guess.

Finally, he argues that good docs AI should not require teams to strip out tables or simplify content for the model’s sake. Instead, the AI should move toward existing documentation best practices. He closes by stressing that adoption should be grounded in standards: insist on citations, evaluate accuracy, and support vendors whose systems align with professional credibility—because the broader ecosystem still includes mediocre or bad actors, especially around licensing and provenance.

Cornell Notes

Alex Garnett recommends using AI in documentation through retrieval-augmented generation (RAG), where a model is grounded in a company’s own corpus rather than answering from memory. This approach aims to reduce hallucinations and increase trust by pairing responses with valid citations and explicit uncertainty flags. He argues that the most valuable docs work—context, synthesis, and helping readers connect ideas—should be protected, while AI helps with the reactive parts of documentation. Using Kappa as an example, he describes how analytics (e.g., questions marked uncertain) can drive concrete editorial updates via Jira tickets. The result is a measurable feedback loop that improves docs accessibility and completeness without forcing teams to abandon their existing formatting and best practices.

Why does Garnett treat “writing the docs with AI” as less important than other uses?

He doesn’t deny that AI can draft or rephrase content, but he frames that use case as close to style linting or copy-editing—helpful, yet not the most consequential shift for documentation teams. The bigger opportunity is protecting the work people value most (context, synthesis, and helping readers connect knowledge) while using AI to reduce the reactive, coverage-chasing workload that drains writers. In his view, the most meaningful gains come when AI helps users find and understand information grounded in the right sources.

What is retrieval-augmented generation (RAG), and why does it matter for docs?

RAG takes a pretrained model and “points” it at a targeted corpus so answers are informed by specific materials—company docs, support content, community threads, courses, and more. Foundation model training is expensive, but retrieval and grounding can be added as a final step by vendors without massive in-house AI infrastructure. For documentation, RAG matters because it can replace vague model recall with source-based answers, lowering the risk of hallucinations and improving trust.
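The retrieve-then-ground step can be sketched with a toy bag-of-words retriever. This is a minimal illustration in pure stdlib Python; a real system would use learned embeddings and an actual LLM call, and the corpus, file names, and scoring here are assumptions for demonstration, not any vendor's implementation:

```python
import math
from collections import Counter

# Toy corpus standing in for a company's curated docs (illustrative only).
CORPUS = {
    "install.md": "Install the CLI with pip and authenticate with your API key.",
    "compat.md": "Plugin X is not supported with runtime Y. Use runtime Z instead.",
    "tables.md": "The pricing table lists limits per plan tier.",
}

def _vectorize(text: str) -> Counter:
    """Bag-of-words term counts; real systems use learned embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[tuple[str, float]]:
    """Return the top-k (source, score) pairs most similar to the query."""
    qv = _vectorize(query)
    scored = [(name, _cosine(qv, _vectorize(text))) for name, text in CORPUS.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that cites the retrieved sources explicitly."""
    sources = retrieve(query)
    context = "\n".join(f"[{name}] {CORPUS[name]}" for name, _ in sources)
    return f"Answer using ONLY these sources, citing [name]:\n{context}\n\nQ: {query}"

prompt = build_grounded_prompt("Is plugin X supported with runtime Y?")
```

The point of the sketch is the architecture: retrieval is cheap relative to pretraining, which is why the "last step" can be layered on by smaller vendors.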

What makes Kappa’s approach feel more reliable than generic chatbots?

Garnett highlights three reliability features: (1) uncertainty is flagged—users can see when the system isn’t sure, (2) responses are citation-driven so users can trace answers back to sources, and (3) the system integrates multiple corpora (docs plus community and other internal learning materials). He also notes that Kappa’s analytics show how often answers are uncertain, enabling teams to evaluate performance rather than relying on vibes.
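The uncertainty-flag principle can be sketched as a simple threshold on retrieval scores. The threshold value, field names, and scores below are assumptions chosen for illustration, not Kappa's actual behavior:

```python
# Assumed cutoff for "confident enough to answer"; not a real vendor setting.
UNCERTAINTY_THRESHOLD = 0.35

def answer_with_confidence(scored_sources: list[tuple[str, float]]) -> dict:
    """scored_sources: (source_name, similarity) pairs from a retriever,
    assumed sorted by score descending."""
    if not scored_sources or scored_sources[0][1] < UNCERTAINTY_THRESHOLD:
        # Flag uncertainty instead of guessing: the "reliable" behavior.
        return {"uncertain": True, "citations": [],
                "text": "I'm not confident the docs cover this; flagging for review."}
    # Only sources above the threshold become citations the user can trace.
    citations = [name for name, score in scored_sources
                 if score >= UNCERTAINTY_THRESHOLD]
    return {"uncertain": False, "citations": citations,
            "text": f"Answer grounded in: {', '.join(citations)}"}

print(answer_with_confidence([("compat.md", 0.82), ("install.md", 0.12)]))
print(answer_with_confidence([("tables.md", 0.08)]))
```

The design choice worth noting: the uncertain path returns a structured flag rather than free text, so downstream analytics can count it.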

How do AI analytics translate into actual documentation improvements?

He describes a workflow where the team reviews analytics—especially questions marked “uncertain”—to decide what to fix. Sometimes they adjust the sources used by the chatbot (e.g., ensuring course content is used effectively). The team then makes documentation edits and creates Jira tickets based on missing or incorrect information discovered through real user questions and chatbot conversations.
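That feedback loop can be sketched as a small aggregation over chatbot logs. The log format, topic labels, and threshold here are hypothetical, and the actual Jira API call is deliberately left out:

```python
from collections import Counter

# Hypothetical chatbot logs: (question topic, was the answer flagged uncertain?).
logs = [
    ("auth", True), ("auth", True), ("billing", False),
    ("auth", False), ("compat", True), ("compat", True), ("compat", True),
]

def ticket_candidates(logs: list[tuple[str, bool]],
                      min_uncertain: int = 2) -> list[str]:
    """Topics whose uncertain-answer count crosses a threshold become
    candidate doc-update tickets (filing to Jira is omitted here)."""
    uncertain = Counter(topic for topic, flagged in logs if flagged)
    return [f"Docs gap: '{topic}' ({n} uncertain answers)"
            for topic, n in uncertain.most_common() if n >= min_uncertain]

for ticket in ticket_candidates(logs):
    print(ticket)
```

Grouping by uncertain-answer count gives the team a ranked backlog driven by real user questions rather than intuition.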

What accessibility lesson does Garnett draw from how docs are written for AI?

He argues that writing by implication can fail both readers and AI. If docs say “you can use X with Y” without explicitly stating “X is not supported with Y,” users will try unsupported combinations and the chatbot may not know how to respond. Garnett treats this as an accessibility issue: clearer, explicit compatibility boundaries help reduce guesswork and improve user outcomes.

Does Garnett think docs must be reformatted or simplified for AI to work?

No. He criticizes “cargo cult” advice to remove tables or alter documentation solely so AI performs better. Instead, he insists that the AI should adapt to existing documentation best practices. In his view, citation-driven RAG can preserve rich formatting while still grounding answers in authoritative sources.

Review Questions

  1. What specific features (beyond “it answers questions”) does Garnett say are necessary for docs AI to be trustworthy?
  2. How would you design a feedback loop using uncertainty metrics to prioritize documentation updates?
  3. Why does Garnett argue that context and synthesis matter more than copy-editing when evaluating docs AI?

Key Points

  1. Use AI in documentation primarily through retrieval-augmented generation (RAG) so answers are grounded in your own corpus rather than model memory.
  2. Treat uncertainty as a feature: systems should flag when they’re not confident, and teams should use that signal to improve docs.
  3. Require valid citations and traceability so users can verify answers and follow links back to authoritative sources.
  4. Protect the highest-value docs work—context, synthesis, and helping readers connect ideas—while using AI to reduce reactive coverage work.
  5. Integrate multiple corpora (docs, community discussions, courses, support-adjacent content) to expand the “routes” users can take to reach knowledge.
  6. Turn chatbot analytics into an editorial workflow (e.g., Jira tickets) so AI becomes a feedback loop, not a layer that silently guesses.
  7. Don’t degrade documentation best practices (like tables) for AI convenience; make the AI adapt to the docs, not the other way around.

Highlights

  • RAG is positioned as the docs-friendly alternative to generic chat: it grounds answers in targeted sources and supports citation-driven trust.
  • Kappa’s “uncertain” flag is treated as an operational lever—teams can measure it and use it to decide what to fix in documentation.
  • The most important docs AI outcome isn’t faster writing; it’s better context and more accessible pathways to authoritative information.
  • Analytics from real questions can be converted into concrete Jira tickets, creating a measurable improvement loop for docs quality.

Topics

Mentioned

  • Temporal
  • Kappa
  • Docusaurus
  • Alex Garnett
  • Jody
  • William Butler Yeats
  • Emil
  • RAG
  • LMS
  • SEO
  • CLI