Lana Brindley - More than words: Reviewing and updating your information architecture
Based on Write the Docs' video on YouTube.
Briefing
Apartment marketing language may sound “architecturally designed,” but the real lesson is about documentation: words and structure matter only if they’re designed for how people actually need to find and use information. Lana Brindley frames information architecture as the documentation equivalent of good building design—fixtures and furniture can be fine, yet the layout can still make the space unusable. The core finding is that teams should treat documentation structure as an intentional system: assess what exists, map content, understand readers’ goals, then implement changes with constraints and measure results.
The process starts with a hard look at current content by creating a content map. Brindley recommends capturing at least the top levels of the hierarchy—page titles, URLs, and content types—so patterns become visible when zooming out. In her example, one top-level bucket dominates, some sections are disproportionately large, and much content sits higher in the hierarchy than expected. The map also reveals content-type problems: using DITA-style categories (concept, task, reference) makes it easier to spot when prose explanations, step-by-step procedures, and lookup information are mixed together. A common failure mode is interwoven concepts and tasks, followed by more tasks, with little reference material—an arrangement that frustrates beginners who need concepts and slows down experienced readers who need to jump directly to tasks and reference.
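The talk doesn't prescribe tooling, but a content map can start as a simple inventory processed with a short script. As a sketch—with page titles, URLs, and DITA-style types all invented for illustration—zooming out means counting pages per top-level bucket, per content type, and per hierarchy depth:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical inventory rows: (page title, URL, DITA-style content type).
pages = [
    ("What is Widget?",   "https://docs.example.com/widget/about",         "concept"),
    ("Install Widget",    "https://docs.example.com/widget/install",       "task"),
    ("Configure logging", "https://docs.example.com/widget/admin/logging", "task"),
    ("CLI reference",     "https://docs.example.com/widget/reference/cli", "reference"),
    ("Upgrade Widget",    "https://docs.example.com/widget/upgrade",       "task"),
]

def content_map(rows):
    """Summarize a docs inventory: pages per top-level bucket,
    pages per content type, and hierarchy depth per page."""
    buckets, types, depths = Counter(), Counter(), {}
    for title, url, ctype in rows:
        parts = urlparse(url).path.strip("/").split("/")
        buckets[parts[0]] += 1      # which top-level section the page lives in
        types[ctype] += 1           # concept / task / reference balance
        depths[title] = len(parts)  # how deep the page sits in the hierarchy
    return buckets, types, depths

buckets, types, depths = content_map(pages)
print(types)  # a skewed Counter here is the "mostly tasks, little reference" smell
```

Even on this toy data, the type counter makes the imbalance Brindley describes (tasks dominating, almost no reference material) visible without reading a single page.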
Next comes reader research, using “readers” rather than “users” as the guiding lens. Documentation is read by people who may never operate the product directly—sales, support, community members evaluating participation, and others trying to decide whether to adopt or contribute. Brindley argues that the right question isn’t just what readers want, but why they need it—pushing from “how to choose a drill” to the deeper job-to-be-done (“how to install beds in unconventional places”). When time is limited, she still recommends contacting a small set of real readers (e.g., one-on-one interviews) and then using what’s learned to build a short survey for broader validation.
To prioritize what to write and fix, Brindley describes a lightweight user task analysis. Teams identify a few reader types (often beginner, intermediate, expert; or system administrator, sales, support) and list the major tasks each group tries to accomplish. The key is scoring whether each reader type will use the documentation to complete each task—not whether they do the task in general. The highest-scoring items become “critical paths,” signaling where documentation effort will have the biggest impact.
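The scoring itself can be kept deliberately lightweight. A minimal sketch—reader types, tasks, and the 0–3 scores are all hypothetical, not from the talk—ranks tasks by how heavily readers will lean on the documentation for them:

```python
# Hypothetical user task analysis matrix. Score = will this reader type
# use the docs to complete this task? (0 = never, 3 = relies on them.)
scores = {
    "install the product":   {"beginner": 3, "intermediate": 2, "expert": 1},
    "tune performance":      {"beginner": 0, "intermediate": 2, "expert": 3},
    "evaluate for adoption": {"beginner": 3, "intermediate": 1, "expert": 0},
}

def critical_paths(matrix, top_n=2):
    """Rank tasks by total score across reader types; the top scorers
    are the 'critical paths' where documentation effort pays off most."""
    totals = {task: sum(by_reader.values()) for task, by_reader in matrix.items()}
    return sorted(totals, key=totals.get, reverse=True)[:top_n]

print(critical_paths(scores))
```

Note the scores follow the talk's key distinction: "tune performance" scores zero for beginners not because they never tune performance, but because they wouldn't use the docs to do it.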
Finally, structure must match how people search and navigate. Hierarchies work for organizing large collections, but they break down when readers need to search for or discover information they can't yet name. Brindley emphasizes supporting multiple navigation paths (external search engines, on-site search, menus, landing pages, and next-step links) and guiding readers from what they already understand toward discovery—especially when products use internal terminology that outsiders don't know.
Implementation follows research, but reality sets the pace. Teams should flatten overly deep hierarchies where needed, add intelligence such as keywords and related content, and build an implementation plan based on constraints (time, people, money) using a minimum viable product approach. Success must be measured from the start: capture baselines (e.g., dwell time) before changing anything, then monitor the same metrics after changes land. If a redesign misses the mark, treat that as part of the process—documentation architecture stays iterative: gather feedback from reader behavior and feedback channels, adjust, and keep improving.
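Measuring "from the start" just means capturing the metric before the redesign so the after-numbers have something to compare against. A minimal sketch with invented dwell-time samples (real figures would come from your analytics tool, and whether longer dwell time is good depends on the content type):

```python
from statistics import mean

# Illustrative dwell-time samples in seconds per page visit.
before = [35, 42, 28, 51, 39]  # baseline, captured before the redesign
after  = [48, 55, 47, 60, 52]  # same pages, after restructuring

def relative_change(baseline, current):
    """Percentage change in mean dwell time against the baseline."""
    b, c = mean(baseline), mean(current)
    return (c - b) / b * 100

print(f"{relative_change(before, after):+.1f}% change in mean dwell time")
```

The same comparison works for any baseline metric the team chooses; the point is that without the `before` list, the `after` list tells you nothing.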
Cornell Notes
Documentation architecture should be treated like building design: the “materials” (good writing) don’t matter if the layout prevents people from finding and using information. Brindley’s workflow begins with a content map that inventories page titles, URLs, and DITA-style content types (concept, task, reference) to expose structural and content-type imbalances. Next, teams research the actual readers—often not the same as end users—and identify the problems they’re trying to solve, including the deeper “why” behind their needs. A user task analysis then prioritizes content using a scoring matrix to find “critical paths.” Finally, teams implement within constraints (time, people, money), start with a minimum viable product, and measure outcomes before iterating.
- How does a content map help teams diagnose documentation problems faster than reading page-by-page?
- Why use DITA-style content types (concept, task, reference) when reviewing information architecture?
- What’s the difference between “users” and “readers,” and why does it change the documentation plan?
- How does Brindley turn vague reader goals into actionable documentation requirements?
- What does a user task analysis prioritize, and how is it scored?
- Why can a flattened hierarchy and better navigation outperform a traditional tree structure?
Review Questions
- When reviewing an existing documentation set, what specific evidence from a content map would indicate that concept/task/reference are imbalanced?
- How would you design a scoring matrix for critical paths if your documentation is read by stakeholders who never operate the product?
- What baseline metrics would you choose to measure success before and after reorganizing navigation or adding site search?
Key Points
1. Create a content map that inventories page titles, URLs, and content types so structural problems become visible at a glance.
2. Use DITA-style categories (concept, task, reference) to detect when concepts, procedures, and lookup information are mixed in ways that block both beginners and experts.
3. Treat “readers” as the target audience, not just “users,” because sales, support, and community evaluators often rely on documentation.
4. Identify reader goals by asking the deeper “why,” then design content and navigation to support discovery when readers don’t know internal terminology.
5. Prioritize work with a user task analysis matrix that scores whether each reader type will use docs for each task; focus on the highest “critical path” scores.
6. Implement changes within constraints using a minimum viable product approach, then measure outcomes with baselines (e.g., dwell time) before iterating.
7. Expect redesigns to need adjustment; documentation architecture should be treated as an ongoing, data-driven process rather than a one-time rebuild.