
Lana Brindley - More than words: Reviewing and updating your information architecture

Write the Docs · 6 min read

Based on Write the Docs' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

Create a content map that inventories page titles, URLs, and content types so structural problems become visible at a glance.

Briefing

Apartment marketing language may sound “architecturally designed,” but the real lesson is about documentation: words and structure matter only if they’re designed for how people actually need to find and use information. Lana Brindley frames information architecture as the documentation equivalent of good building design—fixtures and furniture can be fine, yet the layout can still make the space unusable. The core finding is that teams should treat documentation structure as an intentional system: assess what exists, map content, understand readers’ goals, then implement changes with constraints and measure results.

The process starts with a hard look at current content by creating a content map. Brindley recommends capturing at least the top levels of the hierarchy—page titles, URLs, and content types—so patterns become visible when zooming out. In her example, one top-level bucket dominates, some sections are disproportionately large, and much content sits higher in the hierarchy than expected. The map also reveals content-type problems: using DITA-style categories (concept, task, reference) makes it easier to spot when prose explanations, step-by-step procedures, and lookup information are mixed together. A common failure mode is interwoven concepts and tasks, followed by more tasks, with little reference material—an arrangement that frustrates beginners who need concepts and slows down experienced readers who need to jump directly to tasks and reference.
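To make that concrete, here is a minimal sketch of such an inventory in Python; the page titles, URLs, and types are hypothetical, and the talk does not prescribe any particular tooling:

```python
from collections import Counter

# Hypothetical content map: each entry records a page's title, URL,
# depth in the hierarchy, and DITA-style content type.
content_map = [
    {"title": "What is Widget?", "url": "/docs/widget/overview", "depth": 2, "type": "concept"},
    {"title": "Install Widget", "url": "/docs/widget/install", "depth": 2, "type": "task"},
    {"title": "Configure Widget", "url": "/docs/widget/configure", "depth": 2, "type": "task"},
    {"title": "CLI options", "url": "/docs/widget/cli", "depth": 2, "type": "reference"},
    {"title": "Troubleshooting", "url": "/docs/troubleshooting", "depth": 1, "type": "task"},
]

# Zoom out: count pages per top-level bucket and per content type.
buckets = Counter(page["url"].split("/")[2] for page in content_map)
types = Counter(page["type"] for page in content_map)

print("Pages per top-level bucket:", dict(buckets))
print("Pages per content type:", dict(types))
```

Zoomed out this way, imbalances such as one dominant top-level bucket or a near-absence of reference pages show up directly in the counts.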

Next comes reader research, using “readers” rather than “users” as the guiding lens. Documentation is read by people who may never operate the product directly—sales, support, community members evaluating participation, and others trying to decide whether to adopt or contribute. Brindley argues that the right question isn’t just what readers want, but why they need it—pushing from “how to choose a drill” to the deeper job-to-be-done (“how to install beds in unconventional places”). When time is limited, she still recommends contacting a small set of real readers (e.g., one-on-one interviews) and then using what’s learned to build a short survey for broader validation.

To prioritize what to write and fix, Brindley describes a lightweight user task analysis. Teams identify a few reader types (often beginner, intermediate, expert; or system administrator, sales, support) and list the major tasks each group tries to accomplish. The key is scoring whether each reader type will use the documentation to complete each task—not whether they do the task in general. The highest-scoring items become “critical paths,” signaling where documentation effort will have the biggest impact.
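A minimal sketch of that scoring matrix in Python (the reader types, tasks, and scores are illustrative, not from the talk):

```python
# Hypothetical user task analysis: each score estimates whether that
# reader type will use the documentation to complete that task
# (3 = high, 2 = medium, 1 = low).
scores = {
    "install":      {"beginner": 3, "intermediate": 2, "expert": 1},
    "configure":    {"beginner": 2, "intermediate": 3, "expert": 2},
    "troubleshoot": {"beginner": 1, "intermediate": 2, "expert": 3},
}

# Total each task across reader types; the highest totals are the
# "critical paths" where documentation effort matters most.
totals = {task: sum(by_reader.values()) for task, by_reader in scores.items()}
for task, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{task}: {total}")
```

Sorting the totals makes the critical paths fall out at the top of the list.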

Finally, structure must match how people search and navigate. Hierarchies work for organizing large collections, but they fail when readers need to find or discover information. Brindley emphasizes multiple navigation paths (direct search, on-site search, menus, landing pages, and next-step behavior) and the need to guide readers from understanding to discovery, especially when products use internal terminology that outsiders don't know.

Implementation follows research, but reality sets the pace. Teams should flatten overly deep hierarchies where needed, add intelligence such as keywords and related content, and build an implementation plan around their constraints (time, people, money) using a minimum viable product approach. Success must be measured from the start: capture baselines (e.g., dwell time) and keep monitoring after changes ship. If a redesign misses the mark, treat the architecture as iterative: gather feedback, adjust, and keep improving.
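As a rough sketch of the measurement step (the numbers are invented; real data would come from an analytics tool), a baseline comparison of dwell time might look like:

```python
from statistics import median

# Hypothetical dwell times in seconds, sampled before and after the
# restructure; in practice these come from your analytics platform.
baseline_dwell = [35, 42, 28, 51, 33, 47, 39]
after_dwell = [58, 61, 44, 72, 49, 66, 55]

# Compare medians rather than means so a few very long sessions
# don't dominate the picture.
before, after = median(baseline_dwell), median(after_dwell)
change = (after - before) / before * 100
print(f"Median dwell time: {before}s -> {after}s ({change:+.0f}%)")
```

Whether longer or shorter dwell time counts as success depends on the page: more time on a concept page may mean engagement, while more time on a task page may mean confusion, so interpret the numbers against the reader goals identified earlier.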

Cornell Notes

Documentation architecture should be treated like building design: the “materials” (good writing) don’t matter if the layout prevents people from finding and using information. Brindley’s workflow begins with a content map that inventories page titles, URLs, and DITA-style content types (concept, task, reference) to expose structural and content-type imbalances. Next, teams research the actual readers—often not the same as end users—and identify the problems they’re trying to solve, including the deeper “why” behind their needs. A user task analysis then prioritizes content using a scoring matrix to find “critical paths.” Finally, teams implement within constraints (time, people, money), start with a minimum viable product, and measure outcomes before iterating.

How does a content map help teams diagnose documentation problems faster than reading page-by-page?

A content map turns the hierarchy into something you can scan. Brindley recommends listing page titles, URLs, and content types at least for the top few levels, then zooming out to spot patterns: oversized top-level buckets, uneven second-level headings, and content sitting too high or too deep. In her example, most content clustered in one major bucket and one second-level area was much larger than the rest. The map also makes content-type mixing obvious—like having concepts interwoven with tasks, tasks followed by more tasks, and very little reference content.

Why use DITA-style content types (concept, task, reference) when reviewing information architecture?

Because it provides a simple diagnostic lens tied to reader needs. If content answers “what is it,” it’s a concept (usually explanatory prose). If it answers “how do I do it,” it’s a task (typically numbered steps). If it answers “what else do I need to know,” it’s reference (tables/lists for lookup, like command options). Brindley says good documentation sequences concepts → tasks → reference, and that mixed content becomes a red flag: beginners struggle when concepts are missing, while experienced readers struggle when tasks and reference are hard to locate.
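As an illustration only (Brindley doesn't prescribe tooling for this), a crude heuristic pass over page text can produce first-guess labels to review by hand:

```python
import re

def guess_content_type(page_text: str) -> str:
    """Crude first-pass heuristic, meant for hand review: numbered
    steps suggest a task, table markup suggests reference, and
    everything else defaults to concept."""
    if re.search(r"^\s*\d+\.\s", page_text, flags=re.MULTILINE):
        return "task"
    if re.search(r"^\s*\|.*\|", page_text, flags=re.MULTILINE):
        return "reference"
    return "concept"

print(guess_content_type("1. Download the installer.\n2. Run it."))  # task
print(guess_content_type("| Option | Default |\n| --url | none |"))  # reference
print(guess_content_type("Widgets connect services together."))      # concept
```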

What’s the difference between “users” and “readers,” and why does it change the documentation plan?

“Users” implies people who operate the product; “readers” includes anyone who reads documentation for a decision or evaluation. Brindley notes that sales and support staff may rely on docs more than end users, and in open source contexts people read docs to decide whether to join, use, or fork a project. That means reader research and task analysis must target the actual information-seekers, not only those who will run the software.

How does Brindley turn vague reader goals into actionable documentation requirements?

She pushes teams to ask the deeper “why.” Instead of stopping at “how to choose a drill,” she uses the drill example to show that the real need might be “install a bed over the staircase,” which changes what the documentation should contain. The same logic applies to docs: readers may know the outcome they want but not the internal terms used by the product, so navigation and content labeling must support discovery beyond exact keyword matches.

What does a user task analysis prioritize, and how is it scored?

It prioritizes content by estimating whether each reader type will use the documentation to complete each task. Brindley recommends defining a few reader types (e.g., beginner/intermediate/expert or system administrator/sales/support) and listing major tasks like installing, configuring, and troubleshooting. Then fill a matrix with likelihood estimates for each reader-task pair. She suggests scoring 3 (high), 2 (medium), 1 (low). The highest totals become “critical paths”—the tasks where documentation effort will matter most.

Why can a flattened hierarchy and better navigation outperform a traditional tree structure?

Hierarchies help organize collections, but they don't always help people find what they need to complete a task. Brindley compares it to unpacking boxes after moving: putting items "somewhere reasonable" makes later retrieval hard. In docs, people may land on deep pages, use search, click menus, or follow related-content links. If navigation assumes readers already know the internal terminology, beginners can't find the right material; flattening plus keyword/related-content intelligence can guide them from understanding to discovery.
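One simple way to add that keyword/related-content intelligence is to compare pages' keyword tags; a minimal sketch with hypothetical tags, using Jaccard overlap:

```python
# Hypothetical keyword tags per page; in practice these would come
# from page front matter or a metadata audit.
keywords = {
    "/docs/widget/install": {"install", "setup", "download"},
    "/docs/widget/configure": {"setup", "config", "options"},
    "/docs/widget/cli": {"options", "flags", "reference"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two keyword sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Suggest the most similar other page as a "related content" link.
for url, tags in keywords.items():
    related = max(
        (other for other in keywords if other != url),
        key=lambda other: jaccard(tags, keywords[other]),
    )
    print(f"{url} -> related: {related}")
```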

Review Questions

  1. When reviewing an existing documentation set, what specific evidence from a content map would indicate that concept/task/reference are imbalanced?
  2. How would you design a scoring matrix for critical paths if your documentation is read by stakeholders who never operate the product?
  3. What baseline metrics would you choose to measure success before and after reorganizing navigation or adding site search?

Key Points

  1. Create a content map that inventories page titles, URLs, and content types so structural problems become visible at a glance.
  2. Use DITA-style categories (concept, task, reference) to detect when concepts, procedures, and lookup information are mixed in ways that block both beginners and experts.
  3. Treat "readers" as the target audience, not just "users," because sales, support, and community evaluators often rely on documentation.
  4. Identify reader goals by asking the deeper "why," then design content and navigation to support discovery when readers don't know internal terminology.
  5. Prioritize work with a user task analysis matrix that scores whether each reader type will use docs for each task; focus on the highest-scoring "critical paths."
  6. Implement changes within constraints using a minimum viable product approach, then measure outcomes against baselines (e.g., dwell time) before iterating.
  7. Expect redesigns to need adjustment; documentation architecture should be treated as an ongoing, data-driven process rather than a one-time rebuild.

Highlights

  • Apartment marketing language becomes a metaphor: good "fixtures" in documentation don't help if the "layout" prevents people from using information.
  • A DITA-style content-type audit (concept/task/reference) quickly reveals a common failure: too many tasks, too few concepts and reference for lookup.
  • Reader research should include non-users, like sales, support, and community decision-makers, because they shape what documentation must accomplish.
  • Critical paths come from scoring whether each reader type will use docs to complete each task, not from assuming they will.
  • Navigation design matters as much as hierarchy depth; flattening plus keyword/related-content intelligence can help readers who don't know internal terms.

Topics

Mentioned

  • Lana Brindley
  • DITA