
Learning in public with Personal Knowledge Management

5 min read

Based on Nicole van der Hoeven's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Treat PKM like an observable system: make learning artifacts visible and findable so accountability and feedback can drive improvement.

Briefing

Personal knowledge management (PKM) becomes more effective when learning is treated like an observable system: make ideas public enough to create accountability, feedback, and visibility—then continuously refine them as new signals arrive. The core insight ties together two worlds: performance engineering, where reliability improves by “removing the lid” on a black box, and PKM, where understanding improves when notes and learning processes are made findable and shareable rather than kept private.

In the performance-engineering framing, teams often start with systems they can’t fully see—only fragments of the “story” are known. The first move isn’t necessarily to add more CPU or storage. It’s to make the system observable by instrumenting it: expose internal state, publish outputs, and set up ongoing monitoring. That visibility can change behavior on its own. A simple metric—like CPU usage—gives teams a number to optimize, and people often adjust their work without being explicitly told to. The same dynamic, the talk argues, can apply to learning: when people can see what someone is doing, they ask better questions, offer more interaction, and provide signals that improve the work.
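The "expose a metric and behavior changes" idea can be sketched in a few lines. Below is a minimal, hypothetical timing decorator (none of the names come from the talk) that makes a function observable by recording call counts and durations:

```python
import time
from collections import defaultdict

# In-memory metrics store: function name -> list of call durations (seconds).
metrics = defaultdict(list)

def instrument(fn):
    """Wrap a function so every call records its duration -- 'removing the lid'."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        metrics[fn.__name__].append(time.perf_counter() - start)
        return result
    return wrapper

@instrument
def handle_request(n):
    return sum(range(n))  # stand-in for real work

for _ in range(3):
    handle_request(10_000)

# Once the numbers are visible, there is something concrete to optimize.
print(f"handle_request: {len(metrics['handle_request'])} calls, "
      f"max {max(metrics['handle_request']):.6f}s")
```

The point of the sketch is not the timing itself but that exposing the number at all gives people something to watch and improve, which is the dynamic the talk maps onto learning.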

From there, the talk lays out why “learning in public” matters for PKM. Accountability rises because commitments are harder to ignore when others can see progress. Feedback becomes more likely because publishing increases the chance of comments, critiques, and alternative perspectives. Visibility helps careers too: a public learning record functions like a portfolio, making it easier for people to find and trust someone’s expertise. Clarity improves as well—trying to explain a topic forces the learner to identify the “north star” of what matters, not every detail. Finally, public iteration is positioned as scalable: instead of waiting years to ship polished content, learning can be treated as ongoing, observable work.

The practical model is built as a four-step loop: make, instrument, monitor, and refactor. “Make the thing” means documenting consistently—especially through daily notes—starting from what already sparks interest rather than forcing a predetermined topic. “Ship it” can require pre-commitments: publicly stating a learning plan and timeline so the work actually reaches an audience. “Instrumenting” focuses on findability for both the self and others, using search, properties, links, tags, bookmarks, and visual graph views (the talk references Obsidian workflows and graph tools). External findability comes from publishing—such as using Obsidian Publish—and even lightweight formats like #TIL posts to expose small learning increments.
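As a rough illustration of "instrumenting" notes for findability, a small script could scan a vault of Markdown daily notes for #TIL entries to surface shareable increments. The vault path, file layout, and example note below are assumptions for the sketch, not details from the talk:

```python
import re
from pathlib import Path

TIL_TAG = re.compile(r"#TIL\b")

def find_til_lines(vault_dir):
    """Return (filename, line) pairs for every #TIL entry in a vault of .md notes."""
    hits = []
    for note in Path(vault_dir).rglob("*.md"):
        for line in note.read_text(encoding="utf-8").splitlines():
            if TIL_TAG.search(line):
                hits.append((note.name, line.strip()))
    return hits

# Hypothetical vault layout, created here just so the sketch runs end to end.
vault = Path("vault/daily")
vault.mkdir(parents=True, exist_ok=True)
(vault / "2024-05-01.md").write_text(
    "#TIL Obsidian Publish can expose a subset of a vault.\n"
    "Unrelated meeting note.\n", encoding="utf-8")

for name, line in find_til_lines("vault"):
    print(f"{name}: {line}")
```

In practice the same idea is usually handled inside the note-taking tool via tag search, but the sketch shows why consistent tagging makes small learning increments mechanically retrievable.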

“Monitoring” shifts from one-time posting to continuous observation: listening for responses and building mechanisms that invite constructive criticism. The talk emphasizes cultivating “ritual dissent” (drawing on Edward de Bono’s Six Thinking Hats) so communities become safe places for feedback that improves explanations and coverage. “Refactoring” then means iterating without changing the core purpose—recombining modular “Lego blocks” of notes, swallowing pride when feedback demands change, and treating notes as continuously evolving artifacts rather than finished documents. The result is a PKM system that learns in cycles, powered by public signals and ongoing refinement.

Cornell Notes

The talk argues that PKM improves when learning is treated like an observable system. Visibility creates accountability, increases feedback, and forces clarity about what actually matters. A four-step loop—make, instrument, monitor, refactor—turns “learning in public” into a repeatable practice: document daily notes, ship with pre-commitments, make notes findable (search, links, tags, graphs, and publishing), and then listen for responses. Feedback is strengthened by building a “ritual dissent” culture where people can critique kindly and constructively. Notes should be refactored continuously, remixing modular ideas as new understanding arrives rather than treating them as finished products.

How does “observability” in performance engineering translate into a PKM strategy?

In performance engineering, observability means removing the “lid” from a black box so teams can see internal state and behavior. The talk claims that simply exposing a metric can change behavior: once people have a number (e.g., CPU usage), they often optimize it without being told. In PKM, the parallel is making learning artifacts visible and findable—so others can see progress, ask questions, and provide signals that improve the system of notes.

Why does publishing learning increase accountability, feedback, and clarity?

Accountability rises because public commitments invite follow-up questions when progress stalls. Feedback becomes more likely because publishing creates more touchpoints for comments and critique; even one strong comment can add information that would otherwise be missed. Clarity improves because explaining a topic forces the learner to identify the “north star”—why it matters—rather than getting lost in every detail.

What does the “make, instrument, monitor, refactor” loop look like in practice?

Make: start documenting with daily notes and follow resonance (links, meeting notes, what you watch). Instrument: make notes findable for yourself (search, properties, links, tags, bookmarks, and visual graph views) and for others (publishing notes). Monitor: keep listening for responses and cultivate a feedback-friendly community. Refactor: update and remix notes based on what feedback reveals, treating notes as continuously evolving rather than finished documents.

How can someone “ship” learning without waiting for polished output?

The talk recommends pre-commitments: publicly stating an intended learning journey and timeline so the work reaches an audience. It also suggests lightweight publishing like #TIL posts (short updates such as a two-sentence learning note) to expose process without the burden of producing a full, polished artifact.

What does “monitoring” mean for a PKM system beyond posting once?

Monitoring means continuous observability: pulling in signals over time and turning them into improvements. The talk describes using social platforms to create rich interaction loops—highlighting others’ work, updating related notes, and linking back—so participation generates more participation. It also frames monitoring as making feedback understandable and actionable, often through visual representations (mirroring dashboard thinking in software).
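The "dashboard thinking" analogy can be made concrete with a toy aggregation: collect raw feedback events and roll them up into per-note counts. The note names and signal kinds below are invented for the example:

```python
from collections import Counter

# Hypothetical feedback signals gathered while "monitoring": (note, kind) pairs.
signals = [
    ("pkm-observability.md", "comment"),
    ("pkm-observability.md", "question"),
    ("six-thinking-hats.md", "correction"),
    ("pkm-observability.md", "comment"),
]

def dashboard(events):
    """Aggregate raw feedback into per-note counts -- a text stand-in for a dashboard."""
    counts = Counter(note for note, _ in events)
    # Surface the notes drawing the most attention first.
    return counts.most_common()

for note, n in dashboard(signals):
    print(f"{note}: {n} signal(s)")
```

Even this crude tally makes the feedback actionable: the note attracting the most signals is the one worth refactoring first.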

Why is “ritual dissent” important, and how does it connect to Edward de Bono’s Six Thinking Hats?

The talk argues that observability isn’t enough without constructive feedback. “Ritual dissent” creates a safe space for people to say what could be explained better or what was missing. It connects to Edward de Bono’s Six Thinking Hats by treating feedback roles like hats: in the talk’s example, the “black hat” corresponds to ritual dissent—challenging and scrutinizing work—while still encouraging kindness and compassion.

Review Questions

  1. What specific behaviors change when a learning process becomes observable, and how does that mirror observability in software systems?
  2. Which tools and practices help with “instrumenting” a PKM system for both self and external audiences, and why does visual graphing matter?
  3. How does the talk’s four-step loop prevent PKM from becoming a static archive of finished notes?

Key Points

  1. Treat PKM like an observable system: make learning artifacts visible and findable so accountability and feedback can drive improvement.
  2. Start with documentation (daily notes and resonance) rather than waiting for polished “content,” and follow what you’re already interested in.
  3. Use pre-commitments to ensure learning gets shipped to an audience on a timeline, not just kept as private intention.
  4. Instrument notes for retrieval using search, properties, links, tags, bookmarks, and visual graph views; publish so progress is observable to others.
  5. Monitor continuously by listening for responses and using community interaction to generate more signals over time.
  6. Build a culture of constructive critique (“ritual dissent”) so feedback is safe, kind, and specific—improving explanations and coverage.
  7. Refactor by remixing modular note “Lego blocks” and updating ideas as understanding evolves, treating notes as continuously improving rather than finished products.

Highlights

Observability can change behavior on its own: once teams have a metric to watch, they often optimize it without extra prompting—an effect the talk applies to learning artifacts.
A four-step loop—make, instrument, monitor, refactor—turns “learning in public” into a repeatable PKM workflow.
Publishing doesn’t have to be heavy: #TIL-style micro-updates can create a ripple effect and make learning feel safer for others.
Constructive criticism is framed as essential: “ritual dissent” creates a safe channel for feedback that improves what’s written and what’s missing.
Notes should behave like software: never truly “done,” always refactored as new feedback and understanding arrive.
