Learning in public with Personal Knowledge Management
Based on Nicole van der Hoeven's YouTube video. If you like this content, support the original creator by watching, liking, and subscribing.
Treat PKM like an observable system: make learning artifacts visible and findable so accountability and feedback can drive improvement.
Briefing
Personal knowledge management (PKM) becomes more effective when learning is treated like an observable system: make ideas public enough to create accountability, feedback, and visibility—then continuously refine them as new signals arrive. The core insight ties together two worlds: performance engineering, where reliability improves by “removing the lid” on a black box, and PKM, where understanding improves when notes and learning processes are made findable and shareable rather than kept private.
In the performance-engineering framing, teams often start with systems they can’t fully see—only fragments of the “story” are known. The first move isn’t necessarily to add more CPU or storage. It’s to make the system observable by instrumenting it: expose internal state, publish outputs, and set up ongoing monitoring. That visibility can change behavior on its own. A simple metric—like CPU usage—gives teams a number to optimize, and people often adjust their work without being explicitly told to. The same dynamic, the talk argues, can apply to learning: when people can see what someone is doing, they ask better questions, offer more interaction, and provide signals that improve the work.
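The "remove the lid" idea can be pictured with a toy metric counter (a hypothetical sketch, not code from the talk): once internal state is exposed as a number, there is something concrete to watch, graph, and optimize.

```python
class Metric:
    """A minimal, Prometheus-style counter: internal state made readable."""

    def __init__(self, name):
        self.name = name
        self.value = 0

    def inc(self, amount=1):
        self.value += amount

    def expose(self):
        # Publishing the number is the whole trick: once visible,
        # it can be monitored and optimized without being asked for.
        return f"{self.name} {self.value}"

requests_served = Metric("requests_served_total")
for _ in range(3):
    requests_served.inc()

print(requests_served.expose())
```

The same shape applies to learning: a public #TIL post is the `expose()` call for a private note.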
From there, the talk lays out why “learning in public” matters for PKM. Accountability rises because commitments are harder to ignore when others can see progress. Feedback becomes more likely because publishing increases the chance of comments, critiques, and alternative perspectives. Visibility helps careers too: a public learning record functions like a portfolio, making it easier for people to find and trust someone’s expertise. Clarity improves as well—trying to explain a topic forces the learner to identify the “north star” of what matters, not every detail. Finally, public iteration is positioned as scalable: instead of waiting years to ship polished content, learning can be treated as ongoing, observable work.
The practical model is built as a four-step loop: make, instrument, monitor, and refactor. “Make the thing” means documenting consistently—especially through daily notes—starting from what already sparks interest rather than forcing a predetermined topic. “Ship it” can require pre-commitments: publicly stating a learning plan and timeline so the work actually reaches an audience. “Instrumenting” focuses on findability for both the self and others, using search, properties, links, tags, bookmarks, and visual graph views (the talk references Obsidian workflows and graph tools). External findability comes from publishing—such as using Obsidian Publish—and even lightweight formats like #TIL posts to expose small learning increments.
“Monitoring” shifts from one-time posting to continuous observation: listening for responses and building mechanisms that invite constructive criticism. The talk emphasizes cultivating “ritual dissent” (drawing on Edward de Bono’s Six Thinking Hats) so communities become safe places for feedback that improves explanations and coverage. “Refactoring” then means iterating without changing the core purpose: recombining modular “Lego blocks” of notes, swallowing pride when feedback demands change, and treating notes as continuously evolving artifacts rather than finished documents. The result is a PKM system that learns in cycles, powered by public signals and ongoing refinement.
Cornell Notes
The talk argues that PKM improves when learning is treated like an observable system. Visibility creates accountability, increases feedback, and forces clarity about what actually matters. A four-step loop—make, instrument, monitor, refactor—turns “learning in public” into a repeatable practice: document daily notes, ship with pre-commitments, make notes findable (search, links, tags, graphs, and publishing), and then listen for responses. Feedback is strengthened by building a “ritual dissent” culture where people can critique kindly and constructively. Notes should be refactored continuously, remixing modular ideas as new understanding arrives rather than treating them as finished products.
How does “observability” in performance engineering translate into a PKM strategy?
Why does publishing learning increase accountability, feedback, and clarity?
What does the “make, instrument, monitor, refactor” loop look like in practice?
How can someone “ship” learning without waiting for polished output?
What does “monitoring” mean for a PKM system beyond posting once?
Why is “ritual dissent” important, and how does it connect to Edward de Bono’s Six Thinking Hats?
Review Questions
- What specific behaviors change when a learning process becomes observable, and how does that mirror observability in software systems?
- Which tools and practices help with “instrumenting” a PKM system for both self and external audiences, and why does visual graphing matter?
- How does the talk’s four-step loop prevent PKM from becoming a static archive of finished notes?
Key Points
1. Treat PKM like an observable system: make learning artifacts visible and findable so accountability and feedback can drive improvement.
2. Start with documentation (daily notes and resonance) rather than waiting for polished “content,” and follow what you’re already interested in.
3. Use pre-commitments to ensure learning gets shipped to an audience on a timeline, not just kept as private intention.
4. Instrument notes for retrieval using search, properties, links, tags, bookmarks, and visual graph views; publish so that progress is observable to others.
5. Monitor continuously by listening for responses and using community interaction to generate more signals over time.
6. Build a culture of constructive critique (“ritual dissent”) so feedback is safe, kind, and specific, improving explanations and coverage.
7. Refactor by remixing modular note “Lego blocks” and updating ideas as understanding evolves, treating notes as continuously improving rather than finished products.