
Independent Thinking in the Age of AI | PKM Summit 2026

6 min read

Based on the video from the YouTube channel Zsolt's Visual Personal Knowledge Management. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

AI’s biggest risk to individuals is agency loss—tools that predict or draft can replace personal reasoning before it’s formed.

Briefing

AI’s biggest threat to thoughtful people isn’t job loss—it’s the erosion of personal agency: the risk that machines do the thinking while individuals drift into accepting AI-generated conclusions. The talk frames that danger through a simple question—how can people retain agency when prediction and generation tools increasingly supply answers before anyone has fully formed them? It ties the concern to a broader life problem: how to choose a next direction when success and stability leave room for doubt, echoing Dante’s restlessness and Steve Jobs’ warning not to live by other people’s thinking.

A central thread runs through everyday examples. Gmail’s sentence predictions raise a practical dilemma: when AI completes a thought, does the person still “drive,” or does the system steer the train of thought? The concern intensifies when AI drafts entire emails: approval at the end can hide that the underlying reasoning was never personally constructed. That leads to a second claim: text is a poor medium for how human minds work. Reading is linear and word-by-word, forcing the brain to stitch meaning together while constantly checking fit with prior knowledge. The talk contrasts this with visual recognition, where pattern detection can be near-instant, like spotting an “odd one out” in a diagram before a text-based AI response even arrives.

Evolutionary psychology supplies the rationale. Human cognition is portrayed as “navigation” rather than abstract contemplation: ancestors survived by tracking space, movement, and concrete cues, supported by mechanisms such as mirror neurons that simulate observed actions. From there, the talk argues that visual thinking creates a feedback loop: drawing an understanding lets people verify whether their mental model matches what’s on the page, making comprehension and questioning easier. Research on memory is used to bolster the point—concrete nouns tend to stick better than abstract concepts, and illustrations help convert abstraction into something graspable.

To make the case tangible, the talk uses two games. In the “game of 15,” adults struggle when numbers must be juggled mentally, but the same problem becomes trivial when the layout is visualized. A magic square demonstrates the same idea: representing the problem so structure is visible turns a hard search into an obvious pattern. A quote from Saul Emmerich frames the principle—solving means representing the problem so the solution becomes transparent. The talk extends this into a method: try multiple representations through iterative experimentation until a better tool reveals a path that wasn’t obvious at first.

The talk then targets “lazy brains,” aligning with System 1 thinking from Daniel Kahneman’s framework: fast, reactive pattern-matching that often jumps to conclusions. To counter it, it recommends injecting exploration between seeing and judging—collecting additional data points and using notes to overcome short-term memory limits. But notes shouldn’t be only linear text. Diagrams are presented as “visual prosthetics,” akin to switching from a Walkman’s linear playback to iTunes’ whole-library view.

Finally, the talk addresses a paradox: people naturally organize physical belongings when packing, yet default to text when organizing ideas. The proposed fix is to build a library of “visual primitives”—spatial layouts and diagram types that support thinking, not just decoration. It demonstrates this with circle maps and a double bubble map to compare perspectives in a real dilemma: quitting a day job to start a meaningful business. By mapping what each person values and fears, differences become explicit and empathy becomes structured.

The closing pitch is practical PKM design. Traditional note systems are described as one-dimensional (individual pages) or two-dimensional (folders and linear text), while modern linked-note tools still present a wall of text. The talk promotes “4D visual PKM,” where visuals lead and text becomes metadata—using tools and workflows centered on Obsidian and an “Excalidraw”-style visual-first approach. The goal is independent thinking: keep agency by choosing representations that make reasoning visible, verifiable, and personally owned.

Cornell Notes

The talk argues that AI’s most serious risk is not unemployment but loss of agency—people may accept AI-generated completions and drafts without doing the thinking themselves. It links that risk to a deeper mismatch between how humans process information and how text forces it to be processed: reading is linear and effortful, while cognition evolved for spatial, visual navigation and fast pattern recognition. Visual representations can create a feedback loop that makes understanding more verifiable and problem-solving more transparent, as shown by games where a visual layout turns a hard mental task into an easy one. To counter “lazy brain” System 1 snap judgments, the talk recommends inserting exploration between seeing and judging and using notes/diagrams to hold more than short-term memory can manage. The practical takeaway is to build a library of visual primitives and use visual-first PKM (visual diagrams with text as metadata) to support independent thinking.

How does AI prediction in everyday tools threaten personal agency, even when outputs look correct?

The talk uses Gmail’s autocomplete as a concrete example: if AI predicts how a sentence should end, the person may be “nodding along” rather than completing a personally authored train of thought. The same concern applies when AI generates an entire email—approval at the end can mask that the reasoning and drafting process may have been outsourced. The core agency question becomes whether the person would have produced a different completion or structure if AI hadn’t suggested it.

Why does the talk claim text is a weaker medium for thinking than visuals?

Text is described as “keyhole” viewing: information arrives one word at a time, forcing the mind to stitch meaning together and continuously test how new pieces fit existing knowledge. The talk contrasts that with visual recognition, where pattern detection (like spotting an “odd one out” in a diagram) can be instantaneous. The argument is that humans are hardwired for visual/spatial processing, so diagrams align better with how comprehension happens.

What evolutionary and neuroscience ideas are used to justify visual thinking?

The talk frames cognition as navigation and movement simulation rather than abstract reading. It invokes the idea of a “mind on the hoof” (thinking as on-the-move survival) and cites mirror neurons: when observing someone’s motion, the brain simulates that movement internally to make sense of it. It also references a model where thinking evolved as simulation of movement, supporting the claim that spatial visual thinking is natural and powerful.

How do the “game of 15” and magic square illustrate the power of representation?

In the “game of 15,” players take turns picking numbers from 1 to 9 without repetition, trying to be the first to hold three that sum to 15. When the numbers are handled mentally, even adults struggle because they must juggle many possibilities in short-term memory. When the same problem is represented with a different visual layout (circles vs. axes), the winning move becomes obvious quickly, so the representation makes the solution transparent. The magic square reinforces this: every row, column, and diagonal of a 3×3 magic square sums to 15, so laying the nine numbers on that grid turns the number game into tic-tac-toe, where winning lines are seen rather than calculated.
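The magic-square isomorphism can be checked in a few lines of Python (a sketch to illustrate the point, not code from the talk): every triple of distinct numbers from 1 to 9 that sums to 15 is exactly one of the eight winning lines of a 3×3 magic square, which is why the game of 15 is tic-tac-toe in disguise.

```python
from itertools import combinations

# A 3x3 magic square: every row, column, and diagonal sums to 15.
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

# The 8 "winning lines" of tic-tac-toe on this grid.
rows = [set(r) for r in square]
cols = [set(c) for c in zip(*square)]
diags = [{square[i][i] for i in range(3)},
         {square[i][2 - i] for i in range(3)}]
lines = rows + cols + diags

# Every triple of distinct numbers 1..9 that sums to 15...
triples = [set(t) for t in combinations(range(1, 10), 3) if sum(t) == 15]

# ...is exactly a magic-square line, and vice versa.
assert all(sum(line) == 15 for line in lines)
assert len(triples) == 8 and all(t in lines for t in triples)
print("All 8 sum-to-15 triples are magic-square lines")
```

Picking numbers mentally means searching those eight triples in working memory; on the grid, each one is a visible row, column, or diagonal.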

What does the talk recommend to counter System 1 snap judgments?

System 1 is portrayed as reactive pattern-matching that jumps from seeing to judging. The countermeasure is to inject exploration between those steps: gather additional data points before concluding. Because short-term memory holds only about 7±2 items, the talk recommends externalizing information through notes—so exploration doesn’t collapse under memory limits.

How do circle maps and a double bubble map help with dilemmas involving other people?

The talk demonstrates a structured empathy method. Two circle maps are created for the same dilemma (quitting a day job to start a business): one from the aspiring entrepreneur’s frame (e.g., creative control, freedom, being one’s own boss, excitement plus risk) and one from the wife’s frame (pride plus fears like losing weekends/holidays, higher home stress, unpredictable schedule, and possibly needing to work more). Those maps feed into a double bubble map: similarities go in the center, while differentiators on each side are paired to reveal what each person gains or loses—making differences explicit and easier to discuss.
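The set logic behind combining two circle maps into a double bubble map can be sketched in Python. The attribute lists below are illustrative paraphrases of the talk's example, not its exact wording:

```python
# Two "circle maps": each person's associations with the same dilemma.
# Entries are illustrative, paraphrasing the talk's example.
entrepreneur = {"pride", "risk", "creative control", "freedom",
                "being own boss", "excitement"}
spouse = {"pride", "risk", "lost weekends", "home stress",
          "unpredictable schedule"}

# Double bubble map: shared items go in the center,
# differentiators hang off each person's side.
shared = entrepreneur & spouse
entrepreneur_only = entrepreneur - spouse
spouse_only = spouse - entrepreneur

print("shared:", sorted(shared))
print("entrepreneur only:", sorted(entrepreneur_only))
print("spouse only:", sorted(spouse_only))
```

The diagram does the same intersection and difference visually, which is what makes the disagreement explicit and discussable rather than implicit.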

Review Questions

  1. What kinds of AI behaviors (prediction vs full generation) most directly threaten agency, and how could a person test whether their own reasoning is still driving the outcome?
  2. Explain how visual representation changes problem-solving difficulty in the “game of 15” example. What mental bottleneck does the visual layout bypass?
  3. How does the talk connect System 1 thinking to the need for exploration, and why does short-term memory make notes (or diagrams) necessary?

Key Points

  1. AI’s biggest risk to individuals is agency loss—tools that predict or draft can replace personal reasoning before it’s formed.
  2. Prediction features (like Gmail autocomplete) raise a practical test: would the person choose a different completion without AI suggestions?
  3. Text forces linear, word-by-word processing that can be slower and less aligned with how cognition evolved for spatial navigation.
  4. Visual representations can create a feedback loop that verifies whether a mental model matches reality on the page.
  5. Problem-solving improves when representations make structure transparent; iterative experimentation helps discover better tools.
  6. Countering snap judgments requires inserting exploration between seeing and judging, supported by notes to overcome short-term memory limits.
  7. Independent thinking can be strengthened by building a library of visual primitives and using visual-first PKM where diagrams lead and text becomes metadata.

Highlights

The talk frames AI’s primary threat as losing agency, not losing jobs—because AI can do the thinking while people accept the output.
A key claim: humans are built for visual/spatial navigation, so diagrams can outperform text for comprehension and pattern detection.
Games and the magic square demonstrate the same principle: when structure is visible, solutions become transparent and fast.
System 1 snap judgments can be slowed by injecting exploration between seeing and judging, then externalizing data to beat short-term memory limits.
The practical method pairs circle maps with a double bubble map to compare perspectives and empathize in real-life dilemmas.

Topics

  • Independent Thinking
  • Agency and AI
  • Visual Cognition
  • System 1 vs System 2
  • Visual PKM
