
I connected Claude Code to Obsidian and it made me 10x more productive

Noah Vincent · 6 min read

Based on Noah Vincent's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Generic AI output often stems from context resets, not from prompt quality alone.

Briefing

The core breakthrough here is treating Claude Code as an agent that lives *inside* a creator’s own Obsidian “second brain,” so every session starts with real, persistent context—project structure, writing rules, and prior decisions—rather than a blank slate. That shift matters because generic AI output often isn’t a prompt problem; it’s a context problem. When the system can read the vault from the first prompt onward, it can write in the user’s voice, follow established standards, and compound improvements over time.

A major pain point is that most AI workflows reset each chat: users reintroduce who they are, what they’re working on, and what background matters. Even with carefully crafted prompts, the model still lacks continuity and tends to produce generic results. The transcript argues that built-in “memory” features in tools like Claude and similar agents don’t solve this reliably because they store arbitrary snippets without hierarchy or editorial control—mixing irrelevant details into future work and sometimes inserting old journal fragments into new research or content.

The proposed fix is architectural: plug the AI directly into the knowledge system. Obsidian provides the second brain as local Markdown files in a folder on the computer, keeping data portable and human-readable. Claude Code then runs from the terminal and gains full access to the vault folder, with no chat-window limitations and no isolated conversation state. Instead of relying on opaque memory, the system uses two explicit Markdown files that Claude Code reads on startup.

At the center is `CLAUDE.md`, described as the "brain of the brain." It stores identity, projects, system architecture, writing rules, and guidelines. Claude Code can update this file as preferences and standards evolve, and the file itself can be meta-optimized: the user can ask Claude Code to restructure it so future outputs improve. The transcript also notes that `CLAUDE.md` can exist at multiple levels: a root version for the whole vault, plus additional versions inside frequently used folders (such as SOP directories) so the agent loads the right context when it navigates there.
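As a rough illustration (the headings, folder names, and rules below are hypothetical placeholders, not taken from the transcript), a root-level context file might look like:

```markdown
# CLAUDE.md — root context for the vault

## Identity
- Solo creator running a weekly newsletter and a YouTube channel

## Projects
- /Newsletter — drafts live in Newsletter/Drafts, published issues in Newsletter/Published

## Writing rules
- Short sentences, concrete examples, no filler
- Always ask for confirmation before overwriting an existing note

## Architecture
- Permanent notes: /Zettelkasten
- SOPs: /SOPs (folders there may carry their own CLAUDE.md with local context)
```

Because Claude Code reads this file at the start of every session, edits to it immediately change how all future sessions behave.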

Complementing that is `memory.md`, framed as a session log or continuity layer. After each work session, Claude Code records key decisions, patterns, and actions so it can resume where it left off next time. Over repeated sessions, the context becomes increasingly personalized, turning the setup into a compounding operating system rather than a one-off assistant.
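A session-log entry in `memory.md` could be as simple as a dated block (the date, issue number, and paths here are invented for illustration):

```markdown
## 2025-01-12 — Newsletter draft session
- Decision: switched subject lines to question form
- Pattern: drafts land best when the hook comes from a permanent note
- Next: expand the outline in Newsletter/Drafts/issue-42.md
```

Appending one such block per session gives the agent a scannable timeline it can read on startup to resume exactly where the last session ended.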

Practical payoff is shown through content workflows: Claude Code can search a large Zettelkasten-style network of permanent notes, extract relevant concepts, and draft a newsletter using a consistent IDA structure and the user’s formatting rules. It can also create “skills” (repeatable SOP-based commands) so workflows become one-line actions like `/skillname`. The transcript demonstrates turning a newsletter workflow into a reusable skill that later runs with different inputs.
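A sketch of what such a skill file might contain, assuming Claude Code's custom slash-command convention (a Markdown file under `.claude/commands/`, where `$ARGUMENTS` stands in for whatever follows the command); the file name and steps are hypothetical:

```markdown
<!-- .claude/commands/newsletter.md — invoked as /newsletter <topic> -->
1. Search the vault's permanent notes for notes related to: $ARGUMENTS
2. Present the matching notes and wait for my confirmation
3. Map the confirmed notes onto a narrative arc
4. Draft the newsletter following the formatting rules in CLAUDE.md
5. Save the draft to Newsletter/Drafts/ and report the file path
```

Once saved, the multi-step workflow collapses into a single command with a changing input, e.g. `/newsletter deep work`.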

Finally, the system extends beyond the vault using MCP (Model Context Protocol) to connect Claude Code to external tools such as task management and Tana capture/voice-note systems. That enables pipelines like fetching the latest voice note, translating/formatting it, and saving a newsletter draft back into the vault.
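Claude Code reads MCP server definitions from a project-level `.mcp.json` file; a minimal sketch, with the server name, script path, and environment variable invented for illustration, might look like:

```json
{
  "mcpServers": {
    "tana": {
      "command": "node",
      "args": ["./mcp-servers/tana/index.js"],
      "env": { "TANA_API_TOKEN": "..." }
    }
  }
}
```

Pointing `command` at a locally hosted script (rather than a remotely fetched package) is what enables the self-hosted, no-auto-update posture the transcript recommends.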

The tradeoffs are clear: terminal comfort is required, initial setup takes time, and security risks rise with powerful file access and integrations. The transcript highlights prompt injection and supply-chain attacks as key threats, recommending restricted permissions, disabling auto-updates for MCP components, and limiting access to the vault folder. Cost is positioned as favorable: Obsidian is free, Claude Code runs on a Claude Pro subscription, and MCP servers can be self-hosted, avoiding per-token API pricing that can make other AI productivity tools expensive for heavy use.

Cornell Notes

The setup pairs Claude Code with an Obsidian "second brain" so the agent reads the user's vault context on every session. Instead of relying on unreliable built-in memory that saves random snippets, it uses explicit Markdown context files: `CLAUDE.md` for identity, projects, rules, and architecture, and `memory.md` for session continuity (decisions, patterns, actions). Because Claude Code can access and modify files inside the vault, it can draft content from real notes, maintain consistent formatting, and improve over repeated sessions as the context files evolve. The workflow can be turned into reusable "skills" (SOP-like commands) and extended via MCP to pull from external tools like task management and Tana capture/voice notes. The result is persistent, controllable context that compounds productivity, at the cost of terminal setup and added security considerations.

Why does “generic AI output” happen even when prompts are good?

The transcript attributes it to context resets: each new chat starts from zero, forcing users to reintroduce identity, goals, and background. Without accumulated context, the model defaults to generic responses. It also argues that prompt tweaking can’t fully fix missing context—if the agent doesn’t know who the user is, what they’re building, and what prior work should be referenced, outputs remain generic.

What’s wrong with built-in memory features, and how does this system avoid that?

Built-in memory is described as broken because it saves arbitrary snippets automatically with no hierarchy, structure, or editorial judgment. That can mix projects together, store irrelevant details, and even insert old journal fragments into new content. The alternative is to keep context in human-readable, editable Markdown files inside the vault, so what gets used is controlled and structured.

What are `CLAUDE.md` and `memory.md`, and what does each one do?

`CLAUDE.md` is the "brain of the brain." Claude Code reads it automatically at startup and uses it to load identity, projects, system architecture, writing rules, and guidelines; it can also update the file when preferences change. `memory.md` acts as session continuity: it records key decisions, patterns, and actions after each session so the agent can resume where it left off next time. Together they provide persistent context without opaque memory.

How does the system turn one-off workflows into repeatable actions?

Claude Code can create “skills” by generating SOP-like command files. After completing a workflow, the user asks Claude Code to convert it into an SOP/skill, then runs it later using a slash command like `/skillname`. The transcript’s example includes a newsletter workflow that becomes a reusable agent with steps such as searching the vault, presenting notes for confirmation, mapping a narrative arc, and writing formatted output.

What does MCP add beyond vault-only automation?

MCP (Model Context Protocol) lets Claude Code connect to external tools in real time, not just the local Obsidian vault. The transcript gives examples: reading task priorities from a task management app, pulling capture items and voice notes from Tana (including tags like YouTube or newsletter), and using those inputs to generate newsletter drafts saved back into the vault.

What security risks come with this kind of agent access, and what mitigations are recommended?

Two risks are emphasized: prompt injection (malicious instructions hidden in web content or text that the agent might follow) and supply-chain attacks (an MCP component could be updated later with malicious code). Mitigations include self-hosting MCP servers on the user's own machine, disabling auto-updates for MCP components, and restricting Claude Code's permissions so it can access only the vault folder, not the rest of the file system.
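Claude Code supports allow/deny permission rules in a project's `.claude/settings.json`; the sketch below shows the general shape of a restrictive policy (the specific rule strings are illustrative, and exact syntax varies by Claude Code version):

```json
{
  "permissions": {
    "allow": ["Read(./**)", "Edit(./**)"],
    "deny": ["Read(~/.ssh/**)", "Bash(curl:*)", "WebFetch"]
  }
}
```

The idea is defense in depth: even if a prompt-injection attempt asks the agent to read credentials or fetch a malicious page, the denied tools refuse before any damage is done.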

Review Questions

  1. How do `CLAUDE.md` and `memory.md` differ in purpose, and why does that distinction matter for output quality?
  2. What mechanisms in this system replace the need for re-explaining context every chat session?
  3. Where do MCP integrations fit in the workflow, and what security controls are necessary when connecting external tools?

Key Points

  1. Generic AI output often stems from context resets, not from prompt quality alone.
  2. Opaque built-in memory can store irrelevant snippets; controllable, structured vault context is more reliable.
  3. Obsidian provides a portable, human-readable local Markdown knowledge base that Claude Code can read and write directly.
  4. `CLAUDE.md` centralizes identity, projects, rules, and architecture; `memory.md` preserves session continuity via key decisions and actions.
  5. Claude Code can convert completed workflows into reusable "skills" (SOP-like commands) that run with slash syntax.
  6. MCP extends automation by letting Claude Code access external tools like task managers and Tana capture/voice notes in real time.
  7. Security requires restricting file access, guarding against prompt injection, and reducing supply-chain risk by controlling MCP updates and sources.

Highlights

The system’s biggest claim is that “context is everything”: persistent vault context beats generic chat memory for consistent, personalized writing.
`CLAUDE.md` and `memory.md` replace black-box memory with explicit, editable Markdown files that Claude Code reads on startup.
Workflows can be promoted into reusable skills, turning multi-step drafting into one-line commands like `/skillname`.
MCP enables end-to-end pipelines that pull from Tana voice notes or task priorities and write finished drafts back into the vault.
Power comes with risk: prompt injection and supply-chain attacks are treated as real threats, not edge cases.

Topics

  • Claude Code
  • Obsidian Second Brain
  • Persistent Context
  • Skills and SOPs
  • MCP Integrations
