
Any Model. Any App. Build Your AI OS to Work Everywhere.

5 min read

Based on Linking Your Thinking with Nick Milo's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Build an AI OS from portable plain-text files so workflows survive API failures, app discontinuations, and model deprecations.

Briefing

The central idea is to build an “AI OS” that stays yours even when AI tools change—by keeping identity, knowledge structure, and repeatable processes in plain-text files instead of inside any single model or app. The motivation is practical: when an API breaks, a vendor pivots, or a favorite model gets deprecated, workflows built on someone else’s infrastructure can collapse. File-based systems avoid that lock-in by making the core of the workflow portable and durable.

The approach starts with a layered architecture. At the center sits an “ideaverse”: a personal universe of notes, ideas, connections, and projects stored as simple text files that can be opened and edited in any app that reads plain text. The ideaverse is organized into three sections—Atlas for knowledge, Calendar for time-related notes, and Efforts (ACE) for active projects. The more useful and well-structured this core becomes, the more capable any AI becomes when it works with it.
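As a rough sketch, an ideaverse using the ACE structure might look like the tree below (the specific folder and file names are illustrative assumptions, not taken from the video):

```
ideaverse/
├── Atlas/                 # knowledge: evergreen notes and maps
│   └── Maps/
│       ├── me.md          # portable identity file
│       └── vault-map.md   # manual for the vault's structure
├── Calendar/              # time-based notes: daily notes, reviews
│   └── 2024-06-01.md
└── Efforts/               # active projects
    └── example-project/
```

Because every item is a plain-text file or folder, the same tree opens identically in any editor or AI tool that reads text.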

Layer two is the connective tissue: “maps and manuals,” a small set of core files that let AI navigate the ideaverse efficiently. Without maps, pointing an AI at a large vault can be slow and token-heavy because the system has to scan too much context. With maps, AI can jump to relevant files, skip the rest, and build context quickly. These maps are designed to be tool-agnostic translation layers—so the same underlying knowledge can be handed to different AI tools without rebuilding everything.

Two core map files anchor the system. The first is “me.md,” a portable identity file that answers “who am I?” It includes instructions about how the user wants AI to work: first-name addressing, preferred tone (“casual,” “collaborator” rather than “tool”), working rules, and pointers to other map files via clear file paths. The second is the “vault map,” a master table of contents and manual that explains how the ideaverse is structured—what’s in Atlas, what’s in Calendar, what lives in Efforts, and how note types should be created, named, and placed. Keeping identity separate from the vault structure prevents constant rewriting when projects change.
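A minimal me.md might read like the following sketch (the headings, names, and paths here are illustrative assumptions, not contents shown in the video):

```markdown
# me.md — portable identity file

## Who I am
- Name: Alex (address me by first name)

## How to work with me
- Tone: casual; act as a collaborator, not a tool
- Ask before creating, moving, or deleting files

## Other maps
- Vault structure: Atlas/Maps/vault-map.md
- Skills: Atlas/Maps/skills/
```

Note that the file describes only the person and their preferences; everything about folder structure lives in the vault map, which is what keeps the two from needing rewrites in lockstep.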

A third layer rounds out the system: “skills,” which are also plain-text markdown files. Skills document repeatable processes—like how to generate a morning briefing from multiple sources or how to structure daily notes—so any AI can follow the same workflow. Skills are treated as mini-maps of procedure rather than app-specific tools, which makes them portable across AI environments.
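A skill file is just procedural markdown. A hypothetical morning-briefing skill, with all inputs and steps invented for illustration, might look like:

```markdown
# Skill: Morning Briefing

## Inputs
- Today's daily note in Calendar/
- Any Efforts/ notes modified in the last 24 hours

## Steps
1. Summarize today's daily note in three bullets.
2. List open tasks from active Efforts.
3. Flag anything due within 48 hours.

## Output
Append the briefing under a "## Briefing" heading in today's daily note.
```

Any AI that can read text can follow these steps, which is what makes the skill portable rather than app-specific.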

The transcript also emphasizes maintenance and privacy. Periodically, users should “garden” their maps by checking whether me.md and the vault map have become bloated and adjusting as the ideaverse evolves. Sharing with AI tools may require trusting some third-party layer for up-to-date capabilities, but the file-based design aims to make a future shift to local or open-source models straightforward. Finally, it recommends marking AI-generated content clearly (tags, emojis, or folders), backing up everything, and keeping deeply personal material—like journals—outside AI access when needed.

Cornell Notes

The transcript lays out a “file-based AI OS” designed to prevent lock-in when AI tools, APIs, or models change. The system centers on an ideaverse—plain-text notes organized into Atlas (knowledge), Calendar (time), and Efforts/ACE (projects). Two key map files connect that ideaverse to any AI: me.md (portable identity and interaction preferences) and a vault map (a manual for navigating and creating notes in the ideaverse). A third layer, skills, stores repeatable processes as markdown documents so any AI can execute workflows consistently. This matters because it keeps your knowledge structure and instructions portable, enabling faster context building and easier migration to new or local AI tools.

Why does “file over AI” matter more than “file over app” in a world of changing models and APIs?

“File over app” means notes remain accessible if an app disappears because they’re plain text. “File over AI” goes further: it treats the AI workflow itself—identity instructions, navigation structure, and repeatable processes—as portable files. That way, if an AI tool goes offline, an API breaks, or a model gets deprecated, the core instructions and knowledge structure still exist in your folders and can be handed to a different AI tool without rebuilding everything.

What are the three layers of the proposed AI OS, and what role does each play?

Layer 1 is the ideaverse: the user's knowledge and projects stored as simple text files, organized into Atlas, Calendar, and Efforts (ACE). Layer 2 is maps and manuals: a small set of core files that help AI navigate the ideaverse efficiently instead of scanning a huge vault. Layer 3 is tools: the apps and models used to work with the files (e.g., Obsidian, Claude Cowork, NotebookLM, OpenClaw, Claude Code, Gemini). Layer 3 changes often; Layers 1 and 2 are meant to stay stable.

How do the two core map files—me.md and the vault map—reduce friction when switching AI tools?

me.md is a portable identity file that tells any AI who the user is and how to interact (tone, addressing style, collaboration preferences, and rules). It also uses file paths to point to other map files. The vault map is a manual for the ideaverse’s structure: it explains what’s in Atlas, Calendar, and Efforts, and how AI should create notes (note types, naming conventions, and where files should go). Because these are plain markdown files, they can be reused across different AI tools without rewriting identity or structure each time.
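Sketched as a markdown manual, a vault map could look like this (the section names and conventions are illustrative assumptions, though the tagging ideas echo the video's recommendations):

```markdown
# Vault Map

## Structure
- Atlas/ — evergreen knowledge notes and maps of content
- Calendar/ — daily notes named YYYY-MM-DD.md
- Efforts/ — one folder per active project

## Note conventions
- New knowledge notes go in Atlas/, titled as a claim or concept
- AI-generated notes are tagged #ai so they are clearly marked
- Calendar/journals/ is private: do not read or write there
```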

What do maps do for AI performance when dealing with large note collections?

Maps act like an index and navigation plan. Without them, an AI pointed at a large vault (e.g., tens of thousands of interconnected notes) struggles to build context efficiently because it must scan too much, costing tokens and time. With maps, AI can identify which files are relevant, isolate them, and skip the rest—enabling faster, more targeted context building.

How are “skills” different from app-specific tools, and why are they stored as markdown files?

Skills are markdown documents that describe repeatable processes (e.g., how to compile a holistic morning briefing from multiple sources, or how to structure daily notes). They’re not tied to a specific app’s workflow logic. Because they’re plain text, they travel with the user and can be read by any AI tool that accepts text, letting different models follow the same procedures consistently.

What privacy and future-proofing strategy does the system recommend?

Using frontier models and cloud tools may require trusting some third-party layer with data, especially for up-to-date capabilities. The file-based design aims to make migration easier: when local or open-source models become viable, the same ideaverse and maps can be used with minimal restructuring. It also recommends being intentional about what AI can access, keeping deeply personal areas like journals outside AI access when appropriate, backing up files, and clearly marking AI-generated content (tags, emojis, or folders).

Review Questions

  1. If you had to switch from one AI tool to another tomorrow, which two files in this system would you rely on first, and what does each file accomplish?
  2. How do maps reduce token usage and time when working with a very large note vault?
  3. Why are “skills” treated as markdown documentation rather than as part of the tool layer?

Key Points

  1. Build an AI OS from portable plain-text files so workflows survive API failures, app discontinuations, and model deprecations.

  2. Organize the ideaverse into Atlas (knowledge), Calendar (time), and Efforts/ACE (projects) so AI can work with a stable structure.

  3. Create me.md to define portable identity and interaction preferences, including clear file-path pointers to other map files.

  4. Create a vault map as a master manual for navigating and creating notes in the ideaverse, including note types, naming, and placement rules.

  5. Store repeatable workflows as markdown “skills” so any AI tool can execute the same processes consistently.

  6. Maintain the system like a garden: periodically review me.md and the vault map for bloat and update them as priorities and structures change.

  7. Protect privacy by limiting what AI can access, backing up files, marking AI-generated content clearly, and preparing for a shift toward local/open-source models when feasible.

Highlights

  • The system’s durability comes from treating identity, navigation, and procedures as plain-text files—so switching AI tools doesn’t erase the workflow.
  • Maps and manuals are framed as a performance upgrade: they let AI jump to relevant notes instead of scanning massive vaults.
  • me.md and the vault map separate “who I am” from “how my ideaverse is structured,” preventing constant rewrites when projects change.
  • Skills are mini-maps of process stored as markdown, making repeatable workflows portable across different AI environments.

Topics

  • AI OS
  • File Portability
  • Obsidian
  • Context Navigation
  • Prompt Identity