Any Model. Any App. Build Your AI OS to Work Everywhere.
Based on Linking Your Thinking with Nick Milo's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Build an AI OS from portable plain-text files so workflows survive API failures, app discontinuations, and model deprecations.
Briefing
The central idea is to build an “AI OS” that stays yours even when AI tools change—by keeping identity, knowledge structure, and repeatable processes in plain-text files instead of inside any single model or app. The motivation is practical: when an API breaks, a vendor pivots, or a favorite model gets deprecated, workflows built on someone else’s infrastructure can collapse. File-based systems avoid that lock-in by making the core of the workflow portable and durable.
The approach starts with a layered architecture. At the center sits an “ideaverse”: a personal universe of notes, ideas, connections, and projects stored as simple text files that can be opened and edited in any app that reads plain text. The ideaverse is organized into three sections: Atlas for knowledge, Calendar for time-related notes, and Efforts for active projects (together abbreviated ACE). The more useful and well-structured this core becomes, the more capable any AI becomes when it works with it.
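A minimal ideaverse along these lines might be laid out as in the sketch below. The folder and file names are illustrative assumptions, not paths prescribed by the transcript:

```text
ideaverse/
├── me.md            # portable identity file (layer two)
├── vault-map.md     # master table of contents and manual
├── Atlas/           # knowledge notes and maps of content
├── Calendar/        # time-related notes (daily, weekly)
├── Efforts/         # one folder per active project
└── Skills/          # repeatable processes as markdown (layer three)
```

Because everything is a plain-text file in an ordinary folder, any editor or AI tool that can read text can work with the whole system.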
Layer two is the connective tissue: “maps and manuals,” a small set of core files that let AI navigate the ideaverse efficiently. Without maps, pointing an AI at a large vault can be slow and token-heavy because the system has to scan too much context. With maps, AI can jump to relevant files, skip the rest, and build context quickly. These maps are designed to be tool-agnostic translation layers—so the same underlying knowledge can be handed to different AI tools without rebuilding everything.
Two core map files anchor the system. The first is “me.md,” a portable identity file that answers “who am I?” It includes instructions about how the user wants AI to work: first-name addressing, preferred tone (“casual,” “collaborator” rather than “tool”), working rules, and pointers to other map files via clear file paths. The second is the “vault map,” a master table of contents and manual that explains how the ideaverse is structured—what’s in Atlas, what’s in Calendar, what lives in Efforts, and how note types should be created, named, and placed. Keeping identity separate from the vault structure prevents constant rewriting when projects change.
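As an illustration, the two map files might look like the sketches below. All specific names, paths, and rules here are hypothetical examples of the pattern described, not content from the transcript:

```markdown
<!-- me.md (illustrative sketch): portable identity file -->
# Me

- Address me by my first name.
- Tone: casual; act as a collaborator, not a tool.
- Before creating or moving notes, consult the vault map: `vault-map.md`
- Never read or edit files under `Journal/` (private).

<!-- vault-map.md (illustrative sketch): master table of contents -->
# Vault Map

- `Atlas/`: knowledge notes and maps of content
- `Calendar/`: daily notes, named `YYYY-MM-DD.md`
- `Efforts/`: one folder per active project

New notes get lowercase, hyphenated file names and are placed by type as above.
```

Keeping identity (me.md) and structure (vault map) in separate files means a change to one rarely forces a rewrite of the other.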
A third layer rounds out the system: “skills,” which are also plain-text markdown files. Skills document repeatable processes—like how to generate a morning briefing from multiple sources or how to structure daily notes—so any AI can follow the same workflow. Skills are treated as mini-maps of procedure rather than app-specific tools, which makes them portable across AI environments.
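A skill file in this spirit could be as small as the sketch below. The briefing sources and steps are hypothetical examples, not workflows from the transcript:

```markdown
<!-- skills/morning-briefing.md (illustrative sketch) -->
# Skill: Morning Briefing

When asked for a morning briefing:

1. Read today's daily note in `Calendar/` and list any open tasks.
2. Scan `Efforts/` for projects touched in the last seven days.
3. Summarize both into three bullets: tasks, active projects, one suggestion.
4. Append the summary to today's daily note under a `## Briefing` heading.
```

Because the procedure lives in the file rather than in any one tool's settings, a different AI assistant can pick up the same skill unchanged.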
The transcript also emphasizes maintenance and privacy. Periodically, users should “garden” their maps: check whether me.md and the vault map have become bloated and adjust them as the ideaverse evolves. Using cloud-based AI tools may mean trusting a third party for state-of-the-art capabilities today, but the file-based design aims to make a future shift to local or open-source models straightforward. Finally, it recommends marking AI-generated content clearly (tags, emojis, or folders), backing up everything, and keeping deeply personal material, like journals, outside AI access when needed.
Cornell Notes
The transcript lays out a “file-based AI OS” designed to prevent lock-in when AI tools, APIs, or models change. The system centers on an ideaverse—plain-text notes organized into Atlas (knowledge), Calendar (time), and Efforts (projects), together abbreviated ACE. Two key map files connect that ideaverse to any AI: me.md (portable identity and interaction preferences) and a vault map (a manual for navigating and creating notes in the ideaverse). A third layer, skills, stores repeatable processes as markdown documents so any AI can execute workflows consistently. This matters because it keeps your knowledge structure and instructions portable, enabling faster context building and easier migration to new or local AI tools.
Why does “file over AI” matter more than “file over app” in a world of changing models and APIs?
What are the three layers of the proposed AI OS, and what role does each play?
How do the two core map files—me.md and the vault map—reduce friction when switching AI tools?
What do “maps” do for AI performance when dealing with large note collections?
How are “skills” different from app-specific tools, and why are they stored as markdown files?
What privacy and future-proofing strategy does the system recommend?
Review Questions
- If you had to switch from one AI tool to another tomorrow, which two files in this system would you rely on first, and what does each file accomplish?
- How do maps reduce token usage and time when working with a very large note vault?
- Why are “skills” treated as markdown documentation rather than as part of the tool layer?
Key Points
1. Build an AI OS from portable plain-text files so workflows survive API failures, app discontinuations, and model deprecations.
2. Organize the ideaverse into Atlas (knowledge), Calendar (time), and Efforts (projects), together abbreviated ACE, so AI can work with a stable structure.
3. Create me.md to define portable identity and interaction preferences, including clear file-path pointers to other map files.
4. Create a vault map as a master manual for navigating and creating notes in the ideaverse, including note types, naming, and placement rules.
5. Store repeatable workflows as markdown “skills” so any AI tool can execute the same processes consistently.
6. Maintain the system like a garden: periodically review me.md and the vault map for bloat and update them as priorities and structures change.
7. Protect privacy by limiting what AI can access, backing up files, marking AI-generated content clearly, and preparing for a shift toward local/open-source models when feasible.