
On Building Malleable Software In the Age of AI | Notion After Hours

Notion · 5 min read

Based on Notion's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Malleable software centers on user agency: people should be able to reshape their digital environment as they work rather than adapting to a fixed interface.

Briefing

Malleable software is about letting people reshape their digital tools as they work—so the environment fits the way they think, not the other way around. Across the conversation, the central tension is clear: modern software often locks users into rigid interfaces designed by distant teams, while the physical world naturally supports adjustment. The payoff for getting this right is practical and cultural—people can express themselves, iterate faster, and share work without friction, instead of fighting the constraints of the platform.

Early memories of “hacking” personal computing—changing Mac menu items by editing code and resource forks with ResEdit, using Windows utilities like Resource Hacker, or building custom game experiences—set the tone for why malleability matters. Several participants point to small, everyday tools as proof that flexibility doesn’t have to be flashy: screenshot apps like CleanShot, drawing tools like tldraw, and lightweight workflows that turn sketches into something actionable. Others connect malleability to mass adoption, comparing Instagram Stories’ editor to a kind of mainstream “malleable” creation surface—where sharing is built in and the loop from making to distributing is immediate.

The discussion also highlights what gets lost as software abstractions multiply. As standards rise and products chase higher production value, it becomes easier to “vibe code” a polished output without building a real mental model of how it was produced. That creates an exit problem: once the glossy first step is done, users can’t reliably understand or control what happened. A related frustration is interoperability: common protocols can reduce everything to a lowest-common-denominator baseline. One proposed way forward is layered compatibility—start with something broadly readable (like plain text), then optionally add richer annotations that only some tools can interpret, preserving both sharing and expressive power.
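
To make the layered-compatibility idea concrete, here is a minimal sketch (not a format described in the conversation) of a note type whose baseline is plain text that every tool can read, plus an optional annotation layer that only richer tools interpret. All type and function names are illustrative assumptions.

```typescript
// A minimal sketch of layered compatibility: every note has a plain-text
// baseline that any tool can read, plus optional annotations that only
// richer tools understand. All names here are illustrative.

interface Annotation {
  kind: string;                 // e.g. "deadline", "link", "highlight"
  start: number;                // character offsets into the baseline text
  end: number;
  data?: Record<string, unknown>;
}

interface LayeredNote {
  text: string;                 // the lowest-common-denominator layer
  annotations?: Annotation[];   // the richer, optional layer
}

// A simple tool only needs the baseline and can ignore everything else.
function exportPlainText(note: LayeredNote): string {
  return note.text;
}

// A richer tool interprets the annotations it knows about and skips the rest.
function findDeadlines(note: LayeredNote): string[] {
  return (note.annotations ?? [])
    .filter((a) => a.kind === "deadline")
    .map((a) => note.text.slice(a.start, a.end));
}

const note: LayeredNote = {
  text: "Ship the draft by Friday and schedule a review.",
  annotations: [{ kind: "deadline", start: 18, end: 24 }],
};

console.log(exportPlainText(note)); // works in every tool
console.log(findDeadlines(note));   // ["Friday"], only in tools that know "deadline"
```

The design choice mirrors the proposal above: a tool that understands nothing beyond plain text still gets a complete, readable note, so sharing never breaks, while annotation-aware tools preserve the extra structure.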

A recurring theme is the need for a “ladder of fidelity,” where users can move smoothly between simple representations (to-do lists, basic notes) and more structured, powerful data models without hitting a hard ceiling. AI is framed as a bridge across that gap: it can take messy, unstructured input and automatically structure it into forms that computers can work with—reducing the need for users to pre-plan their filing system. That aligns with a broader philosophy of tool design: successful platforms often begin with familiar primitives (spreadsheets, for example) rather than trying to solve everything at once.
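
A minimal sketch of one rung on that ladder: freeform note lines are promoted into typed task records without the user designing a schema up front. In the conversation this inference is AI's job; a simple heuristic stands in for the model here, and every name is an assumption for illustration.

```typescript
// A sketch of the "ladder of fidelity": freeform notes (low fidelity) are
// promoted into typed task records (higher fidelity) without the user
// pre-planning a schema. In practice an AI model would infer the structure;
// a simple heuristic stands in for it here. All names are illustrative.

interface Task {
  title: string;
  done: boolean;
  tags: string[];
}

// Promote messy lines into structured tasks: checkbox syntax marks a task,
// #words become tags, everything else survives untouched as plain notes.
function structureNotes(raw: string): { tasks: Task[]; notes: string[] } {
  const tasks: Task[] = [];
  const notes: string[] = [];
  for (const line of raw.split("\n").map((l) => l.trim()).filter(Boolean)) {
    const match = line.match(/^[-*]\s*\[([ xX])\]\s*(.*)$/);
    if (match) {
      const title = match[2].replace(/#\w+/g, "").trim();
      const tags = [...match[2].matchAll(/#(\w+)/g)].map((m) => m[1]);
      tasks.push({ title, done: match[1] !== " ", tags });
    } else {
      notes.push(line); // no hard ceiling: unstructured text is kept as-is
    }
  }
  return { tasks, notes };
}

const scratch = `
Ideas from the call
- [ ] draft the interop doc #writing
- [x] send sketch to Max
`;
console.log(structureNotes(scratch));
// tasks: [{ title: "draft the interop doc", done: false, tags: ["writing"] }, ...]
// notes: ["Ideas from the call"]
```

The point of the sketch is the shape of the transition, not the parser: users write at whatever fidelity comes naturally, and structure is recovered afterward rather than demanded up front.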

The conversation ends by tying malleability to responsibility and learning. Restrictive safeguards can be justified when millions of users accidentally delete irreplaceable data, but over-restriction risks a self-fulfilling cycle where people never learn how to use tools well. Participants argue that users should be treated as capable learners—because when tools assume incompetence, the software ecosystem can drift toward safer but less empowering experiences. Notion’s block-based, hierarchical editing model is presented as a concrete example of a more expressive structure, one that can ramp from documents and creativity into deeper automation when users are ready. AI then becomes less about replacing authorship and more about meeting people where their information already lives—turning personal, cared-about data into something easier to query, transform, and build upon.

Cornell Notes

Malleable software is framed as the ability for people to edit their digital environment while they work, so tools match how users think rather than forcing users into a fixed workflow. The conversation links malleability to everyday creation tools, fast sharing loops, and richer data models that don’t cap out too early. Interoperability is treated as a design tension: common protocols can flatten expressiveness, so layered compatibility is proposed—baseline formats that everyone can read, plus optional richer layers for advanced tools. AI is positioned as a bridge between messy human expression and structured computer-ready representations, helping users move up a “ladder of fidelity” without doing all the upfront structuring themselves. The overall message: empowering users to learn and reshape tools is both a product philosophy and a long-term ecosystem responsibility.

What does “malleable software” mean in practical terms, beyond a general buzzword?

It’s the idea that the software environment should be editable by the people using it. Instead of developers dictating every detail of the workspace, users should be able to reshape tools as they work—like changing how content is created, organized, and shared. The conversation contrasts this with software that “caps out” quickly, where users can reach a polished surface but can’t understand or control what’s underneath. Block-based editing and hierarchical structures are offered as one concrete example of a malleable model that supports different kinds of content and relationships.
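
As a rough sketch of that general idea (not Notion's actual schema), a block-based document can be modeled as a tree of typed blocks, where heterogeneous content types coexist in one hierarchy and any tool that understands the tree shape can traverse or transform it. The names below are hypothetical.

```typescript
// A rough sketch of a block-based, hierarchical document model: every piece
// of content is a typed block, and blocks nest to form a tree. This
// illustrates the general idea, not Notion's actual data model.

type BlockType = "page" | "heading" | "paragraph" | "todo" | "database";

interface Block {
  id: string;
  type: BlockType;
  text: string;
  properties?: Record<string, unknown>; // richer types carry extra structure
  children: Block[];
}

// Uniform traversal is what makes the model malleable: any tool that
// understands "a tree of blocks" can walk, transform, or extend a document.
function walk(block: Block, visit: (b: Block, depth: number) => void, depth = 0): void {
  visit(block, depth);
  for (const child of block.children) walk(child, visit, depth + 1);
}

const doc: Block = {
  id: "1", type: "page", text: "Project notes", children: [
    { id: "2", type: "heading", text: "Next steps", children: [] },
    {
      id: "3", type: "todo", text: "Write the brief",
      properties: { done: false }, children: [],
    },
  ],
};

walk(doc, (b, depth) => console.log("  ".repeat(depth) + `${b.type}: ${b.text}`));
// page: Project notes
//   heading: Next steps
//   todo: Write the brief
```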

Why do early “hacking” stories matter to the argument?

They illustrate that users naturally look for ways to bend tools to their intent. Changing a Mac’s menu items by editing code and resource forks with ResEdit, using Resource Hacker on Windows school software, or building custom game experiences all show curiosity about how software is constructed. Those stories support the broader claim that people want more than a fixed interface—they want agency over the environment, even if it starts with small, personal modifications.

How does the discussion handle the interoperability problem—sharing without forcing everyone into the same constraints?

It flags a core tension: common protocols can tether systems to a lowest-common-denominator baseline. A proposed solution is layered interoperability. Start with something broadly compatible (like plain text), then optionally add richer annotations that only some tools can read. That approach keeps baseline sharing intact while allowing advanced tools to preserve extra structure and meaning.

What’s the critique of “vibe coding,” and how does it connect to malleability?

The critique is that it’s easy to reach a glossy first output—often by prompting or using automation—without building a mental model of how the result was produced. The “exit” moment arrives when users can’t reliably reason about or modify the system afterward. That’s framed as a failure of true malleability: the surface looks customizable, but the underlying understanding and control don’t transfer.

How does AI fit into the “ladder of fidelity” idea?

AI is presented as a way to smooth transitions between low-fidelity and high-fidelity representations. Users can start with messy, unstructured input (notes, drafts, rough ideas) and then have AI automatically structure it into computer-interpretable forms. This reduces the need for users to pre-plan rigid schemas, while still enabling more powerful organization and automation once the user is ready.

Why do safeguards and restrictions create a learning problem?

The conversation argues that over-restriction can become self-fulfilling. If tools assume users can’t learn, they’ll be kept behind guardrails that prevent experimentation. That limits skill growth, which then justifies even more restriction. The alternative view treats users as capable learners and emphasizes toolmakers’ responsibility to help people master tools rather than permanently limiting them.

Review Questions

  1. How does layered interoperability (baseline formats plus optional richer layers) preserve both sharing and expressiveness?
  2. What does “ladder of fidelity” mean, and how does AI reduce the friction between simple and structured representations?
  3. In what ways can high production value and automation undermine users’ ability to understand and control what they build?

Key Points

  1. Malleable software centers on user agency: people should be able to reshape their digital environment as they work rather than adapting to a fixed interface.
  2. Small creation tools (drawing, screenshotting, quick sketch-to-action workflows) demonstrate malleability without requiring a full platform overhaul.
  3. Interoperability doesn’t have to mean lowest-common-denominator design; layered compatibility can keep baseline sharing while enabling richer optional features.
  4. Software often “caps out” too early, leaving users with polished outputs they can’t reason about—an issue tied to automation that bypasses mental models.
  5. A “ladder of fidelity” approach supports moving between simple representations and more structured data models without hard breaks.
  6. AI is positioned as a bridge between messy human expression and structured, computer-interpretable representations, reducing upfront structuring work.
  7. Overly restrictive safeguards can create a downward learning spiral if users never get room to experiment and build competence.

Highlights

  • Malleability is framed as editing the environment while working—so tools fit users’ thinking, not the other way around.
  • Layered interoperability is proposed as a practical compromise: plain-text compatibility as a baseline, with richer annotations that only some tools can interpret.
  • “Vibe coding” can produce glossy results without building a mental model, creating an “exit” where users can’t control what they made.
  • AI is cast as a way to smooth transitions from unstructured input to structured representations, enabling a ladder from simple notes to powerful data models.
  • The conversation argues that treating users as capable learners is a responsibility; otherwise safeguards can become a self-fulfilling cycle of incompetence.

Topics

  • Malleable Software
  • Interoperability
  • Block-Based Editing
  • AI Structuring
  • User Agency

Mentioned

  • Mary
  • Jeffrey
  • Slim
  • Max
  • Leonard Bernstein