Code Evolved More in 60 Years than Writing did in 5,000—Now AI is About to Rewrite Writing
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Briefing
Writing on computers has barely changed for decades: it’s mostly “bolted on” to systems built for code. Code, by contrast, was designed to work with computers from the start, so it naturally evolved alongside better hardware and software practices—testing, staging, CI/CD, Git workflows, and DevOps. That difference matters now because machines can finally handle natural language with enough depth and semantic range to make writing behave more like a computable, evolvable system rather than a static document.
The core shift is from voice and chat interfaces—useful, but largely a re-skin of older human habits—toward AI-native writing tools that treat text as something that can be engineered, verified, and deployed. The argument isn’t that chatbots will instantly replace writing. Instead, AI should add “optionality” and leverage: generating multiple variants quickly, making those variants easy to manage, and enabling workflows that resemble software development.
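The "optionality" idea can be sketched as a small variant manager: one core message, many labeled renditions that are easy to generate and track. This is a minimal illustration, not the transcript's implementation; the transforms here are stand-in string functions where a real tool would make model calls.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Variant:
    label: str  # e.g. "headline", "summary"
    text: str

def generate_variants(core_message: str,
                      transforms: dict[str, Callable[[str], str]]) -> list[Variant]:
    """Produce one labeled variant per transform of a core message."""
    return [Variant(label, fn(core_message)) for label, fn in transforms.items()]

# Stand-in transforms; in practice each would be a model call.
transforms = {
    "headline": lambda t: t.upper(),
    "summary":  lambda t: t.split(".")[0] + ".",
}

variants = generate_variants(
    "AI-native tools treat text as engineerable. They add leverage.",
    transforms,
)
```

Because variants are plain data with labels, they can be diffed, versioned, and regenerated on demand, which is the leverage the transcript describes.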
A practical example is the proposal to think of documents like production code. Instead of a simple draft/final pipeline, writing would include distinct steps for clarity and coherence checks, fact verification, and staging before publication—mirroring development environments, testing environments, and merges in software teams. The “presentation layer” could still turn the final output into a polished report, but the underlying process would be more like engineering.
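The staged pipeline above can be sketched as gated promotions, the way CI gates a merge. This is a toy model under assumed stage names; the `clarity_check` heuristic and always-passing `fact_check` are placeholders for real verification steps.

```python
from dataclasses import dataclass

@dataclass
class Document:
    body: str
    stage: str = "draft"  # draft -> verified -> staged -> published

def clarity_check(doc: Document) -> bool:
    # Placeholder heuristic: every sentence stays under 30 words.
    return all(len(s.split()) < 30 for s in doc.body.split(".") if s.strip())

def fact_check(doc: Document) -> bool:
    # Stand-in for a real verification step (e.g. a search-backed model).
    return True

def run_pipeline(doc: Document) -> Document:
    """Promote a document through gates; stop at the first failing check."""
    gates = [
        ("verified", clarity_check),
        ("staged", fact_check),
        ("published", lambda d: True),  # final "merge" step
    ]
    for stage, gate in gates:
        if not gate(doc):
            break
        doc.stage = stage
    return doc
```

A document that fails a gate simply stays at its current stage, mirroring how a failing test blocks a deploy.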
That engineering mindset also enables multi-channel deployment. If the same core message can be tuned and released to different audiences—executives, marketers, customer success—then updates become a controlled rollout rather than a one-off rewrite. The transcript frames this as newly possible because large language models made it feasible for machines to grasp the complexity of text at scale, something earlier AI systems couldn’t do reliably.
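The multi-channel rollout can be pictured as releasing per-audience renditions in waves. A hypothetical sketch with invented audience names and messages, to show the shape of a controlled rollout rather than any real system:

```python
# One core message, tuned per audience (strings are illustrative).
CORE = "Q3 focus: reduce onboarding time."

tuned = {
    "executives": f"Summary for leadership: {CORE}",
    "marketing": f"Campaign brief: {CORE}",
    "customer_success": f"Playbook update: {CORE}",
}

def rollout(renditions: dict[str, str], channels: list[str]) -> list[tuple[str, str]]:
    """Release renditions only to the channels in this rollout wave."""
    return [(ch, renditions[ch]) for ch in channels if ch in renditions]

# Staged rollout: leadership first, wider channels in later waves.
wave_one = rollout(tuned, ["executives"])
```

Updating `CORE` and re-running the waves is what makes an edit a controlled release instead of a one-off rewrite.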
The speaker then describes a manual version of this pipeline already being used by professionals: moving from one model to another across stages. The workflow starts with drafting (e.g., using “o3” as a conceptual thinker, escalating to “o3 Pro” for harder problems), then moves to verification (using “Perplexity” to test claims), and finishes with polishing (using “Sonnet 4” via Claude). The process is likened to a dev pipeline—development, testing, staging—where writers effectively “merge” structured drafts into something ready for production.
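That manual chain can be expressed as three composed steps. The function bodies below are hypothetical stand-ins for the model calls the transcript names (a reasoning model such as o3 for drafting, a search-backed tool like Perplexity for claim checks, Claude's Sonnet 4 for polish); only the draft-verify-polish structure comes from the source.

```python
def draft_with_reasoning_model(prompt: str) -> str:
    # Stand-in for a drafting call to a reasoning model.
    return f"DRAFT: {prompt}"

def verify_claims(text: str) -> tuple[str, list[str]]:
    # Stand-in: a real step would query a search-backed model
    # and return any claims that failed verification.
    return text, []

def polish(text: str) -> str:
    # Stand-in for a stylistic pass by a polishing model.
    return text.replace("DRAFT:", "FINAL:")

def write_document(prompt: str) -> str:
    """Chain draft -> verify -> polish, merging only if verification passes."""
    text = draft_with_reasoning_model(prompt)
    text, issues = verify_claims(text)
    if issues:
        raise ValueError(f"unverified claims: {issues}")
    return polish(text)
```

The failing-verification branch is the analogue of a red test blocking a merge: the draft never reaches the polishing stage.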
The broader takeaway is that knowledge work already resembles software development: it cycles through drafting, review, testing, and publication. If AI-native tooling can make that loop easier—first with smaller compute and later at larger scale—writing could evolve the way code did. The transcript positions this as a major shift on the horizon, with AI-native platforms likely to reshape how documents are created, verified, and distributed across organizations.
Cornell Notes
Natural language writing has long been “bolted on” to computer systems, while code evolved alongside computers because it was designed to run, test, and deploy. With large language models, machines can now handle the semantic complexity of text well enough to make writing more computable and evolvable. The key prediction is that AI-native writing tools will add optionality (like easy multi-variant generation) and treat documents like production code—complete with development, verification, staging, and deployment steps. Rather than replacing writing with a single chatbot, the likely shift is toward workflows that mirror DevOps and software pipelines, including multi-audience rollout of a core message. Professionals are already simulating this manually by chaining models for drafting, fact-checking, and polishing.
- Why does code evolve faster than natural language when both are used with computers?
- What makes large language models a turning point for writing workflows?
- What does "AI-native writing" mean if it's not just a chatbot?
- How would documents resemble production code in this framework?
- What is the manual workflow example, and how does it map to a dev pipeline?
- How does multi-audience deployment change document work?
Review Questions
- How do DevOps-style practices (testing, staging, CI/CD) translate into a document-writing workflow?
- Why does the transcript argue that voice interfaces are not a fundamentally new innovation?
- What are the risks or tradeoffs of thinking of writing as “production code” rather than a linear draft-to-final process?
Key Points
1. Code evolved quickly because it was designed to work with computers, enabling practices like testing, staging, and CI/CD that naturally scale with compute.
2. Natural language writing has long been “bolted on” to computer systems, so it hasn’t benefited from the same compute-native evolution.
3. Large language models make it feasible for machines to handle the semantic complexity of text, enabling more structured and verifiable writing workflows.
4. AI-native writing is expected to add optionality—such as easy multi-variant generation—rather than simply replacing writing with a single chatbot output.
5. Treating documents like production code implies distinct stages: drafting, verification (fact-checking), staging, and deployment.
6. Multi-channel document rollout should become more like code deployment, letting teams tune a core message for different audiences.
7. Professionals are already simulating an AI-native pipeline by chaining models for drafting, testing, and polishing.