
McKinsey Says $1 Trillion In Sales Will Go Through AI Agents. Most Businesses Are Invisible.

5 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Agent-driven commerce depends on whether company systems are agent readable and agent writable, not on chatbot quality alone.

Briefing

AI agents are on track to drive a massive share of commerce—McKinsey projects up to $1 trillion in orchestrated revenue by 2030 in the US retail market—but the real bottleneck isn’t agent intelligence. It’s whether companies’ systems are “agent readable and agent writable,” meaning agents can reliably discover, evaluate, and transact using the underlying transactional and data infrastructure.

For years, businesses built “anti-bot” defenses to protect human experiences and keep automated traffic out. Those same barriers now risk blocking the very customer interactions agents will soon dominate. The shift is already visible: vendors that once tried to lock down access are being forced to reconsider as agent-driven shopping and orchestration become mainstream. The core claim is blunt—personal AI platforms and enterprise agent frameworks only work at scale if the entire ecosystem of company systems can be consumed and acted on by agents, not just through chat interfaces, but through the transactional layer that powers product discovery, pricing, shipping promises, returns, and checkout.

That ecosystem change is hard because it requires deep internal restructuring of data stacks. Adding an MCP server or wrapping an existing API is treated as “low-hanging fruit,” not a full solution. Agents need structured, trustworthy schemas and secure access to the right slices of data. Without clean, consistent data across the stack, agents either skip offers or produce unreliable results—leading to lost sales even when the underlying product is strong. The transcript draws a parallel to earlier personalization efforts at Prime Video: without clean data, customer experiences collapse. With agents, the stakes rise because the agent must operate at scale and with precision.
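As a minimal sketch of what a “structured, trustworthy schema” might look like, the hypothetical offer type below makes shipping and returns promises explicit fields an agent can check, rather than prose in marketing copy. All field names and values here are illustrative assumptions, not any standard the transcript defines.

```python
from dataclasses import dataclass

# Hypothetical sketch: the kind of explicit, machine-checkable offer schema
# agents need. Field names are illustrative, not an established standard.
@dataclass(frozen=True)
class Offer:
    sku: str
    price_usd: float
    in_stock: bool
    ships_in_days: int        # a concrete shipping promise, not marketing copy
    returns_window_days: int  # a returns policy the agent can verify pre-purchase

offer = Offer(sku="BB-XL7", price_usd=89.99, in_stock=True,
              ships_in_days=2, returns_window_days=30)

# An agent can now answer "arrives by Friday, and returnable?" from data alone.
assert offer.ships_in_days <= 3 and offer.returns_window_days >= 14
```

The point of the sketch is that every promise the agent must evaluate is a typed field; if the same facts lived only in free text, the agent would have to guess.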

Several examples illustrate the difficulty. Stripe is cited as an early adopter that shipped an MCP server, but the challenge appears when deeper analytics (like Sigma queries returning large CSVs) can’t simply be poured into an agent context window. The solution requires intermediary storage and secure, queryable database structures—plus careful security and authentication controls at Stripe’s scale. On the other side, SAP is described as announcing an MCP server for commerce cloud while the broader SAP portfolio remains far from “agent readable by default,” implying multi-quarter work for most installations.

The transcript also argues that common executive misconceptions will slow progress. Winning won’t come from optimizing for agent “discovery” the way companies optimize for search rankings; agents evaluate structured data against explicit constraints rather than browsing ranked lists. Complexity isn’t a reason to avoid schemas—agents benefit from structured access because it helps customers optimize decisions they can’t evaluate manually. Trust is framed as a spectrum: agents start with narrow delegated tasks and earn broader permissions over time.

Finally, the most important strategic point is that businesses must encode higher-order intent into data. Humans can ask for vague, emotionally loaded product attributes (“the basketball like the one used in March Madness,” or coffee tied to a specific farm and school). If those meanings live only in marketing copy or tribal knowledge, agents can’t verify them and transactions fail. The prescription: benchmark competitors by attempting agent-mediated transactions, then invest in making data architecture agent-first. The payoff is twofold—agents can transact, and humans benefit from the same clean data through better, more personalized experiences.

Cornell Notes

AI agents are projected to generate enormous commerce revenue, but transactions will only work when companies’ systems are “agent readable and agent writable.” That means agents must be able to discover, evaluate, and transact using structured, trustworthy data—not just chat with a bot. The hard part is internal data-stack change: schemas, secure access, and database-style intermediaries that let agents pull the right slices of information without overloading context windows. Stripe’s MCP approach illustrates both progress and the deeper engineering needed, while SAP’s broader portfolio shows how far many incumbents still are. The strategic takeaway: encode higher-order intent (shipping promises, authenticity, provenance, outcomes) into data so agents can answer and act reliably.

What does “agent readable and agent writable” actually require beyond having an AI chatbot?

It requires that the transactional infrastructure and underlying company data can be consumed and acted on by agents. The transcript emphasizes that it’s not enough for messaging to work (e.g., “message your bot” across platforms). Agents must be able to discover offers, evaluate them against constraints, and complete actions like checkout, refunds, subscriptions, and shipping/returns decisions using structured data that the agent can reliably read and write against.
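The discover → evaluate → transact loop described above can be sketched as a toy in-memory storefront. Everything here is invented for illustration—the function names, catalog fields, and order format are assumptions, not any real commerce API—but it shows the distinction between agent-readable (structured catalog queries) and agent-writable (placing an order through the transactional layer).

```python
# Hypothetical sketch of "agent readable and agent writable": the agent
# discovers offers, evaluates them against constraints, and writes a
# transaction—all via structured data, not a chat interface.
CATALOG = [
    {"sku": "A1", "price": 42.0, "ships_in_days": 2, "in_stock": True},
    {"sku": "B2", "price": 35.0, "ships_in_days": 9, "in_stock": True},
]
ORDERS = []

def discover():  # agent-readable: a structured catalog, not a ranked page
    return [o for o in CATALOG if o["in_stock"]]

def evaluate(offers, max_price, max_ship_days):
    return [o for o in offers
            if o["price"] <= max_price and o["ships_in_days"] <= max_ship_days]

def checkout(sku):  # agent-writable: the transactional layer accepts the action
    ORDERS.append({"sku": sku, "status": "placed"})
    return ORDERS[-1]

viable = evaluate(discover(), max_price=50.0, max_ship_days=3)
order = checkout(viable[0]["sku"])  # only A1 satisfies both constraints
```

Note that B2 is cheaper but is eliminated by the shipping constraint—the agent never “browses” past it; it simply fails the check.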

Why is wrapping an existing API or adding an MCP server not sufficient?

Because real business data often can’t be dumped into an agent context window. The transcript’s Stripe example highlights this: Sigma can return large CSVs with no practical query limit, but loading that output directly into an agent’s context fails. The workaround requires intermediary storage (database/table structures), secure authentication, and careful permissioning so agents can pull the right slices safely.
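The intermediary-storage pattern described here can be sketched with an in-memory SQLite table: a large CSV export is loaded into a queryable structure so the agent retrieves only the narrow answer it needs, never the full file. The CSV contents and column names below are invented for illustration; this is not Stripe’s actual Sigma output or architecture.

```python
import csv
import io
import sqlite3

# Hypothetical sketch of the intermediary-storage pattern: a CSV too large
# for an agent's context window is loaded into a queryable table instead.
big_csv = "charge_id,amount,region\nch_1,120,US\nch_2,80,EU\nch_3,200,US\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE charges (charge_id TEXT, amount INTEGER, region TEXT)")
rows = list(csv.DictReader(io.StringIO(big_csv)))
conn.executemany("INSERT INTO charges VALUES (?, ?, ?)",
                 [(r["charge_id"], int(r["amount"]), r["region"]) for r in rows])

# The agent asks a narrow question; only a tiny result enters its context.
(total_us,) = conn.execute(
    "SELECT SUM(amount) FROM charges WHERE region = 'US'").fetchone()
```

In a production setting the same idea would also need the authentication and permissioning layers the transcript mentions, so the agent can only query the slices it is entitled to see.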

How do anti-bot defenses become a problem in an agent-commerce world?

Anti-bot architecture was built to keep automation out to protect human experiences. The transcript argues that those same fences now block agents from reaching the “most valuable customers.” As agent-mediated shopping grows, companies that keep walls up risk being skipped entirely—agents will bypass offers when delivery windows, shipping costs, returns, or product schemas are unclear.

What misconceptions could derail companies trying to become agent-ready?

Four are called out. (1) Optimizing like search—agents don’t browse ranked lists; they evaluate structured data against explicit constraints. (2) Assuming schemas only fit simple products—complex businesses benefit more because agents help customers optimize across many variables. (3) Treating trust as binary—trust is a spectrum that expands as agents earn permission through narrow delegated tasks. (4) Waiting and seeing—data cleanup and interface enablement take months to quarters, and the market can shift quickly.
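Misconception (1) can be made concrete with a small sketch: a search-style ranking reflects promotion spend, but an agent applies the customer’s hard constraints directly, so rank never enters the decision. The SKUs, prices, and constraint names below are all hypothetical.

```python
# Hypothetical sketch: an agent does not walk a ranked list the way a search
# user does; it filters structured offers against explicit constraints, so a
# heavily promoted offer that fails a constraint is simply eliminated.
ranked_results = [  # ordering reflects ad spend, not fitness for this buyer
    {"sku": "PROMO-1", "price": 99.0, "returns_days": 0},
    {"sku": "PLAIN-2", "price": 79.0, "returns_days": 30},
]

constraints = {"max_price": 90.0, "min_returns_days": 14}

matches = [o for o in ranked_results
           if o["price"] <= constraints["max_price"]
           and o["returns_days"] >= constraints["min_returns_days"]]
# The top-ranked offer is skipped; its position bought it nothing.
```

This is why the transcript argues ad-budget-driven discovery strategies won’t translate: position in a list is invisible to constraint evaluation.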

Why does “higher-order intent” matter more than basic product attributes?

Agents will handle vague human requests that map to real-world meanings not captured by basic metadata. The transcript’s basketball example shows that “basketball” isn’t enough; the agent must find the exact type used in a specific tournament. Similarly, coffee authenticity and social impact details (farm, processing, sourcing, support for a school) must be encoded in agent-readable data rather than living only in marketing copy or packaging.
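One way to picture “encoding higher-order intent” is to move provenance claims out of marketing copy and into verifiable attributes. The sketch below is an assumption-laden illustration—the attribute names, the tournament string, and the check function are all invented—but it shows the difference between an agent that can verify a claim and one that can only hope the copy is accurate.

```python
# Hypothetical sketch: higher-order intent ("the ball used in March Madness")
# encoded as a verifiable attribute instead of prose. All values are invented.
product = {
    "sku": "BBALL-OFFICIAL",
    "category": "basketball",
    "attributes": {
        "official_ball_of": ["NCAA March Madness"],  # tournament provenance
        "size": 7,
    },
}

def satisfies_intent(item, tournament):
    """Agent check: is this the ball actually used in the named tournament?"""
    return tournament in item["attributes"].get("official_ball_of", [])

ok = satisfies_intent(product, "NCAA March Madness")
```

The same pattern would apply to the coffee example: farm, processing method, and school-support claims become attributes an agent can test, so a transaction succeeds instead of stalling on unverifiable prose.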

How does the transcript connect agent readiness to competitive strategy?

It recommends benchmarking: test top competitors and one’s own systems by attempting agent-mediated transactions using Claude or ChatGPT, then measure how far the agent can get and how hard it is to extract data. Companies should identify whether an MCP connector is merely a start or truly enables end-to-end transactions, and whether they can lead if competitors are weak.

Review Questions

  1. What engineering and security steps are implied when an agent needs access to deeper analytics beyond what an MCP wrapper can deliver?
  2. How does agent evaluation differ from search ranking, and why does that change what “winning” looks like?
  3. Give an example of higher-order intent a customer might ask for and explain what data would need to be agent-readable for the agent to transact successfully.

Key Points

  1. Agent-driven commerce depends on whether company systems are agent readable and agent writable, not on chatbot quality alone.

  2. Anti-bot defenses and locked-down product ecosystems can block agents from reaching customers and cause offers to be skipped.

  3. Making systems agent-ready requires internal data-stack restructuring—schemas, secure access, and database-style intermediaries—not just MCP wrappers.

  4. Agents evaluate structured constraints rather than browsing ranked lists, so ad-budget-driven discovery strategies won’t translate directly.

  5. Complex businesses benefit from structured data because agents help customers optimize decisions they can’t easily evaluate manually.

  6. Trust in agent commerce expands gradually through narrow delegated tasks, so companies should design for a trust spectrum rather than a single “buy” moment.

  7. Encoding higher-order intent (provenance, shipping promises, tournament-specific items, social impact) into data is essential for reliable transactions.

Highlights

  • McKinsey’s $1 trillion-by-2030 projection hinges on orchestration by agents, but the transcript argues the real limiter is agent-readable transactional infrastructure.
  • Stripe’s MCP progress still runs into context-window limits when deeper analytics outputs (like large CSVs) can’t be naively loaded—requiring secure intermediary data structures.
  • Agents won’t “browse” search results; they compute answers from structured data against explicit constraints, making clean schemas a competitive advantage.
  • Trust is not binary in agent commerce; it widens as agents earn permission through longer delegated workflows.
  • The hardest work is encoding higher-order intent into data so agents can verify what humans actually mean, not just what humans type.

Topics

Mentioned

  • MCP