
NextJS Is Hard To Self Host

The PrimeTime · 5 min read

Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Next.js self-hosting is difficult when features depend on infrastructure behaviors (edge/CDN orchestration and caching semantics), not just framework code.

Briefing

Self-hosting Next.js is hard not because it’s “maliciously” designed to resist Docker, but because Next.js blurs the line between framework features and tightly coupled infrastructure behaviors—especially when performance-critical features depend on edge/CDN orchestration and cache semantics. The core point is that Next.js can solve certain problems inside the framework or by leveraging infrastructure, and Vercel’s platform is uniquely positioned to make the best-performing option work end-to-end. That creates a gap for teams trying to replicate the same behavior elsewhere.

A concrete example is PPR (Partial Prerendering). The static shell of a page (headers, outer layout) can be compiled ahead of time and pushed to the edge so it appears extremely fast, while the rest of the page is streamed later. Achieving that requires more than "run Next.js in a container." It depends on specific CDN/edge capabilities, React Suspense integration, and, critically, deep coordination between what gets cached at the edge and what gets rendered at origin. In the most advanced setup described, a single request can be handled at the edge to flush the pre-rendered portion, then later the same request reaches origin for the remainder. That kind of single-request edge+origin orchestration is difficult to reproduce with generic infrastructure.
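As a rough sketch, a PPR-style page splits along a React Suspense boundary: everything outside the boundary is the prerenderable shell, and everything inside streams from origin later. The route opt-in below (`experimental_ppr`), the component name, and the fetch URL are illustrative and version-dependent, not a definitive recipe:

```tsx
// app/page.tsx — hypothetical page using Partial Prerendering.
// The static shell (heading, layout) is prerendered at build time;
// the part inside <Suspense> is rendered at request time and streamed.
import { Suspense } from 'react';

// Opt this route into PPR (incremental mode). The flag is experimental
// and its name/behavior can change between Next.js versions.
export const experimental_ppr = true;

// Hypothetical dynamic component: anything reading request-time data.
async function PersonalizedFeed() {
  const res = await fetch('https://example.com/api/feed', { cache: 'no-store' });
  const items: string[] = await res.json();
  return <ul>{items.map((item) => <li key={item}>{item}</li>)}</ul>;
}

export default function Page() {
  return (
    <main>
      {/* Static shell: can be served from the edge immediately */}
      <h1>Dashboard</h1>
      {/* Below the Suspense boundary: streamed in after the shell */}
      <Suspense fallback={<p>Loading feed…</p>}>
        <PersonalizedFeed />
      </Suspense>
    </main>
  );
}
```

The self-hosting difficulty is that the split itself is easy to express in code; what is hard to replicate is the infrastructure that serves the shell from the edge and routes the remainder of the same request to origin.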

The transcript also highlights why “just use Docker” breaks down as soon as teams rely on Next.js features beyond basic routing and SSR. Next.js has many features and version-dependent behaviors (ISR, image handling, caching rules), so a minimal container approach only works for a subset of use cases. Even the build output can differ: Vercel uses a special, undocumented CLI flag to generate optimized artifacts, meaning self-hosters often can’t reproduce Vercel’s exact build pipeline.
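For teams that do self-host, the documented starting point (distinct from Vercel's internal flag) is the `standalone` output mode, which emits a minimal server tree suitable for a container image. A minimal sketch, assuming a recent Next.js version that accepts `next.config.ts` (older versions use `next.config.js`):

```ts
// next.config.ts — documented self-hosting option, not Vercel's
// internal build pipeline. 'standalone' copies only the files needed
// to run the server into .next/standalone, which can then be copied
// into a Docker image and started with `node server.js`.
import type { NextConfig } from 'next';

const config: NextConfig = {
  output: 'standalone',
};

export default config;
```

This covers packaging, but, as the transcript stresses, it does not recreate edge orchestration, multi-container cache correctness, or Vercel's optimized build artifacts.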

Caching is another recurring “paper cut.” With a single container, in-memory caching is straightforward. With multiple containers, caches must be invalidated and synchronized, which Next.js supports via cache adapter APIs—but not out of the box. That forces operators to decide how to centralize cache state (S3, Redis, in-memory with messaging, etc.) and implement invalidation logic themselves.
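The shape of the work operators take on can be sketched as a custom cache handler. The class below is a toy, with an in-process `Map` standing in for the shared store (Redis, S3, etc.) a real multi-container deployment would need, and synchronous methods where real handlers are typically async; the class and method names mirror the get/set/revalidate-by-tag shape of Next.js's cache handler API but are illustrative:

```typescript
// Toy cache handler in the get/set/revalidateTag shape Next.js's
// custom cache handler API expects. The Map stands in for a shared
// network-backed store; real handlers are async.
type CacheEntry = { value: unknown; tags: string[]; lastModified: number };

class SharedCacheHandler {
  // In production this would be shared by all containers (Redis, S3, …),
  // not a per-process Map.
  private store = new Map<string, CacheEntry>();

  get(key: string): CacheEntry | undefined {
    return this.store.get(key);
  }

  set(key: string, value: unknown, tags: string[] = []): void {
    this.store.set(key, { value, tags, lastModified: Date.now() });
  }

  // Drop every entry carrying a tag. This is the step that must reach
  // ALL containers, not just the one that handled the revalidation.
  revalidateTag(tag: string): void {
    for (const [key, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(key);
    }
  }
}
```

The point of the sketch is the division of labor: Next.js calls into this interface, but choosing the backing store and making `revalidateTag` take effect everywhere is left to the operator.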

The discussion then broadens to open-source and venture funding. Open-source VC-backed efforts are framed as aiming for “venture scale” outcomes—massive adoption that can pivot an entire market—rather than simply building a sustainable product. That mindset can clash with community-first motivations, since the incentives shift toward capturing outsized value if growth succeeds, while failure modes can be harsher than typical open-source projects. The conversation uses examples like Vercel-adjacent open-source influence and compares venture-scale dynamics to power-law outcomes, where a few wins dominate.

Overall, the message is twofold: Next.js self-hosting is genuinely challenging when you want the hard-to-replicate features (edge rendering, caching correctness, build optimizations), and VC/open-source funding introduces incentive tensions that can make some community-driven bets less aligned with long-term “benevolent” stewardship. The speaker also notes that the goal of projects like Open Next is to document and reduce the gap—ideally making the work unnecessary by pushing fixes back into the mainstream ecosystem—while acknowledging the work is largely reverse engineering and community-driven.

Cornell Notes

Next.js is difficult to self-host when teams want the same performance and correctness that Vercel’s platform delivers, because key features depend on specific infrastructure behaviors rather than just framework code. A major example is PPR, where pre-rendered page parts are compiled and served from the edge using React Suspense-compatible orchestration, potentially within a single request. Replicating that requires CDN/edge support, cache-control correctness, and coordination between edge and origin—things a generic Docker setup can’t guarantee. Beyond PPR, self-hosting also runs into build-output differences (Vercel uses an internal CLI flag) and caching/invalidation complexity across multiple containers. The transcript argues that “Docker is enough” only holds for a narrow subset of Next.js usage.

Why does “throw it in a Docker container” fail for many Next.js deployments?

Docker can run the app, but it doesn’t recreate the infrastructure-level guarantees Next.js features may rely on. Next.js can implement solutions either inside the framework or via infrastructure, and Vercel’s hosting platform is built to support the infrastructure path. Features like PPR require specific edge/CDN behavior and tight coordination between what’s cached at the edge and what’s rendered at origin—coordination that generic container + CDN setups often can’t reproduce.

What is PPR, and what makes it hard to replicate outside Vercel?

PPR (Partial Prerendering) compiles static parts of a page (like the header/shell) ahead of time and sends them to the edge so they appear extremely quickly. The rest of the page is handled later, and the transcript ties this to React Suspense behavior: content outside the first Suspense boundary can be cached and served from a CDN, while Suspense-related parts aren't part of that CDN cache. In the most advanced flow described, the same request can be handled at the edge first, flush the pre-rendered portion, then later hit origin for the remainder, an orchestration that's not straightforward with standard infrastructure.

How do caching and scaling across multiple containers complicate self-hosting?

With a single container, caching can live in memory and “just work.” With multiple containers, caches must be invalidated consistently; otherwise one container serves stale data after another updates. Next.js provides cache adapter APIs, but the transcript emphasizes that the out-of-the-box solution isn’t complete for multi-container correctness. Operators must choose and implement cache centralization (e.g., S3 or Redis) and invalidation/message propagation themselves.
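The propagation problem above can be modeled with two in-process caches and a toy message bus standing in for Redis pub/sub or similar. All names here are hypothetical; the sketch only illustrates why an update in one container must broadcast an invalidation to the others:

```typescript
// Toy model of cross-container cache invalidation. Each "container"
// keeps a local in-memory cache; a shared bus (standing in for Redis
// pub/sub, SNS, …) broadcasts invalidations so no container keeps
// serving stale data after another container updates.
type Listener = (key: string) => void;

class Bus {
  private listeners: Listener[] = [];
  subscribe(fn: Listener): void { this.listeners.push(fn); }
  publish(key: string): void { this.listeners.forEach((fn) => fn(key)); }
}

class Container {
  private cache = new Map<string, string>();

  constructor(private bus: Bus) {
    // Every container drops its local copy when any container
    // broadcasts an invalidation for that key.
    bus.subscribe((key) => this.cache.delete(key));
  }

  // Serve from local cache, filling it on a miss.
  render(key: string, fresh: string): string {
    if (!this.cache.has(key)) this.cache.set(key, fresh);
    return this.cache.get(key)!;
  }

  // Update content: invalidate everywhere first, then cache locally.
  update(key: string, value: string): void {
    this.bus.publish(key);
    this.cache.set(key, value);
  }
}
```

Without the bus, the second container would keep returning its stale copy indefinitely, which is exactly the multi-container failure mode the transcript describes.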

What build-output difference makes Vercel hard to mimic?

A normal Next.js build produces standard outputs, but Vercel uses a special hidden CLI flag to generate a different output optimized for its platform. The transcript says this flag isn’t documented for public consumption, can change, and breaks compatibility—so self-hosters trying to match Vercel’s behavior may need to reverse engineer or accept differences.

Why do caching headers like stale-while-revalidate matter even with a CDN in front?

The transcript claims Next.js doesn't emit standard cache-control headers in the way some CDNs expect, specifically that its stale-while-revalidate (SWR) header behavior doesn't match the spec. If a CDN implements stale-while-revalidate but Next.js emits headers that don't align with the spec, the result can be incorrect caching behavior even with a CDN and Docker in place.
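For reference, the spec'd form of the directive (RFC 5861) and how a conforming CDN interprets it can be sketched as follows; the function names and the numbers in the comments are illustrative:

```typescript
// Sketch of the RFC 5861 stale-while-revalidate Cache-Control shape
// that spec-following CDNs expect.
function cacheControl(sMaxAge: number, swr: number): string {
  // Fresh at the CDN for sMaxAge seconds; after that, the CDN may keep
  // serving the stale copy for up to swr more seconds while it
  // revalidates against origin in the background.
  return `s-maxage=${sMaxAge}, stale-while-revalidate=${swr}`;
}

// How a conforming CDN would classify a cached response of a given age.
function cdnState(
  ageSeconds: number,
  sMaxAge: number,
  swr: number,
): 'fresh' | 'stale-but-servable' | 'expired' {
  if (ageSeconds <= sMaxAge) return 'fresh';
  if (ageSeconds <= sMaxAge + swr) return 'stale-but-servable';
  return 'expired';
}
```

If the framework's emitted headers encode these semantics differently than the spec, a CDN applying the rules above will cache or revalidate at the wrong times, which is the mismatch the transcript points at.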

How does the conversation connect open source with venture funding incentives?

Venture-scale investing aims for market-changing adoption, not just a profitable niche. That can create incentive tension for open source: community-first stewardship can be overshadowed by value-capture goals if the project becomes a venture bet. The transcript also notes venture outcomes are power-law driven—few wins can dominate—so failure modes can be harsher than typical open-source projects where “not dominating the market” can still be a success.

Review Questions

  1. Give two specific reasons why a generic Docker deployment may not reproduce Vercel-level Next.js behavior, and explain how each affects correctness or performance.
  2. Describe how PPR relies on edge/CDN orchestration and React Suspense boundaries, and why that makes self-hosting more than a packaging problem.
  3. What does “venture scale” mean in the transcript, and how can it conflict with community-driven open-source motivations?

Key Points

  1. Next.js self-hosting is difficult when features depend on infrastructure behaviors (edge/CDN orchestration and caching semantics), not just framework code.

  2. PPR requires specific edge/CDN support and React Suspense-compatible coordination between cached static shell content and later-rendered parts.

  3. "Docker is enough" typically only works for a narrow subset of Next.js usage that avoids the hardest-to-replicate features.

  4. Vercel uses an internal, undocumented CLI flag to produce optimized build outputs, making exact replication outside Vercel harder.

  5. Multi-container deployments introduce cache invalidation and synchronization problems that Next.js supports via adapters but doesn't fully solve out of the box.

  6. Header/caching-spec mismatches (including stale-while-revalidate behavior) can cause incorrect CDN behavior even when a CDN sits in front of the app.

  7. Venture-scale funding pushes for market pivot-level adoption, which can create incentive tensions with open-source community-first goals.

Highlights

PPR isn’t just a framework toggle—it depends on edge/CDN orchestration and React Suspense boundaries, so generic container hosting can’t guarantee the same behavior.
Next.js caching becomes a systems problem at scale: multiple containers require centralized cache state and invalidation logic, often implemented by operators rather than provided automatically.
Vercel’s build pipeline uses a hidden CLI flag to generate optimized outputs, and that undocumented difference is a major reason “self-hosting” isn’t a simple replication task.

Topics

  • Next.js Self Hosting
  • Edge Rendering
  • PPR
  • Cache Invalidation
  • Venture Scale Open Source