
Why is Next.js so slow??

Theo - t3.gg · 6 min read

Based on Theo - t3.gg's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.

TL;DR

Server component apps feel slow when navigation waits for server-generated content without an immediate UI acknowledgment of the click.

Briefing

Next.js “server component” apps feel slow when navigation waits on server work without giving users immediate feedback. The core issue isn’t that server components are inherently sluggish; it’s that teams often treat server-rendered navigation like client-side data fetching—showing nothing (or relying on the browser’s loading indicator) until the server finishes generating the next route. That creates a long “time to first byte” (TTFB) window where users click and nothing visibly happens, which the brain reads as unresponsiveness.

To explain why, the transcript walks through how web performance expectations formed. Traditional multi-page apps (MPAs) load new HTML from the server on every navigation. That can be fast to complete, but users typically see no meaningful change until the full response arrives. Single-page apps (SPAs) flip the model: the browser loads one HTML shell and JavaScript once, then client-side routing updates the UI instantly on clicks. SPAs often take longer overall to show fully correct content because data still comes from APIs, but the interaction feels immediate because the app can react within milliseconds.

Server components introduce a hybrid: route-specific HTML can be generated on the server, with caching for static cases, and streaming for dynamic ones. That means the URL again matters—unlike SPAs where the same shell HTML serves every route. But the hybrid behavior only delivers SPA-like “instant navigation” if the build and runtime can produce a static shell (or partial pre-render) and then fill in dynamic parts later. Without the right caching layer (e.g., a CDN smart enough to serve cached route output and fall back to the origin), server component apps can behave like “server-first” pages: first loads may be acceptable, but navigations can require a round trip before anything changes.

The transcript then demonstrates the failure mode with a mock dashboard. Some routes appear to navigate instantly because they’re effectively static and already present in the client bundle. Another route is forced dynamic, and navigation to it stalls: the UI doesn’t update immediately, so the sidebar highlight waits, producing a dead-feeling gap around ~600ms—enough to cross the threshold where users start to doubt the site is responding. The fix is surprisingly specific to Next.js’s server component rendering pipeline: add a route-level `loading.tsx` (or equivalent Suspense boundary) so Next.js can ship a static “loading shell” immediately, then stream the dynamic content when ready. The key is placing the loading boundary above the dynamic async work so the route can render something right away.
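As a sketch of that fix (file paths follow Next.js app-router conventions; the route name and analytics fetch are hypothetical stand-ins, not from the transcript):

```tsx
// app/dashboard/analytics/loading.tsx
// Static fallback that Next.js can ship immediately on navigation,
// before the dynamic page below finishes rendering on the server.
export default function Loading() {
  return <p>Loading analytics…</p>;
}

// app/dashboard/analytics/page.tsx
export const dynamic = "force-dynamic"; // opt this route out of static rendering

// Hypothetical slow server-side work standing in for the demo's dynamic route.
async function fetchAnalytics(): Promise<{ views: number }> {
  const res = await fetch("https://example.com/api/analytics"); // placeholder endpoint
  return res.json();
}

export default async function Page() {
  const stats = await fetchAnalytics(); // the await that previously blocked navigation
  return <p>{stats.views} views</p>;
}
```

Because `loading.tsx` sits above the page’s async work in the route tree, Next.js can render the fallback instantly and stream the resolved page content in afterward.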

The transcript also highlights gotchas: Suspense placement in layouts may not trigger the desired static fallback because layouts don’t rerun on the client. Moving Suspense to the page component (or using `loading.tsx`, which Next.js wires automatically) restores immediate navigation feedback. It further mentions newer Next.js mechanisms like Partial Pre-Rendering (PPR) and Dynamic IO, which aim to make dynamic routes render static HTML up to the first async boundary, then stream the rest.

Ultimately, the argument is about balancing two user-perceived metrics: “time to first byte” (TTFB) and “First Contentful Paint” (FCP). Server components done right can shrink the bad loading window (where users see nothing) while still getting the benefits of server-side data fetching. Done wrong, they collapse TTFB and FCP into the same slow moment—recreating the unpleasant navigation delay seen in older server-rendered patterns. The transcript closes by contrasting this with HTMX: server round trips are normal there too, but UX improves when the app controls click-local loading states instead of relying on browser loading behavior.

Cornell Notes

Server components in Next.js aren’t automatically slow; they become slow-feeling when navigation waits for server-rendered content without showing an immediate “something happened” state. The transcript contrasts three paradigms: MPAs (no UI change until the full HTML arrives), SPAs (instant client-side navigation feedback, even if data finishes later), and server components (hybrid streaming that can match SPA responsiveness only if the app provides a loading shell early). The practical fix emphasized is adding `loading.tsx` (or a Suspense boundary placed above dynamic async work) so Next.js can pre-render a static fallback and then stream dynamic content. Without that boundary—especially when layouts or async boundaries are placed incorrectly—TTFB and FCP merge, producing a dead-feeling gap around ~600ms or more.

Why do server component apps often feel slower during navigation even when total work might be similar?

The transcript frames the problem as a UX timing gap: users click and nothing visually changes until the server finishes generating the next route. That means the “time to first byte” (TTFB)—used loosely here as the moment the UI acknowledges the click—stretches into the same window as “First Contentful Paint” (FCP). In the mock dashboard example, clicking a dynamic route doesn’t update the sidebar highlight immediately; it waits ~576–600ms, which the transcript says is enough to feel unresponsive because users expect feedback in under ~50ms and start to assume failure beyond ~150ms.

How does the transcript connect SPA speed to user perception rather than raw server speed?

SPAs load one HTML/JS shell once, then client-side routing updates the UI immediately on clicks. Even if the correct data arrives later via API calls, the interaction feels instant because the app can render a loading state right away. The transcript emphasizes that this “instant response” matters more to perceived speed than the eventual time when all content is correct.

What role does a CDN play in server component performance?

Server components rely on caching and fallback behavior. The transcript describes a two-step flow: the CDN checks whether a cached version of the route exists; if it does, it serves static output; if not, the request is forwarded to the origin server to generate the page. If server component apps are deployed without a smart CDN layer (e.g., just origin server hosting), navigations can require round trips before anything changes, making them feel like older server-rendered sites.
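The two-step flow can be sketched as a tiny cache-or-origin function (a minimal illustration, not any real CDN’s API; `renderAtOrigin` stands in for the origin server generating the route):

```typescript
// Minimal sketch of the CDN flow the transcript describes:
// serve cached route output when present, otherwise fall back to the origin.
type Render = (route: string) => string;

function makeCdn(renderAtOrigin: Render) {
  const cache = new Map<string, string>();

  return function handle(route: string): { html: string; hit: boolean } {
    const cached = cache.get(route);
    if (cached !== undefined) {
      return { html: cached, hit: true }; // serve static output, no origin round trip
    }
    const html = renderAtOrigin(route); // forward to origin to generate the page
    cache.set(route, html); // subsequent requests for this route become cache hits
    return { html, hit: false };
  };
}
```

Deploying to an origin server alone is equivalent to always taking the cache-miss branch, which is exactly the “round trip before anything changes” behavior described above.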

What is the simplest Next.js-specific fix shown for improving navigation feedback?

Add a `loading.tsx` file for the route (or the relevant layout segment). In the demo, adding `loading.tsx` makes the sidebar highlight update immediately on navigation, even though the dynamic content still arrives later. The loading boundary allows Next.js to ship a static loading shell first, then stream the dynamic children when ready.

Why can Suspense placement in layouts fail to produce the desired immediate behavior?

The transcript notes that layouts don’t rerun on the client for navigation in the way page components do. If Suspense is placed in a layout, Next.js may not generate the correct static fallback for the dynamic child route. Moving the Suspense boundary down into the page component (or using `loading.tsx`, which Next.js automatically wires) restores the immediate navigation behavior.
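A sketch of the working placement (the `Feed` component and its data fetch are hypothetical; the point is that the Suspense boundary lives in the page, not the layout):

```tsx
// app/dashboard/page.tsx — the Suspense boundary sits in the page segment,
// so Next.js can emit a static fallback for this route on navigation.
import { Suspense } from "react";

// Hypothetical dynamic data fetch standing in for the route's async work.
async function fetchFeedItems(): Promise<{ id: string; title: string }[]> {
  const res = await fetch("https://example.com/api/feed"); // placeholder endpoint
  return res.json();
}

async function Feed() {
  const items = await fetchFeedItems();
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.title}</li>
      ))}
    </ul>
  );
}

export default function Page() {
  return (
    <Suspense fallback={<p>Loading feed…</p>}>
      <Feed />
    </Suspense>
  );
}
```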

What do PPR and Dynamic IO aim to improve, and why are they described as “new” or not fully stable?

Partial Pre-Rendering (PPR) and Dynamic IO are presented as mechanisms to generate static HTML up to the first async boundary, then stream dynamic server content afterward. The transcript suggests these features can reduce the bad loading window by ensuring users see something immediately even on dynamic routes. It also cautions that stability and correct modeling depend on other factors like `use cache` and Dynamic IO behavior, and that teams may be hesitant to recommend PPR broadly yet.
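For reference, the PPR opt-in in recent Next.js canary releases looks roughly like the following (an experimental API, so the flag names may change):

```tsx
// next.config.ts — enable PPR on a per-route basis (experimental)
const nextConfig = {
  experimental: { ppr: "incremental" },
};
export default nextConfig;

// app/dashboard/page.tsx — opt this route segment into Partial Prerendering
export const experimental_ppr = true;
```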

Review Questions

  1. In the transcript’s framing, what specific UX failure happens when TTFB and FCP become the same moment, and how does `loading.tsx` prevent that?
  2. Describe how the transcript’s mock dashboard distinguishes between routes that feel instant and routes that feel slow. What determines that difference?
  3. Where should a loading boundary be placed in Next.js server component rendering to ensure a static fallback is generated, and why might placing it in a layout not work?

Key Points

  1. Server component apps feel slow when navigation waits for server-generated content without an immediate UI acknowledgment of the click.
  2. SPAs feel fast because client-side routing can update the UI within milliseconds, even if data finishes later.
  3. Server components only deliver SPA-like responsiveness when the app can render a static loading shell first and stream dynamic content afterward.
  4. A smart CDN layer matters: cached route output should be served when available, with fallback to origin generation when not.
  5. In Next.js, adding a route-level `loading.tsx` (or a Suspense boundary above dynamic async work) is the most direct way to restore immediate navigation feedback.
  6. Suspense boundaries placed in layouts can fail because layouts don’t rerun on the client the way page components do.
  7. The performance goal is to shorten the “bad” loading window (TTFB) while letting full content (FCP) arrive later via streaming.

Highlights

The transcript’s central claim is that the “slow” feeling comes from missing immediate feedback on navigation, not from server components being inherently incapable.
Adding `loading.tsx` makes the sidebar highlight update instantly on navigation, turning a dead-feeling ~600ms pause into a responsive interaction.
Server components are described as a hybrid: they can behave like SPAs for navigation feedback and like MPAs for server-side data fetching—if loading boundaries are placed correctly.
PPR and Dynamic IO aim to pre-render static HTML up to the first async boundary, then stream the rest to reduce the time users see nothing.

Topics

  • Server Components
  • Next.js Loading States
  • TTFB vs FCP
  • Client vs Server Rendering
  • CDN Caching

Mentioned

  • RSC
  • SPA
  • MPA
  • TTFB
  • FCP
  • PPR