
Walking Away From JavaScript

The PrimeTime · 5 min read

Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Creeping memory growth in Next.js/JavaScript deployments can reach hundreds of megabytes, even for simple pages and modest traffic.

Briefing

JavaScript-heavy Next.js apps can quietly accumulate massive memory footprints over time, and that creeping usage is pushing one developer toward a “do more with less” philosophy—fewer dependencies, less code, and sometimes a different language entirely. The trigger was practical: multiple Next.js deployments showed memory climbing into the hundreds of megabytes (around 400MB in one case), even when traffic was modest. That pattern raised a broader question about whether modern web stacks are paying too high a cost in runtime overhead for what the apps actually do.

The argument tightens around a simple observation: “less” often buys performance. Netflix once used a similar rule of thumb—stop doing so much to get faster—after repeated memory problems that required removing libraries like core-js. The same theme shows up in the developer’s own experiments with utility libraries: using only a couple functions from a big ecosystem (for example, common debounce/throttle usage) still doesn’t justify pulling in the whole world. The takeaway isn’t anti-JavaScript; it’s anti-unexamined complexity.

To test whether the memory issue is structural, the developer compares runtimes and frameworks. Switching from Next.js to alternatives like Hono and Bun changes the memory profile, but not in a way that fully satisfies the “less” goal. Hono/Bun still climbs over time (starting around tens of megabytes and drifting upward), and the developer suspects garbage collection behavior, leaks, or production overhead. In contrast, a Go implementation of a similar mailing-list endpoint lands dramatically lower—single-digit to low-teens megabytes in early deploys (roughly 6MB rising to about 11MB). That gap becomes the centerpiece: if one service costs ~400MB in JavaScript land, then running dozens of services makes cloud bills rise along a steep curve.

The explanation offered for why Go can be so lean is rooted in runtime mechanics. JavaScript relies on heap allocation and JIT compilation, which requires storing optimization metadata and runtime bookkeeping for functions and objects. Go, by contrast, compiles ahead of time and can use stack allocation for many structs, reducing the need for garbage-collection tracking and extra per-field overhead. The developer also argues that Go’s standard library and built-in concurrency primitives (channels and goroutines) reduce the need for bundlers and middleware ecosystems that Node deployments often pull in.

From there, the pitch expands beyond memory into engineering philosophy. The developer praises Go’s “boring” simplicity—formatting, straightforward tooling, and a single compiled binary—while acknowledging Go has tradeoffs. The broader message is that web development often becomes a tangle of incidental complexity: bundlers, caching rules, unstable framework behaviors, and constant dependency upgrades. That pain fuels interest in Go + HTMX as a way to preserve server-side power while delivering interactive UI with minimal JavaScript. Finally, the transcript argues for long-term maintainability: web components and HTMX-style HTML transformations can reduce upgrade churn compared with frameworks that force frequent rewrites.

Overall, the core finding is a practical one: memory overhead and dependency sprawl aren’t just theoretical—they show up in real deployments, and switching to “less” (including Go in some cases) can produce large, measurable wins in both resource usage and operational simplicity, especially when scaling to many services.

Cornell Notes

Memory usage in Next.js/JavaScript deployments can climb into the hundreds of megabytes over time, even for simple pages and modest traffic. Experiments comparing stacks suggest Go services can run with dramatically lower memory (single-digit to low-teens MB) than JavaScript alternatives that hover much higher (often ~100MB+ and sometimes ~300–400MB). The transcript attributes part of the gap to JavaScript runtime behavior: heap allocation, garbage collection bookkeeping, and JIT optimization metadata. The practical implication is cost and operational scaling: dozens of high-memory services can make cloud spend rise quickly. The speaker’s broader “do more with less” mindset also pushes toward fewer dependencies and less client-side JavaScript, including interest in Go + HTMX to keep interactivity without heavy front-end bloat.

What concrete memory problem triggered the “Walking Away From JavaScript” mindset?

Multiple Next.js applications showed memory usage creeping upward over time. One example cited around 400MB of memory usage, with the speaker noting that image optimization (and possibly caching behavior) seemed involved. Another Next.js starter-kit-style app—described as essentially a single static page—also reached roughly 300–400MB at times. The key pattern was not just high memory at one moment, but gradual growth that made the developer question why a simple page needed that much memory.

How did the transcript compare memory usage across JavaScript runtimes and Go?

The transcript compares Hono + Bun, Node/JavaScript (with V8/JSC behavior), and Go + Fiber. Hono/Bun started around 44MB on first deploy and then climbed gradually, with some deploys resetting it lower before it rose again. Node/JavaScript was described as hovering higher (e.g., ~105MB in one comparison). The Go + Fiber implementation was presented as far lower: about 6MB on first deploy, rising to around 11MB, even for a simple mailing-list page.

Why does the transcript claim JavaScript uses more memory than Go for similar data structures?

It argues that JavaScript lacks stack allocation for general structs/objects and relies heavily on heap allocation, with garbage-collection metadata and runtime bookkeeping. It also claims JIT compilation requires storing optimization information for functions (described as tens of thousands of optimization entries per function in memory inspection). Go is described as compiling ahead of time and using stack allocation where possible, reducing GC tracking and per-field overhead—so the same logical struct can occupy far less memory.

What does “do more with less” mean in practical engineering terms here?

It means reducing incidental complexity: fewer dependencies, less code, fewer third-party services, and fewer framework-specific moving parts. The transcript links this to performance and cost: if each service consumes hundreds of megabytes, scaling to many services steepens cloud spend. It also frames dependency removal as a performance tactic (e.g., removing core-js after memory problems) and argues that bundling/build tooling can become a code smell when it exists mainly to make a language run outside its intended environment.

How does HTMX fit into the “less JavaScript” direction?

The transcript suggests using Go to generate and serve HTML, then using HTMX to handle interactivity by swapping HTML fragments on user actions—aiming for 90–95% of the interactivity designers/businesses want without pulling in heavy TypeScript/React-style client ecosystems. It also notes HTMX can coexist with React if needed, but the goal is to avoid the bloat and upgrade churn that comes from relying on large front-end frameworks for simple UI behaviors.

What long-term maintainability argument is made about web components and upgrades?

Web components are presented as having a long shelf life: once written, they should keep working across years because they’re self-contained. The transcript contrasts this with framework upgrade pain—especially React/Next.js version churn—where upgrading can require large refactors, merge conflicts, and feature-branch overhead. The underlying claim is that reducing upgrade coupling lowers the long-term cost of maintaining a product.

Review Questions

  1. What specific runtime mechanisms does the transcript use to explain why JavaScript can consume more memory than Go (heap vs stack, GC metadata, JIT optimization data)?
  2. How does the transcript connect per-service memory usage to cloud cost when scaling from one app to many services?
  3. What role does HTMX play in the proposed architecture, and what kind of UI interactivity is it trying to replace or minimize?

Key Points

  1. Creeping memory growth in Next.js/JavaScript deployments can reach hundreds of megabytes, even for simple pages and modest traffic.

  2. A “do more with less” mindset—fewer dependencies, less code, less complexity—frames both performance and cost improvements.

  3. Comparative experiments suggest Go + Fiber can run with dramatically lower memory (roughly 6MB to 11MB) than JavaScript stacks that can hover far higher (often ~100MB+ and sometimes ~300–400MB).

  4. The transcript attributes part of the memory gap to JavaScript heap allocation plus garbage-collection bookkeeping and JIT optimization metadata.

  5. Go’s ahead-of-time compilation and stack allocation (where applicable) reduce runtime overhead compared with JavaScript’s heap-heavy model.

  6. HTMX + server-rendered HTML is proposed as a way to deliver most interactivity without heavy client-side JavaScript ecosystems.

  7. Long-term maintainability is argued via self-contained approaches like web components, which reduce upgrade churn from fast-moving framework ecosystems.

Highlights

Memory creep in Next.js apps was observed climbing into the 300–400MB range, prompting a broader question about whether the stack’s overhead matches the simplicity of the apps being built.
A Go + Fiber mailing-list implementation was reported at about 6MB on first deploy and around 11MB later, contrasted with JavaScript approaches that stayed far higher.
The transcript links JavaScript’s higher memory use to heap allocation, garbage-collection metadata, and JIT optimization data stored for functions.
HTMX is positioned as a practical path to keep interactivity while avoiding the bloat and upgrade churn of large client-side frameworks.

Topics

  • Memory Profiling
  • Go vs JavaScript
  • Next.js Overhead
  • HTMX Architecture
  • Dependency Reduction

Mentioned

  • Joel Spolsky
  • Fred Scott
  • Carson
  • JIT
  • AWS
  • CPU
  • GC
  • V8
  • JSC