
Did Zig Fix Async / Await?

The PrimeTime · 5 min read

Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Zig’s async/await direction treats async as part of an explicit IO interface to avoid hidden allocations and runtime surprises.

Briefing

Zig’s async/await push is built around a key constraint: Zig has no hidden allocations, so “await” can’t quietly create promises, heap stacks, or other runtime machinery the way higher-level languages often do. The emerging solution treats async as part of an explicit IO interface—so the caller supplies both an allocator and an IO implementation, and the async machinery can be tailored without introducing surprise memory costs. The result is a design that aims to keep Zig’s low-level guarantees while still enabling structured concurrency features like cancellation and “select”-style racing.
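As a rough sketch of what this caller-supplies-everything shape looks like (the `std.Io` type and this signature follow the in-progress proposal and may change before anything ships):

```zig
const std = @import("std");

// Illustrative sketch only: `std.Io` and this signature track the
// in-progress proposal. The key property is that the allocator and
// the IO implementation are both injected by the caller, so the
// function can neither allocate nor block via hidden runtime machinery.
fn fetchPage(allocator: std.mem.Allocator, io: std.Io, url: []const u8) ![]u8 {
    var body = std.ArrayList(u8).init(allocator);
    errdefer body.deinit();
    // All memory comes from `allocator`; all blocking work would go
    // through `io`, whose concrete implementation the caller picked
    // at startup (thread pool, event loop, ...).
    _ = io;
    _ = url;
    return body.toOwnedSlice();
}
```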

At the center of the approach is an IO abstraction that represents anything that can block the current thread: not just network and file operations, but also potentially long CPU work. Instead of “awaiting” a generic promise object, Zig code works with an IO object whose methods return a cancellable result (with cancellation represented as an error-like outcome). The design also borrows a Go-like flavor: there’s an “async without await” concept (described as “crippled async”) and explicit cancellation, both intended to be composable rather than magical.
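The "async without await" shape described in the talk can be sketched like this (API names here follow the in-progress proposal, so treat them as provisional):

```zig
// Provisional API names; may change before release. `io.async` starts
// the operation, but whether it actually runs concurrently is decided
// by the Io implementation, not by this code.
var future = io.async(saveFile, .{ io, data, "out.txt" });

// Cancellation is an explicit, error-like outcome, not a hidden signal.
// If the future already completed, cancel is effectively a no-op.
defer future.cancel(io) catch {};

// Awaiting surfaces the result (or the cancellation) to the caller.
try future.await(io);
```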

A major theme is that the caller chooses how async executes. In practice, that means selecting an IO implementation—such as one backed by a thread pool or one driven by an event loop—at program startup (alongside choosing an allocator). This “bring your own IO” model lets the same Zig library run across different environments, including custom operating systems, because the library depends on the IO interface rather than a specific OS mechanism. The speaker frames this as analogous to how allocator injection works: the library encodes intent and concurrency structure, while the IO implementation decides the execution strategy.
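Concretely, "bring your own IO" mirrors allocator injection: `main` picks an implementation once and threads it through the program. A sketch using provisional names from the proposal:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Provisional: a thread-pool-backed Io implementation. Swapping in
    // an event-loop-backed one changes the execution strategy, not the
    // library code below, exactly as swapping allocators changes the
    // memory strategy without touching callees.
    var threaded: std.Io.Threaded = .init(allocator);
    defer threaded.deinit();
    const io = threaded.io();

    try runApp(allocator, io); // hypothetical app entry point
}
```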

The transcript also highlights how “hot vs cold” execution maps onto Zig’s async model. Operations that start immediately are treated as “hot/eager,” while others remain “cold/lazy” until the chosen IO implementation triggers them—giving the caller control over when work begins. That control is crucial for predictable resource usage and for enabling parallelism patterns where computations can be launched early and joined later.

Cancellation and select are presented as userland-friendly features enabled by the interface-based design. Cancellation is managed with scoped cleanup (using deferred cancellation so losing a race cancels the other branches). Select is described as “first future to finish wins,” effectively racing multiple async operations while still ensuring resources are cleaned up.
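The select/race pattern can be sketched with the same provisional API; the two `defer ... cancel` lines are what make losing branches clean up automatically when the function returns (`io.select` is shown as a hypothetical helper):

```zig
// Start both operations; the Io implementation decides how they run.
var a = io.async(fetchMirror, .{ io, "https://mirror-a.example" });
defer a.cancel(io) catch {}; // no-op if `a` already completed
var b = io.async(fetchMirror, .{ io, "https://mirror-b.example" });
defer b.cancel(io) catch {};

// "First future to finish wins": race the two branches. When this
// function returns, the deferred cancels tear down whichever lost.
const winner = io.select(.{ &a, &b });
_ = winner;
```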

Under the hood, the transcript argues that Zig’s “single compilation unit” strategy is what makes this feasible for interface-driven async. The compiler needs to know the closed set of possible function targets behind function pointers (via restricted function pointer types) to compute upper bounds on stack usage. Without that, parallel execution would require unsafe or overly conservative heap allocation of stacks.

Finally, the discussion acknowledges real-world migration pain: updating Zig’s standard IO reader/writer to fit the new async IO pattern is expected to cause major breakage and require rewriting code that touches those interfaces. Still, the direction is framed as a long-term fix, one that keeps Zig’s explicit memory model while making async/await practical, modular, and portable across execution backends.

Cornell Notes

Zig’s async/await work is being reshaped around an explicit IO interface so async operations don’t hide allocations or runtime machinery. Code chooses an allocator and an IO implementation up front (thread pool, event loop, etc.), and async behavior adapts to that choice. Operations return cancellable results, and structured features like cancellation and select/racing can be implemented cleanly using userland patterns. The design relies on Zig’s single compilation unit and restricted function pointer types so the compiler can bound stack usage for parallel execution. The tradeoff is significant migration effort, especially around updating the standard IO reader/writer to the new async IO pattern.

Why is Zig’s “no hidden allocations” a problem for traditional async/await designs?

In many languages, “await” implicitly creates runtime objects (e.g., promises/futures) and often allocates memory for bookkeeping and stacks. Zig’s model requires that allocations and deallocations be explicit and visible, so an await mechanism can’t freely allocate behind the scenes. The proposed direction makes async part of an explicit IO interface, letting the chosen IO implementation manage execution and any required resources without surprising allocations.

How does the IO interface change who controls execution strategy?

Instead of hardwiring async semantics to a single runtime, Zig code selects an IO implementation at startup. That implementation could be backed by a thread pool or by an event loop. Because libraries depend on the IO interface (not a specific OS/event system), the same async-capable Zig package can run on different targets—including custom operating systems—so long as that target provides an IO implementation.

What does “cancellation” look like in this model?

Async operations return a cancellable result type, where cancellation is represented like an error outcome. In the select/race example, multiple async operations start, and deferred cancellation ensures that when the function returns (e.g., after the winning branch completes), the losing branches are canceled and their resources cleaned up. This keeps cancellation structured rather than ad-hoc.

What is the “hot vs cold” idea, and why does it matter for async?

“Hot/eager” operations begin executing immediately when constructed/called, while “cold/lazy” operations don’t run until something triggers them (analogous to subscribing to observables in the JavaScript world). In Zig’s approach, this distinction helps encode when work should start, letting the caller and the selected IO implementation decide execution timing—important for controlling resource usage and parallelism.

Why does single compilation unit matter for async with interfaces and function pointers?

Parallel async execution needs safe upper bounds on stack usage for the functions that may run concurrently. When async relies on interfaces and function pointers (via vtables or runtime dispatch), the compiler must know which concrete functions a function pointer can target. With a single compilation unit plus restricted function pointer types, Zig can compute an upper bound on stack usage across the limited set of possible targets, enabling pre-allocation of the needed stacks for parallel execution.

What migration pain is expected as the new async IO pattern reshapes the standard IO reader/writer?

The transcript warns that changing Zig’s standard IO reader and writer to fit the new async IO pattern is “involved” and likely forces major breakage. Any code that touches those interfaces may need rewriting, because the new interfaces and patterns change how async execution and IO backends integrate.

Review Questions

  1. How does injecting an IO implementation (and allocator) at startup help Zig keep async behavior portable without hidden allocations?
  2. What compiler requirement is described for safely bounding stack usage in parallel async execution, and how do restricted function pointer types support it?
  3. In the select/race pattern, how does deferred cancellation ensure resources from losing branches don’t leak or keep running?

Key Points

  1. Zig’s async/await direction treats async as part of an explicit IO interface to avoid hidden allocations and runtime surprises.

  2. Callers choose both an allocator and an IO implementation (thread pool vs event loop), so execution strategy is configurable per environment.

  3. Async operations return cancellable results, making cancellation a first-class outcome rather than an external side effect.

  4. Structured concurrency features like select/racing and cancellation cleanup are designed to be implementable in userland using the IO abstraction.

  5. The “hot vs cold” distinction maps to whether operations start eagerly or only when triggered, giving control over when work begins.

  6. Single compilation unit plus restricted function pointer types enable the compiler to compute upper bounds on stack usage for parallel async execution.

  7. Updating the standard IO reader/writer to the new async IO pattern is expected to cause significant breaking changes and require code rewrites.

Highlights

Async/await is being reframed around an IO interface so Zig can keep its “no hidden allocations” promise while still supporting structured async features.
Select is described as a race: the first completed async branch wins, while deferred cancellation cleans up the losing branches.
Single compilation unit and restricted function pointer types are presented as the mechanism that lets Zig bound stack usage for interface-driven parallel execution.

Topics

  • Zig Async
  • Async IO Interface
  • Cancellation
  • Select Racing
  • Stack Usage Bounds

Mentioned

  • Andrew Kelley
  • IO