
Mamba sequence model - part 1

5 min read

Based on West Coast Machine Learning's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Mamba targets the gap between efficient long-context models and Transformer-like content-based reasoning by making SSM parameters depend on the current input token.

Briefing

Mamba’s core pitch is that sequence models can match Transformer-quality results on language and other modalities while scaling linearly with sequence length—by replacing attention with “selective” structured state space models (SSMs). The central weakness targeted is content-based reasoning: many efficient SSM-style architectures struggle to change behavior based on which token appears, so they can’t reliably route information the way attention does. Mamba addresses this by making key SSM parameters depend on the current input token, enabling selective propagation or forgetting of information as the sequence unfolds.

The discussion anchors on the paper “Mamba: Linear-Time Sequence Modeling with Selective State Spaces” by Albert Gu and Tri Dao, which frames Transformers as the dominant architecture but highlights their quadratic cost on long contexts. Alternatives—sparse attention, linear attention variants, gated convolutions, recurrent models, and structured SSMs—aim to reduce compute, yet often underperform on important discrete modalities like language. The paper argues that the missing ingredient is a mechanism for content-based reasoning, especially in discrete token streams.

A key technical storyline runs through SSMs like S4 (Structured State Spaces), which build on earlier control-theory-inspired models. In continuous time, an SSM describes how a hidden state evolves under linear dynamics and produces outputs via linear projections. To use it for sequences, the model is discretized into an RNN-like recurrence and—crucially—can also be viewed as a convolution over the entire input sequence. That convolutional view enables efficient parallel training, while the recurrent view supports fast step-by-step inference.

However, naive SSM implementations run into two practical barriers: (1) if the state-transition matrix is randomly initialized, the model performs poorly on long-range benchmarks, and (2) computing the convolution filter directly is expensive because it involves repeated powers of the transition matrix, creating both numerical and runtime bottlenecks. S4’s solution is to hard-code a special structured form for the transition matrix (derived from “HiPPO” memory ideas using Legendre polynomial projections). This structure both stabilizes long-range behavior and enables fast algorithms for computing the convolution filter without the prohibitive cost.

Mamba builds on this foundation but changes the rules: it introduces selectivity so that SSM parameters become input-dependent, restoring the ability to react to content. The result is a model that can be implemented efficiently (including GPU-aware parallel algorithms) and that scales linearly in sequence length. Reported outcomes include strong performance on long-context benchmarks and cross-modality results; notably, a 2.8B-parameter Mamba model outperforms same-size Transformers and matches Transformers trained at roughly twice the parameter count, using training recipes considered best practice for Transformers. The broader takeaway is that structured state spaces—once made selective and hardware-efficient—can become a serious alternative to attention for long sequences, and the field is now watching whether scaling Mamba beyond current experimental sizes will keep closing the gap.

Cornell Notes

Mamba’s approach to long-sequence modeling replaces attention with selective structured state space models (SSMs). The motivation is that many efficient SSMs lack content-based reasoning—behavior that changes depending on which token appears—so they underperform on language. The background centers on S4, which makes SSMs practical by using a special HiPPO/Legendre-based structure for the state-transition matrix, stabilizing long-range dependencies and enabling fast computation of the equivalent convolution filter. Mamba then adds “selectivity” by making important SSM parameters depend on the current input token, restoring token-conditioned behavior while keeping linear-time scaling. This combination aims to deliver Transformer-like quality with much better long-context efficiency.

Why do Transformers struggle with long sequences, and what alternatives try to fix that?

Full self-attention scales quadratically with sequence length because each token attends to all others. That makes very long contexts expensive in both memory and compute. The transcript lists attempts to reduce cost: sparse attention, linear attention variants, gated convolutions, recurrent models, and structured state space models (SSMs). The recurring issue is that many sub-quadratic alternatives still don’t match attention on key tasks in discrete modalities like language.
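The quadratic blow-up can be seen directly in the shape of the attention score matrix; a small numpy illustration (the dimensions here are arbitrary, chosen only to make the scaling visible):

```python
import numpy as np

d = 8                                  # head dimension (arbitrary for this sketch)
rng = np.random.default_rng(0)

def attention_score_elems(L):
    """Full self-attention materializes an L x L score matrix:
    every token attends to every other token."""
    Q = rng.standard_normal((L, d))
    K = rng.standard_normal((L, d))
    scores = Q @ K.T                   # shape (L, L)
    return scores.size

# Doubling the sequence length quadruples the score matrix.
assert attention_score_elems(256) == 256 * 256
assert attention_score_elems(512) == 4 * attention_score_elems(256)
```

A recurrent SSM, by contrast, carries only a fixed-size hidden state from step to step, so its per-token cost does not grow with context length.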

What is the “content-based reasoning” weakness attributed to many SSM-style models?

Content-based reasoning means the model’s behavior should depend on which token (or word) it sees. For example, seeing “Harry” should lead to different next-token predictions than seeing a different word, like “Truman,” even if the surrounding context is similar. The transcript frames the weakness as an inability to selectively route or update information based on the current token in discrete streams.

How does S4 make state space models trainable and efficient for long contexts?

S4 addresses two problems: (1) random initialization of the state-transition matrix A leads to poor long-range performance, and (2) computing the convolutional form of the SSM filter is expensive because it involves powers of A. S4 hard-codes A using a HiPPO-derived structured form based on Legendre polynomial projections, which both stabilizes long-range memory and enables fast algorithms to compute the equivalent convolution filter without the naive cost.
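As an illustration, the HiPPO-LegS form of A commonly quoted in S4-related write-ups can be written down directly. This is a hedged sketch: sign and normalization conventions vary across papers, so treat it as one standard presentation rather than S4's exact production parameterization.

```python
import numpy as np

def hippo_legs(N):
    """HiPPO-LegS transition matrix (one common convention):
    A[n, k] = -sqrt(2n+1)*sqrt(2k+1)  if n > k
            = -(n + 1)                if n == k
            = 0                       if n < k
    Lower-triangular structure derived from Legendre polynomial projections."""
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = -np.sqrt(2 * n + 1) * np.sqrt(2 * k + 1)
            elif n == k:
                A[n, k] = -(n + 1)
    return A
```

The point of hard-coding this structure (rather than learning A from random initialization) is that it encodes a principled notion of compressing the input history, which is what stabilizes long-range behavior.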

Why can an SSM be viewed as both an RNN and a convolution?

The transcript explains three equivalent representations: continuous-time dynamics, a discretized recurrence (RNN-like), and a convolutional view. Discretization turns the differential equation into a linear recurrence that updates a hidden state step-by-step (good for inference). Unrolling the recurrence yields a closed-form expression where the output sequence equals a convolution of the input with a parameterized filter K (good for parallel training).
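A minimal numpy sketch of this equivalence, using an arbitrary small discrete SSM (the matrices here are illustrative placeholders, not S4's actual parameterization): stepping the recurrence and convolving with the unrolled kernel K produce the same outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4, 16                       # state size, sequence length

# A small, stable discrete SSM (placeholder values for illustration)
A_bar = 0.9 * np.eye(N) + 0.01 * rng.standard_normal((N, N))
B_bar = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
u = rng.standard_normal(L)

# Recurrent view: step the hidden state through the sequence (good for inference)
x = np.zeros((N, 1))
y_rnn = np.empty(L)
for k in range(L):
    x = A_bar @ x + B_bar * u[k]
    y_rnn[k] = (C @ x).item()

# Convolutional view: unrolling gives kernel K[j] = C @ A_bar^j @ B_bar,
# and the output is the causal convolution y = K * u (good for parallel training)
K = np.empty(L)
Aj = np.eye(N)
for j in range(L):
    K[j] = (C @ Aj @ B_bar).item()
    Aj = Aj @ A_bar
y_conv = np.array([sum(K[j] * u[k - j] for j in range(k + 1)) for k in range(L)])

assert np.allclose(y_rnn, y_conv)  # both views compute the same outputs
```

Note that building K involves repeated powers of A_bar; doing this naively is exactly the runtime and numerical cost that S4's structured transition matrix is designed to avoid.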

What changes in Mamba relative to S4?

Mamba keeps the structured SSM backbone but introduces selectivity: key SSM parameters become functions of the current input token. This restores token-conditioned behavior (content-based reasoning) that plain SSMs often lack. The transcript also emphasizes hardware-aware parallel algorithms so the selective SSM can run efficiently, achieving fast inference and linear scaling with sequence length.
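A naive, illustrative sketch of a selective scan. The weight names, shapes, and the scalar input channel below are simplifications invented for this sketch, not Mamba's actual parameterization; the point is only that the step size and the B/C projections are recomputed from each token, so the model can decide per token how strongly to write into or read from the state.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4, 12                            # state size, sequence length
u = rng.standard_normal(L)              # a single scalar input channel, for simplicity

# Hypothetical projections making the SSM parameters input-dependent
w_delta = 0.5
W_B = rng.standard_normal(N)
W_C = rng.standard_normal(N)
A = -np.arange(1, N + 1, dtype=float)   # fixed diagonal transition (illustrative)

x = np.zeros(N)
y = np.empty(L)
for k in range(L):
    # Selectivity: step size, B, and C are all functions of the current token u[k]
    delta = np.log1p(np.exp(w_delta * u[k]))       # softplus keeps the step positive
    B_k = W_B * u[k]
    C_k = W_C * u[k]
    x = np.exp(delta * A) * x + delta * B_k * u[k]  # discretized per-step update
    y[k] = C_k @ x
```

Once the parameters vary per step, the fixed convolution kernel view no longer applies, which is why Mamba relies on a hardware-aware parallel scan on GPU rather than the convolutional training path used by S4.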

What performance claim is highlighted for Mamba at moderate scale?

The transcript cites that a 2.8B-parameter Mamba model outperforms Transformers of the same size and matches Transformers trained with about twice the parameters, both in pre-training and downstream evaluation. It also notes that Mamba’s reported results extend across modalities like language, audio, and genomics, with strong performance on long-context benchmarks.

Review Questions

  1. How do the three representations of an SSM (continuous-time, recurrent/discretized, and convolutional) relate to training vs inference efficiency?
  2. What specific role does the HiPPO/Legendre-based structure of the transition matrix play in S4’s ability to handle long-range dependencies?
  3. Why does making SSM parameters input-dependent (selectivity) help with content-based reasoning in discrete modalities like language?

Key Points

  1. Mamba targets the gap between efficient long-context models and Transformer-like content-based reasoning by making SSM parameters depend on the current input token.

  2. Structured state space models can be interpreted as RNNs (for stepwise inference) and as convolutions (for parallel training), after discretization/unrolling.

  3. S4’s practical success comes from hard-coding a special structured transition matrix (HiPPO/Legendre-based) to stabilize long-range behavior and avoid expensive naive convolution computation.

  4. Naively computing the convolution filter for an SSM is costly because it involves repeated powers of the transition matrix, creating runtime and numerical issues.

  5. Mamba builds on S4 but adds “selective” parameterization plus hardware-aware parallel algorithms to keep linear-time scaling while improving language performance.

  6. Reported results emphasize that a 2.8B-parameter Mamba model can beat same-size Transformers and match roughly 2×-size Transformers using strong Transformer training recipes.

Highlights

  • Mamba’s selectivity is framed as the missing mechanism for content-based reasoning in efficient SSMs—behavior changes based on which token appears.
  • S4 turns an SSM into something trainable at scale by using a HiPPO/Legendre-structured transition matrix instead of random A initialization.
  • An SSM can be trained via its convolutional form and run via its recurrent form, aligning parallel training with fast autoregressive inference.
  • The transcript highlights a performance headline: 2.8B Mamba beating same-size Transformers and matching ~2×-size Transformers on pre-training and downstream tasks.

Topics

  • Selective State Spaces
  • Structured State Space Models
  • S4 and HiPPO
  • Long-Range Sequence Modeling
  • Linear-Time Inference

Mentioned

  • Albert Gu
  • Tri Dao
  • Chris Ré
  • Sasha Rush
  • Siddharth Karamcheti
  • Yann LeCun
  • SSM
  • LLM
  • RNN
  • GPU
  • MLP
  • NLP
  • S4
  • HiPPO
  • FFT
  • CUDA