
Neural manifolds - The Geometry of Behaviour

Artem Kirsanov · 5 min read

Based on Artem Kirsanov's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Population firing-rate vectors from multi-electrode recordings turn spike trains into trajectories through an n-dimensional activity space.

Briefing

Neural activity across populations of neurons doesn’t wander through a high-dimensional space at random. Instead, the firing-rate patterns trace out a lower-dimensional “neural manifold” whose intrinsic geometry—especially its dimension and topology—can reveal what latent variables a brain circuit is encoding. That matters because it offers a concrete way to move from raw spike trains to testable claims about the variables behind behavior, without needing to guess the encoding scheme from first principles.

The path starts with how population activity is represented. With multi-electrode arrays, researchers can record from up to a few hundred neurons at once, marking when each neuron spikes. By binning time into short intervals and counting spikes per bin (often smoothing the result), each moment in time becomes an n-dimensional vector: one coordinate per neuron’s instantaneous firing rate. As an animal forages or performs a task, that vector changes, producing a trajectory through this n-dimensional “empirical neural activity space.” But physiological constraints—such as limits on maximum firing rates—already rule out arbitrary points, and the fact that neurons interact means the trajectory is confined to a subspace rather than filling the whole ambient space.
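As a minimal sketch of this pipeline, with hypothetical spike times and made-up bin and kernel settings, the conversion from spike trains to a population trajectory can look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike times (in seconds) for n = 3 neurons over a 10 s recording.
duration, n_neurons = 10.0, 3
spike_times = [np.sort(rng.uniform(0, duration, size=rng.integers(20, 80)))
               for _ in range(n_neurons)]

# Bin time into 100 ms intervals and convert counts to rates (spikes/s).
bin_width = 0.1
n_bins = int(round(duration / bin_width))
edges = np.linspace(0.0, duration, n_bins + 1)
rates = np.stack([np.histogram(st, bins=edges)[0] / bin_width
                  for st in spike_times])            # shape (n_neurons, n_bins)

# Light Gaussian smoothing removes the discrete jumps between bins.
kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
kernel /= kernel.sum()
smoothed = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                               1, rates)

# Each row is one n-dimensional population vector; the rows in sequence
# form the trajectory through activity space.
trajectory = smoothed.T                              # shape (n_bins, n_neurons)
print(trajectory.shape)
```

With real recordings, n would be tens to hundreds of neurons rather than three, but the shape of the result is the same: one firing-rate vector per time bin.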

The key mathematical idea is that the relevant structure can be captured by manifold geometry. Manifolds are shapes that look locally like lower-dimensional Euclidean space, even if they sit inside a higher-dimensional ambient space. Algebraic topology formalizes which global properties survive continuous deformation: two shapes can look identical locally yet fail to be homeomorphic—deformable into each other without tearing or gluing—because they differ globally, for instance in whether a surface has a hole. The discussion emphasizes two invariants that are especially useful for neural data: intrinsic dimension (the number of independent degrees of freedom needed to specify a point on the manifold) and topology (how many “holes” the manifold has).

Dimension helps translate geometry into coding capacity. If a neural population’s activity lives on a two-dimensional manifold, it cannot represent four independent variables (for example, x and y position plus two independent velocity components), because that would require at least four degrees of freedom. Topology adds a second constraint: holes correspond to variables with circular or otherwise constrained structure. A torus and a sphere look locally identical from the surface, yet differ globally because the torus contains a hole that prevents certain deformations.

A concrete example comes from head-direction cells in the mouse. Recording from neurons in the thalamus that participate in the head-direction system, researchers obtain a cloud of points in high-dimensional firing-rate space and reconstruct its shape under the hypothesis that the circuit encodes head angle. If the circuit truly represents only facing direction, the manifold should be one-dimensional (one variable) and have a hole consistent with angular wraparound. The analysis finds activity localized to a one-dimensional ring. Moreover, equal arc lengths along the ring correspond to equal differences in facing angle, yielding a direct one-to-one mapping between manifold position and behavioral direction.
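A toy version of this analysis can be sketched with simulated head-direction cells (the von Mises tuning curves and all parameters here are illustrative assumptions, not the recorded data): projecting the population activity onto its top two principal components recovers a ring of nearly constant radius, and the angle around the ring tracks head direction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated head-direction population: n neurons with von Mises tuning
# curves whose preferred directions tile the circle.
n_neurons, n_samples, kappa = 50, 720, 4.0
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)
theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)  # true head angle

rates = np.exp(kappa * np.cos(theta[:, None] - preferred[None, :]))
rates += 0.05 * rng.standard_normal(rates.shape)     # measurement noise

# Project the 50-D point cloud onto its top two principal components.
centered = rates - rates.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T                           # shape (n_samples, 2)

# The projection traces a ring of nearly constant radius; the angle
# around the ring recovers head direction up to rotation/reflection.
radii = np.linalg.norm(proj, axis=1)
decoded = np.arctan2(proj[:, 1], proj[:, 0])
print(radii.std() / radii.mean())
```

Because the tuning curves are translation-invariant on the circle, the top principal components are the first Fourier harmonic pair, which is why the projection comes out as a ring.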

The broader takeaway is that topological data analysis and computational neuroscience are beginning to converge. By treating neural population activity as geometric data—recovering intrinsic dimension and topology—researchers can infer what latent variables are being represented in circuits involved in navigation, movement, spatial mapping, and potentially higher-level abstractions. The approach reframes “what the brain encodes” as a question about the geometry of population dynamics.

Cornell Notes

Neural population firing patterns form trajectories in a high-dimensional space, but those trajectories typically lie on lower-dimensional manifolds. The manifold’s intrinsic dimension indicates how many independent degrees of freedom the circuit can represent, while its topology (notably the number of holes) constrains the structure of encoded variables. Using multi-electrode recordings, researchers convert spike trains into time-varying vectors of firing rates and reconstruct the geometry of the resulting point cloud. In head-direction cells, the reconstructed manifold is a one-dimensional ring with a hole, matching the circular nature of angular head orientation. This geometric lens links population activity to latent variables in a way that can be tested directly from data.

How does spike-train data become a point cloud in high-dimensional space?

With multi-electrode arrays, researchers record spikes from n neurons simultaneously. Time is partitioned into short bins, and for each bin the spike count is converted into an instantaneous firing-rate estimate (often smoothed to remove discrete jumps). At each time point, the population activity is represented as an n-dimensional vector—one coordinate per neuron’s firing rate—so the evolving activity traces a trajectory through n-dimensional space. Over time, the collection of these vectors forms a cloud/trajectory of points in that high-dimensional activity space.

Why can’t the neural trajectory fill the entire n-dimensional activity space?

Even ignoring interactions, neurons have physiological limits that cap firing rates (for example, rates above roughly 500 spikes per second are impossible). More importantly, neurons influence each other through network connectivity, so their firing rates cannot vary independently. Those constraints confine the trajectory to a subspace—often well described as a lower-dimensional manifold embedded in the larger ambient space.
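A small simulation illustrates the point (the two shared latent variables and the noise level are made up for illustration): when many neurons are driven by a few common signals, PCA concentrates almost all the variance in correspondingly few components.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up example: 100 neurons whose firing rates are all driven by just
# 2 shared latent variables plus small private noise -- a stand-in for
# the constraints that network connectivity imposes.
n_neurons, n_time, n_latent = 100, 2000, 2
latents = rng.standard_normal((n_time, n_latent))
mixing = rng.standard_normal((n_latent, n_neurons))
rates = latents @ mixing + 0.1 * rng.standard_normal((n_time, n_neurons))

# PCA: variance concentrates in as many components as there are latents,
# so the trajectory effectively lives in a 2-D subspace of the 100-D
# ambient activity space.
centered = rates - rates.mean(axis=0)
var = np.linalg.svd(centered, compute_uv=False) ** 2
explained = var / var.sum()
print(explained[:4].round(3))
```

Linear PCA only finds flat subspaces; curved or ring-shaped manifolds need the topological tools discussed below, but the variance concentration already shows that the trajectory does not fill the ambient space.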

What makes a manifold useful for interpreting neural population activity?

A manifold is locally Euclidean: near any point, it resembles a lower-dimensional flat space even if it curves globally. That matches the idea that neural activity can vary along a small number of independent degrees of freedom while still producing complex global structure. By reconstructing the manifold from data, researchers can infer intrinsic dimension (degrees of freedom) and topology (global features like holes) that persist under continuous deformation.
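A sketch of the “locally Euclidean” idea, using a synthetic circle embedded in 3-D rather than real neural data: local PCA around any point finds essentially one direction of variation, even though the ambient space is three-dimensional.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: a circle embedded in 3-D. Globally it curves and closes
# on itself, but locally it looks like a 1-D line -- the "locally
# Euclidean" property that defines a manifold.
t = rng.uniform(0, 2 * np.pi, 2000)
points = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
points += 0.01 * rng.standard_normal(points.shape)   # small ambient noise

# Local PCA around one point: among the nearest neighbours, essentially
# all the variance lies along a single direction (the tangent line).
center = points[0]
dists = np.linalg.norm(points - center, axis=1)
neighbours = points[np.argsort(dists)[:50]]
local = neighbours - neighbours.mean(axis=0)
var = np.linalg.svd(local, compute_uv=False) ** 2
local_explained = var / var.sum()
print(local_explained.round(3))
```

The dominant local component is the manifold's intrinsic dimension (one, for a circle); repeating this at many points is one simple way to estimate intrinsic dimension from a data cloud.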

How do intrinsic dimension and topology translate into statements about encoded variables?

Intrinsic dimension constrains the number of independent variables the circuit can represent. If activity is confined to a two-dimensional manifold, it cannot encode four independent parameters, because that would require at least four degrees of freedom. Topology constrains variable structure: a hole corresponds to circular or otherwise constrained variables where wraparound prevents certain deformations. For example, angular variables naturally produce ring-like manifolds with a hole.
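A quick numeric illustration of why wraparound matters (the specific angles are arbitrary): two nearby headings can look maximally far apart in a single linear coordinate, while a ring embedding keeps them close.

```python
import numpy as np

# Head direction wraps around: 1 degree and 359 degrees are 2 degrees
# apart on the circle, not 358. A single linear coordinate misses this,
# while a ring embedding (cos/sin) preserves it.
a, b = np.deg2rad(1.0), np.deg2rad(359.0)

linear_gap = abs(a - b)                              # looks far apart (~6.25 rad)
ring_a = np.array([np.cos(a), np.sin(a)])
ring_b = np.array([np.cos(b), np.sin(b)])
chord = np.linalg.norm(ring_a - ring_b)              # small: close on the ring

# True circular distance for comparison.
circular = min(abs(a - b), 2 * np.pi - abs(a - b))
print(round(linear_gap, 3), round(chord, 4), round(circular, 4))
```

This is exactly the structure a manifold with a hole can represent and a line segment cannot.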

Why does head-direction coding produce a one-dimensional ring?

Head direction is an angle, which is circular: rotating by 360° returns to the same state. Under the hypothesis that the recorded neurons encode only facing direction, the manifold should have one degree of freedom (intrinsic dimension 1) and a hole reflecting angular wraparound. The reconstructed activity manifold from head-direction cells is found to be a one-dimensional ring, and equal distances along the ring correspond to equal differences in facing angle.

Review Questions

  1. If a neural population’s activity is confined to a manifold of intrinsic dimension k, what kinds of sets of independent variables could it represent, and what would be impossible?
  2. Explain how topology (holes) can distinguish a sphere from a torus even when they look locally identical.
  3. In the head-direction example, what specific geometric features of the reconstructed manifold support the claim that the circuit encodes an angular variable?

Key Points

  1. Population firing-rate vectors from multi-electrode recordings turn spike trains into trajectories through an n-dimensional activity space.
  2. Physiological firing-rate limits and network interactions confine neural trajectories to lower-dimensional structures rather than arbitrary points.
  3. Manifolds provide a framework for describing these structures as locally Euclidean shapes embedded in higher-dimensional spaces.
  4. Intrinsic dimension reflects the number of independent degrees of freedom a circuit can represent, constraining which variables can be encoded independently.
  5. Topology captures global features that survive continuous deformation; holes correspond to constrained or circular variable structure.
  6. Head-direction cells yield a one-dimensional ring manifold, matching the circular nature of angular head orientation and enabling a direct mapping from manifold position to facing direction.

Highlights

Neural population activity often traces a low-dimensional manifold inside a much higher-dimensional firing-rate space, turning “chaotic spikes” into geometric structure.
Intrinsic dimension can be used as a proxy for the number of independent variables a circuit can encode.
Topology matters because holes distinguish global structure even when local geometry looks the same—exactly what angular variables require.
Head-direction coding produces a one-dimensional ring: equal arc lengths correspond to equal angular differences, linking geometry to behavior.

Topics

  • Neural Manifolds
  • Topological Data Analysis
  • Population Coding
  • Intrinsic Dimension
  • Head Direction Cells