
Brain Criticality - Optimizing Neural Computations

Artem Kirsanov · 6 min read

Based on Artem Kirsanov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Second-order phase transitions produce a continuous change in an order parameter while creating a critical point with distinctive long-range correlations and scale-free behavior.

Briefing

The core claim behind “brain criticality” is that neural networks operate near a second-order phase transition—an edge-of-instability regime where activity becomes scale-free and correlations spread unusually far. That matters because the same critical tuning is predicted to maximize how effectively the brain transmits information, balancing responsiveness to inputs against runaway excitation.

The explanation starts with phase transitions in physics. In everyday first-order transitions like boiling, a system jumps between phases with a discontinuous change in an order parameter. Second-order transitions behave differently: the order parameter changes continuously, yet the system develops a special “critical point” where intermediate behavior emerges. At that point, latent heat vanishes (in the water example, the liquid–gas distinction blurs, and beyond the critical point only a supercritical fluid remains), and the system shows properties that don’t depend on any single characteristic scale.

To make the idea concrete, the transcript uses the Ising model, a lattice of spins that can be +1 or −1. Neighboring spins prefer to align (captured by an interaction energy proportional to the negative product of spins), while temperature adds stochastic fluctuations via the Boltzmann distribution. At low temperature, alignment dominates and magnetization is high; at high temperature, randomness dominates and magnetization collapses. Near the critical temperature, neither tendency wins outright. The result is a peak in dynamic correlation: spins fluctuate in coordinated ways, and the correlation length grows sharply, meaning distant parts of the system move together more than they do away from criticality.
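The Ising picture above can be sketched in a few lines of Python. This is a minimal illustration, not the transcript’s code: the lattice size, temperature, and sweep count are arbitrary choices, with temperature in units where the coupling J and Boltzmann constant are 1.

```python
import math
import random

def ising_magnetization(L=16, T=2.27, sweeps=200, seed=0):
    """Minimal 2D Ising model with single-spin Metropolis updates.

    In these units the square-lattice critical temperature is
    T_c ~ 2.269. Returns |magnetization| per spin.
    """
    rng = random.Random(seed)
    # Random initial configuration of +1 / -1 spins.
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest neighbours (periodic boundaries).
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        # Energy cost of flipping spin (i, j): dE = 2 * s_ij * nb.
        dE = 2 * spins[i][j] * nb
        # Metropolis rule: always accept downhill flips, otherwise
        # accept with Boltzmann probability exp(-dE / T).
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1
    m = abs(sum(sum(row) for row in spins)) / (L * L)
    return m

# Low T: alignment dominates (m near 1); high T: randomness dominates
# (m near 0); near T_c, neither wins and fluctuations are coordinated.
```

Running the function at temperatures well below and well above T_c reproduces the ordered and disordered regimes described above.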

Criticality also produces scale-free structure. Snapshots of the Ising lattice look statistically similar across zoom levels, and cluster-size statistics follow a power law rather than an exponential. In the transcript’s example, the probability of observing a cluster of size x versus 2x changes by a constant factor independent of x, which is the hallmark of scale invariance. In log-log coordinates, power laws appear as straight lines, and many critical observables share this kind of scaling.
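The constant-ratio hallmark is easy to verify numerically. In this sketch, the constants A, gamma, and lam are arbitrary illustrative choices:

```python
import math

# For a power law f(x) = A * x**(-gamma), the ratio f(2x) / f(x)
# equals 2**(-gamma) at every scale x. For an exponential, the same
# ratio keeps shrinking as x grows, so it has a characteristic scale.
def power_law(x, A=1.0, gamma=1.5):
    return A * x ** (-gamma)

def exponential(x, A=1.0, lam=0.1):
    return A * math.exp(-lam * x)

pl_ratios = [power_law(2 * x) / power_law(x) for x in (1, 10, 100)]
exp_ratios = [exponential(2 * x) / exponential(x) for x in (1, 10, 100)]

print(pl_ratios)   # same value (2**-1.5 ~ 0.3536) at every scale
print(exp_ratios)  # shrinks with x: not scale invariant
```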

The neuroscience bridge comes from experiments on “neuronal avalanches.” In 2003, John Beggs and Dietmar Plenz reported that spontaneous activity in rat somatosensory cortex neuron cultures—recorded with an 8 by 8 electrode grid—appears in cascades separated by quiescent periods. When avalanches are defined by duration and size (number of electrodes crossing a threshold, sometimes including amplitude), their distributions follow power laws, implying no characteristic scale and suggesting proximity to a second-order phase transition. Similar power-law avalanche behavior has since been reported across species and recording modalities, from single neurons to EEG.

To map physics concepts onto neural dynamics, the transcript reframes the system as a branching process: each active neuron probabilistically activates downstream neurons, plus a small chance of spontaneous activation. A single control parameter, the branching ratio σ (the sum of outgoing transmission probabilities, equal to the average number of descendants per active ancestor), determines the regime. For σ < 1, activity dies out; for σ > 1, activity amplifies and can resemble epileptiform runaway; at σ = 1, activity neither decays nor explodes on average, yet avalanche sizes and durations remain power-law distributed. In real brains, σ is shaped by the balance of excitation and inhibition: pharmacologically blocking inhibition shifts dynamics toward supercritical behavior and disrupts the normal power-law pattern, while blocking excitation pushes dynamics toward the subcritical regime.
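The three branching-ratio regimes can be illustrated with a toy simulation. This is a sketch, not the transcript’s model: the fan-out `n_children`, the sample counts, and the size cap are arbitrary choices.

```python
import random

def avalanche_size(sigma, n_children=10, max_size=100_000, rng=None):
    """One avalanche of a branching process: each active unit tries to
    activate n_children descendants, each with probability
    sigma / n_children, so sigma is the mean number of descendants
    per active ancestor (the branching ratio)."""
    rng = rng or random.Random()
    p = sigma / n_children
    active, size = 1, 1
    while active and size < max_size:
        active = sum(1 for _ in range(active * n_children)
                     if rng.random() < p)
        size += active
    return size

rng = random.Random(1)
subcritical = [avalanche_size(0.5, rng=rng) for _ in range(2000)]
critical = [avalanche_size(1.0, rng=rng) for _ in range(2000)]

# sigma = 0.5: mean total size is 1 / (1 - sigma) = 2, tail decays fast.
# sigma = 1.0: offspring exactly replace ancestors on average, yet sizes
# are power-law distributed, so occasional huge avalanches appear.
print(max(subcritical), max(critical))
```

Histogramming `critical` on log-log axes shows the straight-line signature discussed above, while `subcritical` falls off exponentially.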

Finally, the transcript argues that critical tuning improves information transmission. In a simplified “guessing game,” outputs are uninformative when activity dies out (subcritical) and uninformative when activity saturates (supercritical). At σ = 1, output activity most reliably reflects input activity, producing a peak in an information-transmission measure analogous to the correlation peak at the Ising critical temperature. The overall picture is that hovering near criticality may be an evolved strategy to maximize computational capability while avoiding both silence and runaway excitation.

Cornell Notes

Neural criticality proposes that brain networks operate near a second-order phase transition, where activity becomes scale-free and correlations spread over long distances. Physics intuition comes from the Ising model: temperature drives a continuous transition from ordered alignment to disordered randomness, with a critical point where dynamic correlation peaks and correlation length grows. In neuroscience, “neuronal avalanches” show power-law distributions in size and duration, consistent with scale invariance and proximity to a critical regime. A branching model captures the same idea using a control parameter—the branching ratio σ—where σ < 1 leads to dying activity, σ > 1 leads to runaway amplification, and σ = 1 yields power-law avalanches and maximal information transmission. The balance of excitation and inhibition is presented as the mechanism that tunes σ toward this critical point.

What distinguishes a second-order (continuous) phase transition from a first-order (discontinuous) one, and why does the “critical point” matter?

First-order transitions show a discontinuous jump in an order parameter and often involve latent energy (e.g., boiling at 100°C where temperature stays constant while energy breaks molecular bonds). Second-order transitions change the order parameter continuously, but the system develops a special critical point where intermediate behavior emerges. At that point, boundaries between phases blur and new properties appear—most notably long-range correlations and scale-free structure.

How does the Ising model generate long-distance communication from only local interactions?

The model uses nearest-neighbor coupling that encourages alignment, while temperature introduces stochastic spin flips. At low temperature, spins are frozen near their aligned state, so fluctuations (and hence dynamic correlations) are small. At high temperature, spins flip almost independently, so correlations average out. At the critical temperature, thermal fluctuations and neighbor alignment balance, producing coordinated fluctuations. Correlation length peaks: the dynamic correlation between spins decays much more slowly with distance, extending far across the lattice.

Why are power laws treated as evidence of scale invariance rather than just another statistical pattern?

Scale invariance means cluster statistics look similar at different zoom levels. In the transcript’s cluster-size thought experiment, the probability ratio P(2x)/P(x) stays constant regardless of x. That property leads to a power-law form f(x) = A·x^(−γ). By contrast, for an exponential f(x) = A·e^(−λx), the ratio f(2x)/f(x) = e^(−λx) shrinks as x grows, so the statistics do change with scale. Log-log plots make power laws appear as straight lines, with slope −γ.
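The constant-ratio property pins down the power-law form. A short derivation, using the notation above:

```latex
% Constant ratio across doublings forces a power law:
\frac{P(2x)}{P(x)} = c
  \;\Rightarrow\; P(2^{n} x) = c^{n}\, P(x)
% substitute s = 2^n, so n = \log_2 s:
  \;\Rightarrow\; P(s x) = s^{\log_2 c}\, P(x)
% i.e. a power law with exponent \gamma = -\log_2 c:
P(x) = A\, x^{-\gamma}
% An exponential instead gives P(2x)/P(x) = e^{-\lambda x},
% which depends on x and is therefore not scale invariant.
```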

What experimental signature links neural activity to criticality?

Beggs and Plenz (2003) reported “neuronal avalanches” in rat somatosensory cortex neuron cultures recorded on an 8 by 8 electrode grid. Large threshold-crossing events appear as cascades separated by quiescence. When avalanches are characterized by duration and size (number of electrodes active in a cascade, sometimes including amplitude), their distributions follow power laws—appearing as straight lines on log-log plots—suggesting the network lacks a characteristic scale and behaves like a system near a second-order transition.
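The avalanche definition is easy to state in code. This is a minimal sketch of the quiescence-separated-cascade idea described above, not a reimplementation of the published analysis; the binning convention is an assumption.

```python
def avalanches(counts):
    """Split per-time-bin event counts (e.g. number of electrodes
    crossing threshold in each bin) into avalanches separated by
    empty bins. Returns one (size, duration) pair per avalanche."""
    out, size, dur = [], 0, 0
    for c in counts:
        if c > 0:
            size += c          # size: total events in the cascade
            dur += 1           # duration: number of active bins
        elif dur:
            out.append((size, dur))
            size = dur = 0
    if dur:                    # flush a cascade ending at the edge
        out.append((size, dur))
    return out

# Toy raster: two cascades separated by quiet bins.
print(avalanches([0, 2, 3, 0, 0, 1, 1, 4, 0]))  # → [(5, 2), (6, 3)]
```

Fitting the resulting size and duration lists on log-log axes is what yields the straight-line power laws described above.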

How does the branching model translate criticality into a single tunable parameter?

The branching model treats activity as a stochastic cascade: an active neuron transmits activation to downstream neurons with certain probabilities, plus a small spontaneous activation rate. The branching ratio σ is defined as the sum of outgoing transmission probabilities for each neuron, which equals the average number of descendants activated by one active ancestor. σ controls the phase: σ < 1 yields subcritical decay (activity dies out), σ > 1 yields supercritical amplification (activity grows), and σ = 1 yields critical dynamics where avalanches persist with power-law size and duration.

Why would operating near σ = 1 improve information processing?

In a simplified “guessing game,” the system must infer the number of active input units from output activity. Subcritical dynamics erase information because activity dies before reaching the output. Supercritical dynamics also reduce information because output saturates—weak inputs still produce strong activation. At σ = 1, output activity most often resembles input activity in terms of active-unit counts, so uncertainty about the input is reduced. The transcript links this to a sharp peak in an information-transmission measure at the critical branching ratio.
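The guessing game can be made quantitative with a toy simulation. This is a sketch under assumed parameters: the layer width, depth, and trial counts are illustrative, and the mutual-information figure is a plug-in estimate from empirical counts rather than the transcript’s measure.

```python
import math
import random
from collections import Counter

def propagate(k_in, sigma, n=20, layers=6, rng=random):
    """Push k_in active units through `layers` layers of n units each;
    every connection transmits with probability sigma / n, so sigma is
    the branching ratio. Units saturate: active or not."""
    p = sigma / n
    active = k_in
    for _ in range(layers):
        q = 1 - (1 - p) ** active      # P(a downstream unit turns on)
        active = sum(1 for _ in range(n) if rng.random() < q)
    return active

def mutual_info_bits(sigma, n=20, trials=4000):
    """Plug-in estimate of I(input count; output count) in bits."""
    rng = random.Random(0)
    joint = Counter()
    for _ in range(trials):
        k = rng.randrange(1, n + 1)    # input count, uniform on 1..n
        joint[(k, propagate(k, sigma, n=n, rng=rng))] += 1
    px, py = Counter(), Counter()
    for (x, y), c in joint.items():
        px[x] += c
        py[y] += c
    return sum((c / trials) * math.log2(c * trials / (px[x] * py[y]))
               for (x, y), c in joint.items())

# Transmission is poor when activity dies out (sigma << 1) or
# saturates (sigma >> 1), and peaks near sigma = 1.
for s in (0.2, 1.0, 3.0):
    print(s, round(mutual_info_bits(s), 2))
```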

Review Questions

  1. How do correlation length and dynamic correlation behave as temperature (or σ) approaches the critical point, and what mechanism produces that peak?
  2. What specific features of neuronal avalanches (as defined in the transcript) support the claim of scale invariance?
  3. In the branching model, what changes when σ moves from below 1 to above 1, and how does that relate to excitation–inhibition balance?

Key Points

  1. Second-order phase transitions produce a continuous change in an order parameter while creating a critical point with distinctive long-range correlations and scale-free behavior.

  2. In the Ising model, temperature drives a shift from ordered alignment to disordered randomness, with dynamic correlation peaking at the critical temperature and correlation length growing sharply.

  3. Power-law cluster-size distributions signal scale invariance: probability ratios like P(2x)/P(x) remain constant across scales, unlike exponential distributions.

  4. Neuronal avalanches—cascades of threshold-crossing activity separated by quiescence—show power-law distributions in size and duration, consistent with brain networks operating near a second-order transition.

  5. A branching model captures neural criticality using the branching ratio σ (average descendants per active neuron): σ < 1 leads to decay, σ > 1 leads to runaway amplification, and σ = 1 yields critical avalanches.

  6. Excitation–inhibition balance is presented as the biological mechanism that tunes σ; blocking inhibition shifts dynamics toward supercritical behavior, while blocking excitation shifts toward subcritical behavior.

  7. At criticality, information transmission is predicted to be optimized because outputs neither vanish (subcritical) nor saturate (supercritical), making output activity more informative about inputs.

Highlights

At the critical point, dynamic correlation peaks because coordinated fluctuations emerge when neighbor interactions balance thermal stochasticity.
Cluster sizes at criticality follow power laws, reflecting scale invariance: zooming in or out preserves the statistical structure.
Neuronal avalanches in rat cortical neuron cultures exhibit power-law distributions in avalanche size and duration, echoing the scale-free signatures of second-order transitions.
In the branching model, σ = 1 is the tipping point where activity neither dies out nor explodes, yet avalanche statistics remain power-law distributed.
Information transmission is maximized at criticality because output activity is informative about input activity without saturating or disappearing.

Topics

  • Criticality
  • Second-Order Phase Transitions
  • Ising Model
  • Neuronal Avalanches
  • Branching Processes

Mentioned

  • John Beggs
  • Dietmar Plenz