
Logarithmic nature of the brain 💡

Artem Kirsanov · 5 min read

Based on Artem Kirsanov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Firing rates are described as log-normal rather than Gaussian, implying a skewed distribution with a heavy right tail.

Briefing

Neuronal firing rates don’t cluster like a classic bell curve; they follow a log-normal distribution—an outcome that falls out naturally when brain-relevant quantities arise through multiplicative dynamics rather than additive ones. That distinction matters because log-normality produces a characteristic skew: most neurons fire at modest rates, while a minority fire much faster, forming a heavy-tailed “fast lane” that can dominate information flow and computation.

The argument starts by contrasting the normal distribution’s usual origin—sums of many independent random influences—with what happens when those influences multiply. If a neuron’s firing rate were built from many additive, hidden factors, the central limit theorem would drive the result toward a symmetric Gaussian shape. But firing rates are better modeled by products: take many random variables that each scale a quantity up or down, then multiply them. Taking logarithms turns products into sums, so the log of the firing rate becomes approximately normal. When that normal distribution is mapped back onto a linear firing-rate axis, the result is log-normal: a long right tail and strong asymmetry.
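This product-to-sum argument is easy to check numerically. The sketch below is illustrative (the factor range and counts are arbitrary assumptions, not values from the transcript): it multiplies many independent scaling factors and confirms that the result is right-skewed on the linear scale but roughly symmetric on the log scale.

```python
import math
import random
import statistics

random.seed(0)

def multiplicative_sample(n_factors=100):
    """Product of many independent factors that each scale the quantity up or down."""
    x = 1.0
    for _ in range(n_factors):
        x *= random.uniform(0.8, 1.25)
    return x

samples = [multiplicative_sample() for _ in range(10_000)]
logs = [math.log(s) for s in samples]

# Heavy right tail on the linear scale: the mean sits well above the median.
print(statistics.mean(samples) / statistics.median(samples))
# Roughly symmetric on the log scale: mean and median nearly coincide.
print(abs(statistics.mean(logs) - statistics.median(logs)))
```

The mean/median ratio on the linear scale is the signature of the skew; on the log axis the same data look like an ordinary bell curve.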

That same skew shows up across multiple neural scales. Roughly speaking, most neurons sit in a slow-spiking regime—around one action potential per second—while a smaller fraction spikes rapidly, with rates reaching about 10 Hz. Synaptic weights follow the same pattern: most synapses are weak, but a subset of strong synapses forms reliable “highways” for signal transmission. High firing-rate neurons tend to connect to each other more strongly, and they also tend to grow thicker axons, improving conduction speed through better insulation. Together, these features create a fast-spiking minority network—often described as a “rich club”—that carries a disproportionate share of fast, strong, long-range communication, while the larger majority network with weaker synapses handles the rest.

The functional payoff is a division of labor that isn’t a strict binary split. The continuum implied by log-normality supports both specialist neurons (tuned to specific distinctions) and generalizers (active across many contexts), with no sharp boundary between them. Such graded organization is argued to be more energy-efficient and more robust to noise and failures than a rigid all-or-nothing architecture.

Finally, the transcript points to a plausible mechanism for multiplicative growth: synaptic spine sizes. Spine enlargement at the receiving end of synapses is plastic and changes over time. Reported findings suggest that the rate of change in spine size is proportional to its current size—an ingredient that naturally generates multiplicative dynamics. While it remains unclear how log-normality emerges everywhere (from firing rates to spatial neuron distributions to perception), the recurring theme is that distribution shape can act like a fingerprint of underlying computation: multiplicative processes tend to produce log-normal skew, and that skew appears to be built into how brains route and process information.

Cornell Notes

Firing rates in the brain are better described by a log-normal distribution than by a Gaussian. The key math move is that products become sums under a logarithm: if many independent factors multiply to determine a quantity, then the log of that quantity tends toward a normal distribution, producing a skewed log-normal shape on the original scale. This skew implies a heavy right tail—most neurons fire slowly, while a minority fire much faster (on the order of ~10 Hz). Similar log-normal patterns show up in synaptic weights and axonal properties, supporting a “rich club” of fast, strongly connected neurons that carries a large share of fast information flow. The transcript links multiplicative dynamics to synaptic spine growth, where spine-size change scales with current size.

Why does a log-normal distribution emerge from multiplicative randomness, while a Gaussian emerges from additive randomness?

Additive randomness: when many independent hidden variables are summed, the central limit theorem drives the result toward a symmetric bell curve (Gaussian). Multiplicative randomness: when many random factors multiply, taking the logarithm converts the product into a sum of logs. Since sums of independent terms tend toward a normal distribution, the log(quantity) becomes approximately normal. Mapping that normal distribution back onto the original (linear) scale yields a log-normal distribution—skewed with a heavy tail.
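The contrast can be made concrete with a small simulation (a sketch with arbitrary factor ranges, not parameters from the transcript): summing many independent influences yields a nearly symmetric distribution, while multiplying the same number of factors yields a strongly right-skewed one.

```python
import math
import random
import statistics

random.seed(1)
N, K = 10_000, 200

# Additive: sum of many independent influences -> roughly symmetric (CLT).
additive = [sum(random.uniform(-1, 1) for _ in range(K)) for _ in range(N)]
# Multiplicative: product of many independent factors -> heavy right tail.
multiplicative = [math.prod(random.uniform(0.5, 1.6) for _ in range(K)) for _ in range(N)]

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

print(skewness(additive))        # close to zero
print(skewness(multiplicative))  # large and positive
```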

How does log-normality change the interpretation of “typical” neuronal activity?

With a Gaussian, most values cluster tightly around the mean. With a log-normal distribution, most neurons still sit in a slow-spiking majority (around 1 action potential per second), but the heavy tail makes high firing rates comparatively common among a minority. The transcript gives an example: roughly 10% of neurons can spike rapidly, reaching up to about 10 Hz—far more than a symmetric bell curve would predict.
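Under a log-normal model such tail fractions have a closed form. The parameters below are illustrative assumptions (median fixed at 1 Hz, spread picked by hand to give a roughly 10% fast minority), not values fitted from the transcript.

```python
import math

def lognormal_tail(x, mu, sigma):
    """P(X > x) for a log-normal X: 1 - Phi((ln x - mu) / sigma)."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

mu = 0.0      # ln(median); median firing rate = e^0 = 1 Hz (assumption)
sigma = 1.26  # hand-picked spread (assumption)

print(lognormal_tail(5.0, mu, sigma))   # fraction of neurons firing above 5 Hz
print(lognormal_tail(10.0, mu, sigma))  # fraction firing above 10 Hz
```

A matched Gaussian centered at 1 Hz would assign essentially zero probability to rates this high; the log-normal tail keeps them rare but far from negligible.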

What neural structures are described as following log-normal patterns besides firing rates?

Synaptic weights and axonal conduction properties. Synaptic weights are said to be log-normally distributed: many weak synapses, plus a smaller set of strong synapses that act like reliable transmission highways. Neurons with higher firing rates are also described as forming stronger connections and growing thicker axons, which improve insulation and speed up communication—again aligning with a skewed, heavy-tailed distribution of capabilities.

What is the “rich club” idea, and how does it relate to a continuum rather than a binary split?

The transcript describes a fast-spiking minority network where strongly connected neurons and fast-conducting axons cluster together, carrying a large share of fast information flow. At the same time, the remaining majority network with weaker synapses handles the rest of the information processing. Importantly, the organization isn’t strictly two groups; it’s a continuum across the log-normal distribution, supporting graded roles that range from generalizers to specialists rather than a hard boundary.

How could synaptic spine growth produce multiplicative dynamics?

Spine sizes at synapses are plastic and change during learning. The transcript highlights a reported proportionality: the rate of change in a spine’s size is proportional to its current size. That kind of rule naturally generates multiplicative growth (small changes scale with the existing magnitude), which is exactly the setup that tends to produce log-normal outcomes when many such factors accumulate over time.
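A growth rule of this form is straightforward to simulate. In the sketch below (noise scale, step count, and population size are arbitrary assumptions), each “spine size” changes by an amount proportional to its current size, and the ensemble ends up log-normally skewed.

```python
import math
import random
import statistics

random.seed(2)

def grow_spine(steps=200, noise=0.05):
    """Each step's change is proportional to current size: dw = w * eta."""
    w = 1.0
    for _ in range(steps):
        w += w * random.gauss(0.0, noise)  # proportional (multiplicative) update
    return w

spines = [grow_spine() for _ in range(5_000)]
logs = [math.log(w) for w in spines]

# Right-skewed on the linear scale: mean exceeds median.
print(statistics.mean(spines) - statistics.median(spines))
# Approximately normal on the log scale: mean and median nearly coincide.
print(abs(statistics.mean(logs) - statistics.median(logs)))
```

Because each update multiplies the current size by (1 + noise), the log of the final size is a sum of many small independent terms—exactly the central-limit setup that yields log-normality.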

Why does the transcript treat distribution shape as evidence about underlying mechanisms?

Because distribution shape can act like a fingerprint. If a variable is produced by additive accumulation of many independent influences, a Gaussian is expected. If it’s produced by multiplicative scaling across many factors, a log-normal skew is expected. Observing log-normality across firing rates, synaptic weights, and other neural parameters suggests that multiplicative dynamics may be a common underlying driver.

Review Questions

  1. If the log of a positive random variable is approximately normal, what distribution does the variable itself follow, and why?
  2. How would you distinguish a model based on additive hidden factors from one based on multiplicative hidden factors using only the shape of the observed distribution?
  3. What functional advantages might a heavy-tailed (log-normal) distribution provide for neural computation compared with a symmetric Gaussian distribution?

Key Points

  1. Firing rates are described as log-normal rather than Gaussian, implying a skewed distribution with a heavy right tail.

  2. Log-normality follows when many independent factors multiply to determine a positive quantity; taking logs turns the product into a sum that approaches normality.

  3. A log-normal distribution supports a fast-spiking minority (roughly 10% reaching up to ~10 Hz) alongside a slow-spiking majority (around 1 Hz).

  4. Synaptic weights and axonal properties are also described as log-normally distributed, enabling a “rich club” of strongly connected, fast-conducting neurons.

  5. Generalizers and specialists emerge as a continuum across the log-normal spectrum rather than as two discrete neuron classes.

  6. A proposed mechanism for multiplicative dynamics is synaptic spine growth, where the rate of spine-size change scales with current spine size.

  7. Energy efficiency and robustness are argued to improve under the graded, non-binary organization implied by log-normal distributions.

Highlights

A Gaussian distribution naturally arises from sums of many independent random influences; log-normality arises when the underlying process is multiplicative and the log of the quantity becomes additive.
Because log-normal firing rates have a heavy tail, a minority of neurons can dominate fast information flow—while most neurons remain in a slower regime.
Log-normal patterns appear not only in firing rates but also in synaptic weights and axonal conduction-related features, reinforcing a multi-scale “rich club” picture.
Proportional spine-size change (growth rate proportional to current size) is presented as a mechanism that can generate multiplicative dynamics and thus log-normal outcomes.

Topics

Mentioned

  • György Buzsáki