Logarithmic nature of the brain 💡
Based on Artem Kirsanov's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Briefing
Neuronal firing rates don’t cluster like a classic bell curve; they follow a log-normal distribution—an outcome that falls out naturally when brain-relevant quantities arise through multiplicative dynamics rather than additive ones. That distinction matters because log-normality produces a characteristic skew: most neurons fire at modest rates, while a minority fire much faster, forming a heavy-tailed “fast lane” that can dominate information flow and computation.
The argument starts by contrasting the normal distribution’s usual origin—sums of many independent random influences—with what happens when those influences multiply. If a neuron’s firing rate were built from many additive, hidden factors, the central limit theorem would drive the result toward a symmetric Gaussian shape. But firing rates are better modeled by products: take many random variables that each scale a quantity up or down, then multiply them. Taking logarithms turns products into sums, so the log of the firing rate becomes approximately normal. When that normal distribution is mapped back onto a linear firing-rate axis, the result is log-normal: a long right tail and strong asymmetry.
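The contrast between additive and multiplicative origins can be checked numerically. The sketch below (a minimal illustration, not from the video; factor ranges and sample counts are arbitrary choices) sums many independent random factors in one case and multiplies them in the other, then compares skewness: the sum comes out roughly symmetric, the product is strongly right-skewed, and taking the log of the product restores symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_factors = 100_000, 50

# Additive model: sum of many independent factors -> roughly Gaussian (CLT)
additive = rng.uniform(0.5, 1.5, size=(n_samples, n_factors)).sum(axis=1)

# Multiplicative model: product of the same kind of factors -> log-normal
multiplicative = rng.uniform(0.5, 1.5, size=(n_samples, n_factors)).prod(axis=1)

def skew(x):
    """Sample skewness: third moment of the standardized values."""
    z = (x - x.mean()) / x.std()
    return float((z**3).mean())

print(skew(additive))                # near 0: symmetric
print(skew(multiplicative))          # large positive: heavy right tail
print(skew(np.log(multiplicative)))  # near 0 again: log turns the product into a sum
```

The last line is the key move from the argument: on a log axis, the multiplicative quantity looks normal, which is exactly what "log-normal" means.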
That same skew shows up across multiple neural scales. Roughly speaking, most neurons sit in a slow-spiking regime—around one action potential per second—while a smaller fraction spikes rapidly, with rates reaching about 10 Hz. Synaptic weights follow the same pattern: most synapses are weak, but a subset of strong synapses forms reliable “highways” for signal transmission. High firing-rate neurons tend to connect to each other more strongly, and they also tend to grow thicker axons, improving conduction speed through better insulation. Together, these features create a fast-spiking minority network—often described as a “rich club”—that carries a disproportionate share of fast, strong, long-range communication, while the larger majority network with weaker synapses handles the rest.
The functional payoff is a division of labor that isn’t a strict binary split. The continuum implied by log-normality supports both specialist neurons (tuned to specific distinctions) and generalizers (active across many contexts), with no sharp boundary between them. Such graded organization is argued to be more energy-efficient and more robust to noise and failures than a rigid all-or-nothing architecture.
Finally, the transcript points to a plausible mechanism for multiplicative growth: synaptic spine sizes. Dendritic spines at the receiving ends of synapses are plastic, and their sizes change over time. Reported findings suggest that the rate of change in spine size is proportional to its current size—an ingredient that naturally generates multiplicative dynamics. While it remains unclear how log-normality emerges everywhere (from firing rates to spatial neuron distributions to perception), the recurring theme is that distribution shape can act like a fingerprint of underlying computation: multiplicative processes tend to produce log-normal skew, and that skew appears to be built into how brains route and process information.
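The proportional-growth idea can be sketched as a geometric random walk (a toy model, not the actual spine data; step count and noise level are arbitrary assumptions): each spine's size change per step is its current size times a small random factor, so identical starting sizes spread into a log-normal distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n_spines, n_steps = 50_000, 200

sizes = np.ones(n_spines)  # all spines start at the same size

for _ in range(n_steps):
    # Change is proportional to current size: S <- S * (1 + small noise)
    sizes *= 1.0 + rng.normal(0.0, 0.05, size=n_spines)

log_sizes = np.log(sizes)

def skew(x):
    z = (x - x.mean()) / x.std()
    return float((z**3).mean())

print(skew(sizes))      # strongly right-skewed on the linear scale
print(skew(log_sizes))  # near 0: log-sizes are approximately normal
```

Because each step multiplies rather than adds, the log of each size accumulates a sum of small independent terms, which is the same central-limit mechanism as before applied on the log scale.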
Cornell Notes
Firing rates in the brain are better described by a log-normal distribution than by a Gaussian. The key math move is that products become sums under a logarithm: if many independent factors multiply to determine a quantity, then the log of that quantity tends toward a normal distribution, producing a skewed log-normal shape on the original scale. This skew implies a heavy right tail—most neurons fire slowly, while a minority fire much faster (on the order of ~10 Hz). Similar log-normal patterns show up in synaptic weights and axonal properties, supporting a “rich club” of fast, strongly connected neurons that carries a large share of fast information flow. The transcript links multiplicative dynamics to synaptic spine growth, where spine-size change scales with current size.
Why does a log-normal distribution emerge from multiplicative randomness, while a Gaussian emerges from additive randomness?
How does log-normality change the interpretation of “typical” neuronal activity?
What neural structures are described as following log-normal patterns besides firing rates?
What is the “rich club” idea, and how does it relate to a continuum rather than a binary split?
How could synaptic spine growth produce multiplicative dynamics?
Why does the transcript treat distribution shape as evidence about underlying mechanisms?
Review Questions
- If the log of a positive random variable is approximately normal, what distribution does the variable itself follow, and why?
- How would you distinguish a model based on additive hidden factors from one based on multiplicative hidden factors using only the shape of the observed distribution?
- What functional advantages might a heavy-tailed (log-normal) distribution provide for neural computation compared with a symmetric Gaussian distribution?
Key Points
1. Firing rates are described as log-normal rather than Gaussian, implying a skewed distribution with a heavy right tail.
2. Log-normality follows when many independent factors multiply to determine a positive quantity; taking logs turns the product into a sum that approaches normality.
3. A log-normal distribution supports a fast-spiking minority (roughly 10% of neurons, reaching ~10 Hz) alongside a slow-spiking majority (~1 Hz).
4. Synaptic weights and axonal properties are also described as log-normally distributed, enabling a “rich club” of strongly connected, fast-conducting neurons.
5. Generalizers and specialists emerge as a continuum across the log-normal spectrum rather than as two discrete neuron classes.
6. A proposed mechanism for multiplicative dynamics is synaptic spine growth, where the rate of spine-size change scales with current spine size.
7. Energy efficiency and robustness are argued to improve under the graded, non-binary organization implied by log-normal distributions.