In earlier work, we meticulously documented the distribution of synaptic weights and the gain (or activation function) in many different brain areas. We found a remarkable consistency of heavy-tailed, specifically lognormal, distributions for firing rates, synaptic weights and gains (Scheler2017).
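To make the heavy-tail property concrete, here is a minimal sketch of sampling lognormal synaptic weights; the parameters (mu, sigma) are illustrative assumptions, not fitted values from the cited work. It shows two signatures of a heavy tail: the mean exceeds the median, and a small fraction of strong synapses carries a disproportionate share of total weight.

```python
# Sketch: heavy-tail signatures of a lognormal weight distribution.
# mu and sigma are illustrative, not values from Scheler2017.
import random

random.seed(1)
mu, sigma = -0.7, 1.0                               # log-space mean and std (assumed)
w = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

w.sort()
median = w[len(w) // 2]
mean = sum(w) / len(w)
top1 = sum(w[-len(w) // 100:]) / sum(w)             # weight share of strongest 1%
print(mean, median, top1)                           # mean > median; top 1% dominates
```

For sigma = 1, the strongest 1% of synapses carry roughly 9% of the total weight, which is the kind of asymmetry that lets a few strong connections dominate assembly dynamics.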

*Why are biological neural networks heavy-tailed (lognormal)?*

**Cell assemblies**: Lognormal networks support models of *hierarchically* organized cell assemblies (ensembles). Individual neurons can activate or suppress a whole cell assembly if they are the strongest neuron or directly connect to the strongest neurons (TeramaeJetal2012).
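A toy simulation can illustrate the hub effect; all parameters here (assembly size, threshold, lognormal parameters) are assumptions for illustration, not values from the cited models. One "hub" neuron gets strong outgoing weights drawn from the tail of a lognormal, while background weights stay weak; activating the hub recruits the assembly, while activating a weak "leaf" neuron does not.

```python
# Toy sketch (assumed parameters): a hub neuron with tail-strength synapses
# ignites a whole assembly; a weak leaf neuron cannot.
import random

random.seed(2)
N, theta = 20, 1.0                                   # assembly size, firing threshold
hub = 0
W = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if i != j:
            W[i][j] = random.lognormvariate(-2.0, 0.5)   # weak background synapses
for j in range(1, N):
    W[hub][j] = random.lognormvariate(1.0, 0.3)          # strong hub synapses (tail)

def step(active):
    """One synchronous update: neuron j joins if its summed input exceeds theta."""
    drive = [sum(W[i][j] for i in active) for j in range(N)]
    return {j for j in range(N) if drive[j] > theta} | active

print(len(step({hub})))   # hub input recruits (nearly) the whole assembly
print(len(step({5})))     # a single weak leaf neuron recruits few or none
```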

**Storage:** Sparse strong synapses store stable information and provide a *backbone* of information processing. More frequent weak synapses are more flexible and add changeable detail to the backbone. Heavy-tailed distributions allow a hierarchy of stability and importance.

**Time delay** of activation is reduced because strong synapses quickly activate a whole assembly (IyerRetal2013). This reduces the *initial response time*, which depends on the synaptic and intrinsic distribution. Heavy-tailed distributions activate fastest.

**Noise response:** Under additional input, whether noise or patterned, the pattern *stability* of the existing ensemble is higher (IyerRetal2013; see also KirstCetal2016). This is a side effect of the integration of all computations within a cell assembly.

*Why hierarchical computations in a neural network?*

Calculations that depend on interactions among many discrete points, where every point depends on all others (N-body problems; Barnes and Hut 1986), lead to an O(N^2) computation when solved by direct particle-particle methods. If we replace these with hierarchical methods that combine information from multiple points, we can reduce the computational complexity to O(N log N) or even O(N).
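The hierarchical idea can be sketched in one dimension, Barnes-Hut style: a tree of cells summarizes distant groups of points by their center of mass, so each evaluation touches O(log N) cells instead of all N points. The geometry, the opening criterion theta, and the softened 1/d interaction below are illustrative choices, not the paper's model.

```python
# Minimal 1D Barnes-Hut-style sketch: replace the O(N^2) all-pairs sum
# by a hierarchy that treats far-away cells as single points at their
# center of mass. Parameters (theta, eps) are illustrative assumptions.
import random

class Cell:
    def __init__(self, points, lo, hi):
        self.lo, self.hi = lo, hi
        self.mass = len(points)                      # unit-mass points
        self.com = sum(points) / len(points)         # center of mass
        self.children = []
        if len(points) > 1 and hi - lo > 1e-9:
            mid = (lo + hi) / 2
            for part, a, b in (([p for p in points if p < mid], lo, mid),
                               ([p for p in points if p >= mid], mid, hi)):
                if part:
                    self.children.append(Cell(part, a, b))

def force(cell, x, theta=0.5, eps=1e-3):
    """Approximate sum of softened 1/(x - p) over all points p in the cell."""
    d = x - cell.com
    size = cell.hi - cell.lo
    if not cell.children or (abs(d) > 0 and size / abs(d) < theta):
        # cell is far (or a leaf): treat it as one point at its center of mass
        return cell.mass * d / (d * d + eps)
    return sum(force(c, x, theta, eps) for c in cell.children)

random.seed(0)
pts = [random.random() for _ in range(200)]
tree = Cell(pts, 0.0, 1.0)
x = 1.5                                              # probe point outside the cloud
approx = force(tree, x)
exact = sum((x - p) / ((x - p) ** 2 + 1e-3) for p in pts)
print(abs(approx - exact) / abs(exact))              # small relative error
```

With theta = 0.5 the monopole approximation typically stays within a few percent of the direct sum, while the number of cells visited grows only logarithmically with N.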

Since biological neural networks are not purely feedforward but connect in both forward and backward directions, they have a different structure from ANNs (artificial neural networks): they consist of hierarchically organised ensembles with a few 'hub' neurons of high connectivity and wide-range excitability, and many 'leaf' neurons with low connectivity and small-range excitability. Patterns are stored in these ensembles and accessed by fit to an incoming pattern, a fit that could be expressed by low mutual information as a measure of similarity. Patterns are modified by similar access patterns, but typically only in their weak connections (otherwise the accessing pattern would not fit).
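The last point, that access modifies only the weak connections, can be sketched as a plasticity rule gated by synaptic strength; the cutoff and learning rate below are illustrative assumptions, not values from the text.

```python
# Minimal sketch (assumed cutoff and learning rate): strong "backbone"
# synapses above a cutoff stay fixed, only weak synapses adapt on access.
import random

random.seed(3)
w = [random.lognormvariate(-1.0, 1.0) for _ in range(1000)]
strong_cutoff, lr = 1.0, 0.1                 # illustrative values

def access(weights, drive):
    """Hebbian-like nudge applied only where the synapse is weak."""
    return [x + lr * drive if x < strong_cutoff else x for x in weights]

w2 = access(w, drive=0.5)
n_changed = sum(1 for a, b in zip(w, w2) if b != a)
n_strong = sum(1 for a in w if a >= strong_cutoff)
print(n_changed, n_strong)                   # all and only the weak synapses moved
```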