Language in the brain

We need quite different functionality from statistics to model language. We may need this functionality in other areas of intelligence as well, but with language it is obvious. Or, for the DL community – we can model anything with statistics, of course. It will just not be a very good model …

What follows is that the synaptic plasticity that produces statistical learning does not allow us to build a human language model. Weight adjustment of connections in a graph is simply not sufficient – under any plausible model of language – to capture language competence.

This is where it becomes interesting. We have just stated that the synaptic plasticity hypothesis of memory is wrong, or else our mammalian brains would be incapable of producing novel sentences and novel text, something we have not memorized before.

Learning in the Brain: Difference learning vs. Associative learning

The feedforward/feedback learning and interaction in the visual system has been analysed as a case of “predictive coding”, the “free energy principle”, or “Bayesian perception”. The general principle is very simple, so I will call it “difference learning”. I believe that this is directly comparable (biology doesn’t invent, it re-invents) to what is happening at the cell membrane between external (membrane) and internal (signaling) parameters.

It is about difference modulation: an existing or quiet state, and then new input, by signaling (at the membrane) or by perception (in the case of vision). Now the system has to adapt to the new input. The feedback connections transfer back the old categorization of the new input. This gets added to the perception, so that a percept evolves which uses the old categorization together with the new input to quickly achieve an adequate categorization for any perceptual input. There will be a bias of course in favor of existing knowledge, but that makes sense in a behavioral context.

The same thing happens at the membrane. An input signal activates membrane receptors (parameters). The internal parameters – the control structure – transfer back the stored response to the external membrane parameters. The signal then generates a suitable neuronal response according to its effect on the external parameters (bottom-up) together with the internal control structure (top-down). The response is biased in favor of an existing structure, but it also means all signals can quickly be interpreted.

If a signal overcomes a filter, new adaptation and learning of the parameters can happen.

The general principle is difference learning, adaptation on the basis of a difference between encoded information and a new input. This general principle underlies all membrane adaptation, whether at the synapse or the spine, or the dendrite, and all types of receptors, whether AMPA, GABA or GPCR.
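To make the principle concrete, here is a minimal sketch in code (my own illustration; the threshold, learning rate and variable names are assumptions, not a biophysical model). The stored value only adapts when a new input differs from it by more than a filter threshold, and the percept is a compromise between the old categorization and the new input.

```python
def difference_learning(stored, new_input, threshold=0.2, rate=0.1):
    """Adapt the stored (encoded) value only when the new input differs enough."""
    difference = new_input - stored
    if abs(difference) > threshold:          # the signal overcomes the filter
        stored = stored + rate * difference  # adapt toward the new input
    return stored

def percept(stored, new_input, bias=0.7):
    """The percept combines old categorization (top-down) with new input
    (bottom-up), biased in favor of existing knowledge."""
    return bias * stored + (1 - bias) * new_input

state = 1.0
for signal in [1.05, 1.1, 2.0, 2.0, 2.0]:    # small deviations, then a real change
    p = percept(state, signal)
    state = difference_learning(state, signal)
    print(round(p, 2), round(state, 2))
```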

We are used to believing that the general principle of neural plasticity is associative learning. This is an entirely different principle, and merely a derivative of difference learning in certain contexts. Associative learning as the basis of synaptic plasticity goes back more than 100 years. The idea was that by exposure to different ideas or objects, the connection between them in the mind was strengthened. It was then conjectured that two neurons (A and B) which are both activated would strengthen their connection (from A to B). More precisely, as was later often found, A needs to fire earlier than B in order to encode a sequential relation.

What would be predicted by difference learning? An existing connection would encode the strength of synaptic activation at that site. As long as the actual signal matches, there is no need for adaptation. If it becomes stronger, the synapse may acquire additional receptors by using its internal control structure. This control structure may have requirements about sequentiality. The control structure may also be updated to make the new strength permanent, a new set-point parameter. On the other hand, a weaker than memorized signal will ultimately lead the synapse to wither and die.

Similar outcomes, entirely different principles. Association is encoded by any synapse, and since membrane receptors are plastic, associative learning is a restricted derivative of difference learning.
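For contrast, here is a toy comparison of the two rules (again only an illustration, with deliberately simplified update rules): the associative rule strengthens the weight whenever pre- and postsynaptic activity coincide, while the difference rule leaves a matching synapse untouched, strengthens it under persistently stronger input, and lets it wither under weaker input.

```python
def associative_update(w, pre, post, rate=0.05):
    """Hebbian-style rule: co-activation of A and B strengthens the connection."""
    return w + rate * pre * post

def difference_update(w, signal, threshold=0.1, rate=0.2):
    """Difference rule: adapt only when the signal mismatches the encoded strength."""
    mismatch = signal - w
    if abs(mismatch) > threshold:
        w += rate * mismatch   # stronger input -> acquire receptors; weaker -> wither
    return w

w_assoc = w_diff = 0.5
for signal in [0.5] * 5 + [0.9] * 5:          # matching input, then stronger input
    w_assoc = associative_update(w_assoc, pre=1.0, post=signal)
    w_diff = difference_update(w_diff, signal)
    print(round(w_assoc, 2), round(w_diff, 2))
```

The associative weight keeps growing even while the input merely matches what is already stored; the difference-learning weight changes only when the input deviates from the encoded strength.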

Why a large cortex?

If we compare a small mouse cortex with a large human cortex, the connectivity per neuron is approximately the same (10^4 synapses/neuron, SchuezPalm1989). So why did humans add so many neurons, and why did the connectivity remain constant? For the latter question we may conjecture that a maximal connectivity per neuron is already reached in the mouse. Our superior human cognitive skills thus rest on the increased number of neurons in cortex, which means the number of modules (cortical microcolumns) went up, not the synaptic connectivity as such.
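A back-of-the-envelope illustration of this point (the cortical neuron counts below are rounded, order-of-magnitude assumptions for illustration only; the 10^4 synapses per neuron figure is the one cited above): with fixed per-neuron connectivity, the total synapse count and the number of possible modules grow linearly with the number of neurons.

```python
synapses_per_neuron = 1e4            # approximately constant across species (see above)

# Order-of-magnitude cortical neuron counts, assumed for illustration only:
cortex_neurons = {"mouse": 1e7, "human": 1.5e10}

for species, n in cortex_neurons.items():
    total_synapses = n * synapses_per_neuron
    print(f"{species}: {n:.0e} neurons, {total_synapses:.0e} synapses")

# The gain from mouse to human is entirely in the number of units (modules),
# not in the connectivity per neuron:
print(cortex_neurons["human"] / cortex_neurons["mouse"], "times more neurons")
```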

Soft-coded synapses

A new preprint by Filipovicetal2009* shows that striatal projection neurons (MSNs) receive different amounts of input, depending on whether they are D2-modulated, and part of the indirect pathway, or D1-modulated, and part of the direct pathway. In particular, membrane fluctuations are higher in the D1-modulated neurons (mostly at higher frequencies): they receive both more excitatory and more inhibitory input. This also means that they are activated faster.

The open question is: what drives the difference in input? Do they have stronger synapses or more synapses? If the distribution of synaptic strength is indeed universal, they could have stronger synapses overall (a different peak of the distribution), or more synapses (a larger area under the curve).

Assuming that synapses adapt to the level of input they receive, having stronger synapses would be equivalent to being connected to higher-frequency neurons; but there would be a difference in terms of fluctuations of input. Weak synapses produce low fluctuations of input, while strong synapses, assuming they originate from neurons with a higher frequency range, produce larger fluctuations in input to the postsynaptic neuron.
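A small numerical sketch of the two scenarios (my own illustration; the lognormal parameters and the activity model are arbitrary assumptions): shifting the peak of the weight distribution (“stronger synapses”) versus adding synapses (“more synapses”) can produce the same mean drive, but the stronger-synapse case shows larger fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

def summed_input(n_synapses, mu, sigma=1.0, trials=2000):
    """Total synaptic drive per trial: each synapse has a fixed lognormal weight,
    scaled by an independently fluctuating presynaptic activity factor."""
    weights = rng.lognormal(mean=mu, sigma=sigma, size=n_synapses)
    activity = rng.random((trials, n_synapses))
    totals = activity @ weights
    return totals.mean(), totals.std()

# Scenario A: more synapses with the baseline weight distribution.
# Scenario B: half as many, but stronger synapses (shifted peak), same mean drive.
print("more synapses:     mean %.0f, std %.1f" % summed_input(2000, mu=-0.5))
print("stronger synapses: mean %.0f, std %.1f" % summed_input(1000, mu=-0.5 + np.log(2)))
```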

It is also possible that the effect results from a higher amount of correlation in the synaptic input to D1-modulated neurons than to D2-modulated neurons. However, since correlations are an adaptive feature in neural processing, it would be unusual to have an overall higher level of correlation onto one of two similar neuronal groups: it would be difficult to maintain concurrently with the fluctuations in correlation which are meaningful to processing (attention).
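For intuition on why correlation matters here, the variance of a summed input grows with the pairwise correlation between inputs; a toy calculation (illustrative numbers only) shows that even a small overall correlation would dominate the fluctuations.

```python
import math

def std_of_sum(n_inputs, variance, correlation):
    """Std of the sum of n inputs with equal variance and equal pairwise correlation:
    Var(sum) = n*var + n*(n-1)*corr*var."""
    total_var = n_inputs * variance + n_inputs * (n_inputs - 1) * correlation * variance
    return math.sqrt(total_var)

for corr in [0.0, 0.01, 0.05]:
    print(corr, round(std_of_sum(n_inputs=500, variance=1.0, correlation=corr), 1))

# Even a pairwise correlation of 0.05 multiplies the fluctuations several-fold,
# which is why a constitutively higher correlation onto one pathway would be hard
# to maintain alongside the processing-related (attentional) changes in correlation.
```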

An additional observation is that dopamine depletion reduces the difference between D2- and D1-modulated MSNs. Since membrane fluctuations are due to differences of synaptic input (AMPA- and GABA-A-driven), but there is only conflicting evidence that D1 receptors modulate these receptors (except NMDA receptors), one would postulate a presynaptic effect. So, possibly the effect is located at indirect-pathway, D2-modulated neurons, which receive less input when dopamine is present, and adjust to a lower level of synaptic input. (Alternatively, reduction of D1 activation could result in less NMDA/AMPA, more GABA-A, i.e. less synaptic input in a D1 dopamine-dependent way.) In the dopamine-depleted mouse, both pathways would receive approximately similar input. Under this hypothesis, it is not primarily differences in structural plasticity which result in different synaptic input levels, but instead a “soft-coded” (dopamine-coded) difference, which depends on dopamine levels and is realized by presynaptic/postsynaptic dopamine receptors. Further results will clarify this question.

*Thanks to Marko Filipovic for his input. The interpretations are my own.

Heavy-tailed distributions and hierarchical cell assemblies

In earlier work, we meticulously documented the distribution of synaptic weights and the gain (or activation function) in many different brain areas. We found a remarkable consistency of heavy-tailed, specifically lognormal, distributions for firing rates, synaptic weights and gains (Scheler2017).

Why are biological neural networks heavy-tailed (lognormal)?

Cell assemblies: Lognormal networks support models of hierarchically organized cell assemblies (ensembles). Individual neurons can activate or suppress a whole cell assembly if they are the strongest neuron or directly connect to the strongest neurons (TeramaeJetal2012).
Storage: Sparse strong synapses store stable information and provide a backbone of information processing. The more frequent weak synapses are more flexible and add changeable detail to the backbone. Heavy-tailed distributions allow a hierarchy of stability and importance (see the numerical sketch after this list).
Time delay of activation is reduced because strong synapses quickly activate a whole assembly (IyerRetal2013). This reduces the initial response time, which depends on the synaptic and intrinsic distributions. Heavy-tailed distributions activate fastest.
Noise response: Under additional input, whether noise or patterned, the pattern stability of the existing ensemble is higher (IyerRetal2013, see also KirstCetal2016). This is a side effect of the integration of all computations within a cell assembly.
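A quick numerical illustration of what a heavy tail buys (arbitrary lognormal parameters, not fitted to data): a small fraction of strong synapses carries a disproportionate share of the total weight, which is what allows a stable backbone of strong synapses plus a large pool of flexible weak ones.

```python
import numpy as np

rng = np.random.default_rng(1)
# Heavy-tailed (lognormal) synaptic weights; parameters chosen for illustration only
weights = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

sorted_w = np.sort(weights)[::-1]
share_top5 = sorted_w[:500].sum() / sorted_w.sum()
print(f"strongest 5% of synapses carry {share_top5:.0%} of the total weight")
print(f"strongest synapse is {sorted_w[0] / np.median(weights):.0f}x the median synapse")
```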

Why hierarchical computations in a neural network?

Calculations which depend on interactions between many discrete points (N-body problems, Barnes and Hut 1986), such as particle-particle methods, where every point depends on all others, lead to an O(N^2) computation. If we replace this with hierarchical methods, which combine information from multiple points, we can reduce the computational complexity to O(N log N) or O(N).
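A minimal sketch of the grouping idea behind this complexity reduction (not Barnes–Hut itself, just the principle, in one dimension with made-up values): nearby points interact exactly, while distant points are summarized by their group aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random(1000)                       # 1-D "positions" of N points

def direct(points):
    """Every point interacts with every other point: O(N^2)."""
    total = np.zeros_like(points)
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i != j:
                total[i] += 1.0 / (abs(p - q) + 0.01)
    return total

def grouped(points, n_groups=10):
    """Summarize far-away points by their group mean: roughly O(N * n_groups)."""
    idx = np.clip((points * n_groups).astype(int), 0, n_groups - 1)
    total = np.zeros_like(points)
    for g in range(n_groups):
        members = points[idx == g]
        if len(members) == 0:
            continue
        centre, count = members.mean(), len(members)
        for i, p in enumerate(points):
            if idx[i] == g:   # exact interactions within the group
                total[i] += sum(1.0 / (abs(p - q) + 0.01) for q in members if q != p)
            else:             # one summarized interaction with the whole group
                total[i] += count / (abs(p - centre) + 0.01)
    return total

print(np.corrcoef(direct(points), grouped(points))[0, 1])   # approximation is close
```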

Since biological neural networks are not feedforward but connect in both forward and backward directions, they have a different structure from ANNs (artificial neural networks) – they consist of hierarchically organised ensembles with a few ‘hub’ neurons of high connectivity and wide-range excitability, and many ‘leaf’ neurons with low connectivity and small-range excitability. Patterns are stored in these ensembles, and are accessed by a fit to an incoming pattern, which could be expressed by low mutual information as a measure of similarity. Patterns are modified by similar access patterns, but typically only in their weak connections (otherwise the accessing pattern would not fit).

Epigenetics and memory

Epigenetic modification is a powerful mechanism for the induction, the expression and persistence of long-term memory.

For long-term memory, we need to consider diverse cellular processes. These occur in neurons from different brain regions (in particular hippocampus, cortex, amygdala) during memory consolidation and recall. For instance, long-term changes in kinase expression in the proteome, changes in receptor subunit composition and localization at synaptic/dendritic membranes, epigenetic modifications of chromatin such as DNA methylation and histone methylation in the nucleus, and posttranslational modifications of histones, including phosphorylation and acetylation – all of these play a role. Histone acetylation is of particular interest because a number of known medications exist which function as histone deacetylase (HDAC) inhibitors, i.e. have the potential to increase DNA transcription and memory (more on this in a later post).

Epigenetic changes are important because they define the internal conditions for plasticity for the individual neuron. They underlie, for instance, kinase- or phosphatase-mediated (de)activation of enzymatic proteins and therefore influence the probability that membrane proteins become altered by synaptic activation.

Among epigenetic changes, DNA methylation typically acts to alter, often to repress, DNA transcription; in vertebrates it occurs at cytosines, typically at CpG sites and CpG islands. DNA methylation is controlled by enzymes such as Tet3, which catalyses an important step in the demethylation of DNA. In the dentate gyrus of live rats, it was shown that the expression of Tet3 is greatly increased by LTP – synaptically induced memorization – suggesting that certain DNA stretches were demethylated [5], and presumably activated. During induction of LTP by high-frequency electrical stimulation, DNA methylation changes specifically for certain genes known for their role in neural plasticity [1]. The expression of neural plasticity genes is widely correlated with the methylation status of the corresponding DNA.

So there is interesting evidence for a filtering of the induction of plasticity via the epigenetic landscape and modifiers of gene expression, such as HDACs. Substances which act as histone deacetylase (HDAC) inhibitors increase histone acetylation. An interesting result from research on fear suggests that HDAC inhibitors increase some DNA transcription, and specifically enhance fear extinction memories [2], [3], [4].

Ion channel expression is not regulated by spiking behavior

An important topic in understanding intrinsic excitability is the distribution and activation of ion channels. In this respect, the co-regulation of different ion channels is of significant interest. MacLean et al. (2003) showed that overexpression of an A-type potassium channel by shal-RNA injection in neurons of the stomatogastric ganglion of the lobster is compensated by an upregulation of Ih, such that the spiking behavior remains unaltered.

A non-functional shal mutant, whose overexpression did not affect spiking, had the same effect, which shows that the regulation does not happen at the membrane by measuring spiking behavior. In this case, Ih was upregulated even though IA activity was unaltered, and spiking behavior was increased. (This is in contrast to e.g. O’Leary et al., 2013, who assume homeostatic regulation of ion channel expression at the membrane, driven by spiking behavior.)

In Drosophila motoneurons, the expression of shal and shaker – both responsible for IA – is reciprocally coupled. If one is reduced, the other is upregulated, maintaining a constant level of IA activity at the membrane. Other ion channels, like INap and IM, are functionally antagonistic, which means they correlate positively: if one is reduced, the other is reduced as well, to achieve the same net effect (Golowasch2014). A number of publications have documented similar effects (MacLean et al., 2005; Schulz et al., 2007; Tobin et al., 2009; O’Leary et al., 2013).

We must assume that the expression level of ion channels is regulated and sensed inside the cell, and that the expression levels of different ion channel genes are coupled – by genetic regulation or at the level of RNA regulation.

To summarize: When there is high IA expression, Ih is also upregulated. When one gene responsible for IA is suppressed, the other gene is more highly expressed, to achieve the same level of IA expression. When INap, a persistent sodium current, is reduced, IM, a potassium current, is also reduced.

It is important to note that these ion channels may compensate for each other in terms of overall spiking behavior, but they have subtly different activation properties, e.g. with respect to the pattern of spiking or to neuromodulation. For instance, if cell A reduces ion channel currents like INap and IM, compensating to achieve the same spiking behavior, then once we apply neuromodulation to muscarinic receptors on A, this will affect IM, but not INap. The behavior of cell A, crudely the same, is now altered under certain conditions.

To model this – other than by a full internal cell model – requires internal state variables which guide ion channel expression and therefore regulate intrinsic excitability. These variables would model ion channel proteins and their respective interactions, and in this way guarantee acceptable spiking behavior of the cell. This leads to the idea of an internal module which sets the parameters necessary for the neuron to function. Such an internal module that self-organizes its state variables according to specified objective functions could greatly simplify systems design. Instead of tuning system parameters by outside methods – which is necessary for ion-channel-based models – each neuronal unit itself would be responsible for its ion channels and be able to self-tune them separately from the whole system.
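As a toy sketch of what such an internal module could look like (my own illustration; the variable names, set-points and rates are assumptions, not measured values): the module senses channel expression levels inside the cell and co-regulates them – shal and shaker are adjusted toward a shared IA target, Ih follows IA, and IM follows INap – without reading out spiking at the membrane.

```python
class InternalChannelModule:
    """Internal state variables that co-regulate ion channel expression.

    The module senses expression levels inside the cell (not spiking at the
    membrane) and nudges them toward internally stored set-points.
    """
    def __init__(self, ia_target=1.0, rate=0.1):
        self.ia_target = ia_target               # internal set-point for total IA
        self.rate = rate
        self.shal, self.shaker = 0.5, 0.5        # both genes contribute to IA
        self.ih, self.inap, self.im = 0.5, 0.5, 0.5

    def step(self):
        ia = self.shal + self.shaker
        error = self.ia_target - ia
        # reciprocal coupling: available IA genes take up the slack
        self.shal += self.rate * error * 0.5
        self.shaker += self.rate * error * 0.5
        # Ih is co-regulated with IA (high IA -> high Ih)
        self.ih += self.rate * (ia - self.ih)
        # IM follows INap (functionally antagonistic currents, positively coupled)
        self.im += self.rate * (self.inap - self.im)

cell = InternalChannelModule()
for _ in range(200):
    cell.shaker = 0.0                            # knockdown: this IA gene is suppressed
    cell.step()
print(round(cell.shal + cell.shaker, 2), round(cell.ih, 2))  # IA restored via shal; Ih follows
```

The same structure would also express the point above about neuromodulation: a muscarinic input could act on the im variable alone, leaving inap untouched, so the compensated cell behaves differently under neuromodulation.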

Linked to the idea of internal state variables is the idea of internal memory, which I have referred to several times in this blog. If I have an internal module of co-regulated variables, which sets external parameters for each unit, then this module may serve as a latent memory for variables which are not expressed at the membrane at the present time (see Er81). The time course of expression and activation at the membrane and the time course of internal co-regulation need not be the same. This offers an opportunity for memory inside the cell, separated from information processing within a network of neurons.