Language in the brain

We need quite different functionality from statistics to model language. We may need this functionality in other areas of intelligence as well, but with language it is obvious. Or, for the DL community – we can model anything with statistics, of course. It will just not be a very good model …

What follows is that the synaptic plasticity that produces statistical learning does not allow us to build a human language model. Weight adjustment of connections in a graph is simply not sufficient – under any possible model of language – to capture language competence.

This is where it becomes interesting. We have just stated that the synaptic plasticity hypothesis of memory is wrong, or else our mammalian brains would be incapable of producing novel sentences and novel text, something we have not memorized before.

Learning in the Brain: Difference learning vs. Associative learning

The feedforward/feedback learning and interaction in the visual system has been analysed as a case of “predictive coding”, the “free energy principle” or “Bayesian perception”. The general principle is very simple, so I will call it “difference learning”. I believe that this is directly comparable (biology doesn’t invent, it re-invents) to what is happening at the cell membrane between external (membrane) and internal (signaling) parameters.

It is about difference modulation: there is an existing or quiet state, and then new signaling arrives, at the membrane, or by perception, in the case of vision. Now the system has to adapt to the new input. The feedback connections transfer back the old categorization of the new input. This gets added to the perception, so that a percept evolves which uses the old categorization together with the new input to quickly achieve an adequate categorization for any perceptual input. There will of course be a bias in favor of existing knowledge, but that makes sense in a behavioral context.

The same thing happens at the membrane. An input signal activates membrane receptors (parameters). The internal parameters – the control structure – transfer back the stored response to the external membrane parameters. The signal then generates a suitable neuronal response according to its effect on the external parameters (bottom-up) together with the internal control structure (top-down). The response is biased in favor of an existing structure, but it also means that all signals can quickly be interpreted.

If a signal overcomes a filter, new adaptation and learning of the parameters can happen.

The general principle is difference learning: adaptation on the basis of a difference between encoded information and a new input. This general principle underlies all membrane adaptation, whether at the synapse, the spine, or the dendrite, and for all types of receptors, whether AMPA, GABA or GPCR.
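As a rough illustration (my own sketch, not from the original text), a minimal difference-learning update could look like the following, where `w` stands for the encoded parameter, `x` for the new input, and `theta` for a filter threshold; all names and constants are illustrative:

```python
import numpy as np

def difference_learning_step(w, x, lr=0.1, theta=0.2):
    """Adapt an encoded parameter w toward a new input x,
    but only if the mismatch exceeds a filter threshold theta."""
    error = x - w                # difference between input and stored value
    if np.abs(error) < theta:    # small mismatch: filtered out, no adaptation
        return w
    return w + lr * error        # adapt toward the new input

# Example: a stored value of 1.0 meets a repeatedly stronger input of 1.5
w = 1.0
for _ in range(10):
    w = difference_learning_step(w, 1.5)
print(round(w, 3))               # drifts toward 1.5
```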

We are used to believing that the general principle of neural plasticity is associative learning. This is an entirely different principle, and merely derivative of difference learning in certain contexts. Associative learning as the basis of synaptic plasticity goes back more than a hundred years. The idea was that by exposure to different ideas or objects, the connection between them in the mind was strengthened. It was then conjectured that two neurons (A and B) which are both activated would strengthen their connection (from A to B). More precisely, as was later often found, A needs to fire earlier than B in order to encode a sequential relation.

What would be predicted by difference learning? An existing connection encodes the strength of synaptic activation at that site. As long as the actual signal matches, there is no need for adaptation. If it becomes stronger, the synapse may acquire additional receptors by using its internal control structure. This control structure may have requirements about sequentiality. The control structure may also be updated to make the new strength permanent, as a new set-point parameter. On the other hand, a weaker-than-memorized signal will ultimately lead the synapse to wither and die.

Similar outcomes, entirely different principles. Association is encoded by any synapse, and since membrane receptors are plastic, associative learning is a restricted derivative of difference learning.
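To make the contrast concrete, here is a sketch of the two update rules side by side (my own illustration, with arbitrary constants): a Hebbian rule strengthens a weight whenever pre- and postsynaptic activity co-occur, whereas a set-point (difference) rule only adapts when the actual signal deviates from the memorized strength.

```python
def hebbian_update(w, pre, post, lr=0.01):
    """Associative rule: co-active pre/post activity strengthens the weight."""
    return w + lr * pre * post

def setpoint_update(w, signal, setpoint, lr=0.05, tol=0.1):
    """Difference rule: adapt only when the signal deviates from the
    memorized set-point; chronically weak signals let the synapse decay."""
    diff = signal - setpoint
    if abs(diff) <= tol:
        return w                     # signal matches memory: no change
    if diff > 0:
        return w + lr * diff         # stronger than memorized: add receptors
    return max(0.0, w + lr * diff)   # weaker than memorized: wither toward zero
```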

Cocaine Dependency and restricted learning

Substantial work (NasifFJetal2005a, NasifFJetal2005b, DongYetal2005, HuXT2004, Marinellietal2006) has shown that repeated cocaine administration changes the intrinsic excitability of prefrontal cortical (PFC) neurons (in rats) by altering the expression of ion channels. It downregulates voltage-gated K+ channels and increases membrane excitability in principal (excitatory) PFC neurons.

An important consequence of this result is the following: by restricting expression levels of major ion channels, the capacity of the neuron to undergo intrinsic plasticity (IP) is limited, and therefore its learning or storage capacity is reduced.

Why is IP important?

It is often assumed that the “amount” of information that can be stored by the whole neuron is small compared to that of its many synapses, and that therefore IP cannot have a large role in neural computation. This view is based on a number of assumptions, namely (a) that IP is expressed by only a single parameter such as a firing threshold or a bit value indicating internal calcium release, (b) that IP could be replaced by a “bias term” for each neuron, essentially another parameter on a par with its synaptic parameters and trainable along with these, (c) at most, that this bias term is multiplicative rather than additive like synaptic parameters, but still just one learnable parameter, and (d) that synapses are independently trainable, on the basis of associative activation, without a requirement for the whole neuron to undergo plasticity. Since the biology of intrinsic excitability and plasticity is very complex, there are very many aspects of it which could be relevant in a neural circuit (or tissue), and it is challenging to extract plausible components which could be most significant for IP – it is certainly a fruitful area for further study.

In our latest paper we mostly challenge (d), i.e. we advocate a model where IP implies localist SP (synaptic plasticity), and therefore the occurrence of SP is tied to the occurrence of IP in a neuron. In this sense the whole neuron extends control of plasticity over its synapses – in this particular model, over its dendritic synapses. It is well known that some neurons, such as those in dentate gyrus, exhibit primarily presynaptic plasticity, i.e. control over axonal synapses (mossy fiber contacts onto hippocampal CA3 neurons), but we have not focused on this question for cortex from the biological point of view. In any case, if this model captures an important generalization, then cocaine dependency leads to reduced IP and, as a consequence, reduced SP at a principal neuron’s site. If the neuron is reduced in its ability to learn, i.e. to adjust its voltage-gated K+ channels, such that it operates with heightened membrane excitability, then its dendritic synapses should also be restricted in their capacity to learn (for instance to undergo LTD).
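A minimal sketch of this idea (my own simplification, not the paper's implementation, with arbitrary thresholds): synaptic updates at a neuron's dendritic synapses are only committed when the neuron itself undergoes an intrinsic plasticity event, so a cocaine-like restriction of IP also blocks SP.

```python
import numpy as np

class Neuron:
    def __init__(self, n_syn, ip_enabled=True):
        self.w = np.random.rand(n_syn)   # dendritic synaptic weights
        self.excitability = 1.0          # intrinsic excitability parameter
        self.ip_enabled = ip_enabled     # False ~ restricted ion channel expression

    def step(self, inputs, lr=0.05):
        drive = self.excitability * np.dot(self.w, inputs)
        ip_event = self.ip_enabled and drive > 1.5   # neuron-level plasticity event
        if ip_event:
            self.excitability *= 0.95                # IP: adjust intrinsic parameter
            self.w += lr * inputs                    # SP happens only under an IP event
        return drive, ip_event
```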

As a matter of fact, a more recent paper (Otisetal2018) shows that if we block intrinsic excitability during recall in a specific area of the PFC (prelimbic PFC), memories encoded in this area are actually prevented from becoming activated.

Soft-coded Synapses

A new preprint by Filipovicetal2009* shows that striatal projection neurons (MSNs) receive different amounts of input, depending on whether they are D2-modulated, and part of the indirect pathway, or D1-modulated, and part of the direct pathway. In particular, membrane fluctuations are higher in the D1-modulated neurons (mostly at higher frequencies): they receive both more inhibitory and more excitatory input. This also means that they are activated faster.

The open question is: what drives the difference in input? Do they have stronger synapses or more synapses? If the distribution of synaptic strength is indeed universal, they could have stronger synapses overall (a different peak of the distribution) or more synapses (a larger area under the curve).

Assuming that synapses adapt to the level of input they receive, having stronger synapses would be equivalent to being connected to higher-frequency neurons; but there would be a difference in terms of fluctuations of input. Weak synapses have low fluctuations of input, while strong synapses, assuming they are driven by neurons with a higher frequency range, produce larger fluctuations in input to the postsynaptic neuron.
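One way to see the two alternatives is a small numerical sketch (my own, with illustrative parameters only): both a shifted lognormal peak and a larger number of synapses can raise the mean input, but they differ in the fluctuations they produce.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_input(n_syn, mu, sigma=1.0, rate=5.0, n_trials=2000):
    """Mean and std of summed synaptic drive for lognormal weights
    and Poisson presynaptic spike counts (toy model)."""
    w = rng.lognormal(mu, sigma, n_syn)
    spikes = rng.poisson(rate, size=(n_trials, n_syn))
    drive = spikes @ w
    return drive.mean(), drive.std()

baseline      = total_input(n_syn=500, mu=-1.0)   # reference
stronger_peak = total_input(n_syn=500, mu=-0.5)   # same count, shifted peak
more_synapses = total_input(n_syn=800, mu=-1.0)   # same peak, more synapses
print(baseline, stronger_peak, more_synapses)
```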

It is also possible that the effect results from a higher amount of correlation in the synaptic input to D1-modulated neurons than to D2-modulated neurons. However, since correlations are an adaptive feature in neural processing, it would be unusual to have an overall higher level of correlation in the input to one of two similar neuronal groups: such a difference would be hard to maintain concurrently with the fluctuations in correlation which are meaningful for processing (attention).

An additional observation is that dopamine depletion reduces the difference between D2- and D1-modulated MSNs. Since membrane fluctuations are due to differences in synaptic input (AMPA- and GABA-A-driven), but there is only conflicting evidence that D1 receptors modulate these receptors (with the exception of NMDA receptors), one would postulate a presynaptic effect. So, possibly the effect is located at indirect-pathway, D2-modulated neurons, which receive less input when dopamine is present, and adjust to a lower level of synaptic input. (Alternatively, reduction of D1 activation could result in less NMDA/AMPA and more GABA-A, i.e. less synaptic input, in a D1 dopamine-dependent way.) In the dopamine-depleted mouse, both pathways would receive approximately similar input. Under this hypothesis, it is not primarily differences in structural plasticity which result in different synaptic input levels, but instead a “soft-coded” (dopamine-coded) difference, which depends on dopamine levels and is realized by presynaptic/postsynaptic dopamine receptors. Further results will clarify this question.

*Thanks to Marko Filipovic for his input. The interpretations are my own.

Heavy-tailed distributions and hierarchical cell assemblies

In earlier work, we meticulously documented the distribution of synaptic weights and the gain (or activation function) in many different brain areas. We found a remarkable consistency of heavy-tailed, specifically lognormal, distributions for firing rates, synaptic weights and gains (Scheler2017).

Why are biological neural networks heavy-tailed (lognormal)?

Cell assemblies: Lognormal networks support models of hierarchically organized cell assemblies (ensembles). Individual neurons can activate or suppress a whole cell assembly if they are the strongest neuron or directly connect to the strongest neurons (TeramaeJetal2012).
Storage: Sparse strong synapses store stable information and provide a backbone of information processing. More frequent weak synapses are more flexible and add changeable detail to the backbone. Heavy-tailed distributions allow a hierarchy of stability and importance.
Time delay of activation is reduced because strong synapses quickly activate a whole assembly (IyerRetal2013). This reduces the initial response time, which depends on the synaptic and intrinsic distributions. Heavy-tailed distributions activate fastest.
Noise response: Under additional input, whether noise or patterned, the pattern stability of the existing ensemble is higher (IyerRetal2013, see also KirstCetal2016). This is a side effect of the integration of all computations within a cell assembly.

Why hierarchical computations in a neural network?

Calculations which depend on interactions between many discrete points (N-body problems, Barnes and Hut 1986), such as particle-particle methods where every point depends on all others, lead to an O(N^2) calculation. If we replace this with hierarchical methods, which combine information from multiple points, we can reduce the computational complexity to O(N log N) or O(N).
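To put rough numbers on the complexity argument, here is a small illustrative calculation (my own, using the standard O-estimates rather than an actual Barnes-Hut implementation):

```python
import math

def interaction_counts(n):
    """Rough operation counts for direct particle-particle evaluation
    versus a hierarchical (tree-based, Barnes-Hut-like) scheme."""
    direct = n * (n - 1) // 2              # every pair interacts: O(N^2)
    hierarchical = int(n * math.log2(n))   # tree traversal: O(N log N)
    return direct, hierarchical

for n in (1_000, 100_000):
    d, h = interaction_counts(n)
    print(f"N={n:>7,}: direct ~{d:,}  hierarchical ~{h:,}")
# N=  1,000: direct ~499,500          hierarchical ~9,965
# N=100,000: direct ~4,999,950,000    hierarchical ~1,660,964
```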

Since biological neural networks are not feedforward but connect in both forward and backward directions, they have a different structure from ANNs (artificial neural networks) – they consist of hierarchically organised ensembles with a few wide-range, high-excitability ‘hub’ neurons and many ‘leaf’ neurons with low connectivity and small-range excitability. Patterns are stored in these ensembles, and are accessed by a fit to an incoming pattern, which could be expressed by low mutual information as a measure of similarity. Patterns are modified by similar access patterns, but typically only in their weak connections (otherwise the accessing pattern would not fit).

Epigenetics and memory

Epigenetic modification is a powerful mechanism for the induction, the expression and persistence of long-term memory.

For long-term memory, we need to consider diverse cellular processes. These occur in neurons from different brain regions (in particular hippocampus, cortex, amygdala) during memory consolidation and recall. For instance, long-term changes in kinase expression in the proteome, changes in receptor subunit composition and localization at synaptic/dendritic membranes, epigenetic modifications of chromatin such as DNA methylation and histone methylation in the nucleus, and posttranslational modifications of histones, including phosphorylation and acetylation, all play a role. Histone acetylation is of particular interest because a number of known medications exist which function as histone deacetylase (HDAC) inhibitors, i.e. have the potential to increase DNA transcription and memory (more on this in a later post).

Epigenetic changes are important because they define the internal conditions for plasticity in the individual neuron. They underlie, for instance, kinase- or phosphatase-mediated (de)activation of enzymatic proteins and therefore influence the probability that membrane proteins become altered by synaptic activation.

Among epigenetic changes, DNA methylation typically acts to alter, often to repress, DNA transcription at cytosines, in vertebrates mostly at CpG islands. DNA methylation status is regulated by enzymes such as Tet3, which catalyses an important step in the demethylation of DNA. In the dentate gyrus of live rats, it was shown that the expression of Tet3 is greatly increased by LTP – synaptically induced memorization – suggesting that certain DNA stretches were demethylated [5], and presumably activated. During induction of LTP by high-frequency electrical stimulation, DNA methylation is changed specifically for certain genes known for their role in neural plasticity [1]. The expression of neural plasticity genes is widely correlated with the methylation status of the corresponding DNA.

So there is interesting evidence for filtering the induction of plasticity via the epigenetic landscape and modifiers of gene expression, such as HDACs. Substances which act as HDAC inhibitors increase histone acetylation. An interesting result from research on fear suggests that HDAC inhibitors increase some DNA transcription and specifically enhance fear extinction memories [2], [3], [4].

Egocentric representations for general input

The individual neuron’s state need not be determined only by the inputs received.

(a). It may additionally be seeded with a probability for adaptation that is distributed with respect to the graph properties of the neuron (like betweenness centrality, choke points etc.), as well as the neuron’s current intrinsic excitability (IE) (which are related). This seeded probability would correspond to a sensitivity of the neuron to the representation that is produced by the subnetwork. The input representation is transformed by the properties of the subnetwork.
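A hedged sketch of (a), using networkx on a toy graph (the graph generator, mixing weights, and the stand-in for IE are my own choices): the per-neuron adaptation probability is seeded from betweenness centrality and combined with intrinsic excitability.

```python
import networkx as nx
import numpy as np

# Toy network; in practice this would be the model's connectivity graph
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)

centrality = nx.betweenness_centrality(G)        # graph property per neuron
ie = {n: np.random.rand() for n in G.nodes}      # stand-in for intrinsic excitability

# Seed the adaptation probability from centrality and IE (arbitrary mixing weights)
c_max = max(centrality.values())
adapt_prob = {n: 0.7 * centrality[n] / c_max + 0.3 * ie[n] for n in G.nodes}
```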

(b). Another way to influence neurons independently of their input is to link them together. This can be done by simulating neuromodulators (NMs) which influence adaptivity for a subset of neurons within the network. There are then neurons which are linked together and increase or turn on their adaptivity because they share the same NM receptors. Different sets of neurons can become activated and increase their adaptivity whenever a sufficient level of a NM is reached. An additional learning task is then to identify suitable sets of neurons. For instance, neurons may encode aspects of the input representation that result from additional, i.e. attentional, signals co-occurring with the input.
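For (b), a minimal sketch (illustrative, not from the text): a subset of neurons shares a receptor for a neuromodulator, and their adaptivity (learning rate) is turned up only when the NM level crosses a threshold.

```python
import numpy as np

n_neurons = 100
has_nm_receptor = np.random.rand(n_neurons) < 0.2   # the NM-linked subset
base_lr = np.full(n_neurons, 0.01)

def effective_lr(nm_level, threshold=0.5, boost=10.0):
    """Per-neuron learning rates with neuromodulator-gated adaptivity."""
    lr = base_lr.copy()
    if nm_level > threshold:
        lr[has_nm_receptor] *= boost    # linked neurons turn up their adaptivity
    return lr
```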

(c). Finally, both E and I neurons are known to consist of morphologically and genetically distinct types. This opens up additional ways of creating heterogeneous networks from these neuron types and giving them distinct adaptation rules. Some of the neurons may not be adaptive at all, or barely adaptive, while others may be adaptive only once (write once, read-only), or be capable only of upregulation until they have reached their limit. (This applies to synaptic and intrinsic adaptation.) Certain neurons may have to follow the idea of unlimited adaptation in both directions in order to make such models viable.

Similar variants in neuron behavior are known from technical applications of ANNs: hyperparameters that link individual parameters into groups (‘weight sharing’) have been used, ‘bypassing’ means that some neurons do not adjust but only transmit, and ‘gating’ means that neurons may regulate the extent of transmission of a signal (cf. LSTM, ScardapaneSetal2018). Separately, the optimizer ADAM (or ADAMW) has been proposed, which computes adaptive learning rates for each parameter and achieves fast convergence.

A neuron-centric biological network model (a ‘neuronal automaton’) offers a systematic approach to such differences in adaptation. As suggested, biological neurons have different capacities for adaptation, and this may extend to their synaptic connections as well. The model would allow learning different activation functions and different adaptivity for each neuron, helped by linking neurons into groups and using fixed genetic types in the setup of the network. In each specific case the input is represented under the structural and functional constraints of the network and is therefore transformed into an internal, egocentric representation.

Transmission is not Adaptation

Current synaptic plasticity models have one decisive property which may not be biologically adequate, and which has important repercussions for the type of memory and learning algorithms that can be implemented: each processing or transmission event is an adaptive learning event.

In contrast, in biology there are many pathways that may act as filters between the use of a synapse and the adaptation of its strength. In LTP/LTD, typically 20 minutes are necessary to observe the effects. This requires the activation of intracellular pathways, often the co-occurrence of a GPCR activation, and even nuclear read-out.

Therefore we have suggested a different model, greatly simplified at first in order to test its algorithmic properties. We include intrinsic excitability in learning (LTP-IE, LTD-IE). The main innovation is that we separate learning or adaptation from processing or transmission. Transmission events leave traces at synapses and neurons that disappear over time (short-term plasticity), unless they add up to unusually high (low) neural activations, something that can be determined by threshold parameters. Only if a neuron engages in a high (low) activation-plasticity event do we get long-term plasticity at both neurons and synapses, in a localized way. Such a model is in principle capable of operating in a sandpile fashion. We do not know yet what properties the model may exhibit. Certain hypotheses exist, concerning abstraction and compression of a sequence of related inputs, and the development of individual knowledge.
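A minimal sketch of this separation between transmission and adaptation (my own simplification of the described model, with arbitrary constants): transmission leaves a decaying trace and changes no weights; only when accumulated traces cross a threshold is a long-term plasticity event committed.

```python
import numpy as np

class TraceNeuron:
    def __init__(self, n_syn, decay=0.9, theta_hi=5.0):
        self.w = np.random.rand(n_syn)     # synaptic weights (long-term state)
        self.trace = np.zeros(n_syn)       # short-term, decaying traces
        self.decay = decay
        self.theta_hi = theta_hi

    def transmit(self, inputs):
        """Processing/transmission: no weight change, only a decaying trace."""
        self.trace = self.decay * self.trace + inputs
        return float(np.dot(self.w, inputs))

    def maybe_consolidate(self, lr=0.05):
        """Adaptation: long-term plasticity only after unusually high
        accumulated activation (a separate, thresholded event)."""
        if self.trace.sum() > self.theta_hi:
            self.w += lr * self.trace      # localized long-term change
            self.trace[:] = 0.0
            return True
        return False
```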

Memory and the Volatility of Spines

Memory has a physical presence in the brain, but there are no elements which permanently code for it.

Memory is located – among other places – in dendritic spines. Spines are added during learning, and they carry stimulus- or task-specific information. Ablation of spines destroys this information (Hayashi-TakagiA2015). Astrocytes have filopodia which are also extended and retracted and make contact with neuronal synapses. The presence of memory in the spine fits a neuron-centric view: spine protrusion and retraction are guided by cellular programs. A strict causality such that x synaptic inputs cause a new spine is not necessarily true; as a matter of fact, highly conditional principles of spine formation or dissolution could hold, where the internal state of the neuron and the neuron’s history matter. The rules for spine formation need not be identical to the rules for synapse formation and weight updating (which depend on at least two neurons making contact).

A spine needs to be there for a synapse to exist (in spiny neurons), but once it is there, clearly not all synapses are alike. They differ in the amount of AMPA receptor presence and integration, and in other receptors/ion channels as well. For instance, SK channels serve to block off a synapse from further change, and may be regarded as a form of overwrite protection. Therefore, the existence or lack of a spine is the first-order adaptation in a spiny neuron; the second-order adaptation involves the synapses themselves.
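As an illustration of the two levels (my own sketch, with hypothetical names): a synaptic weight can only exist and change where a spine exists, and an SK-channel-like flag write-protects a synapse against further change.

```python
class SpinySite:
    def __init__(self):
        self.has_spine = False        # first-order adaptation: spine exists or not
        self.w = 0.0                  # second-order adaptation: synaptic strength
        self.write_protected = False  # SK-channel-like block on further change

    def grow_spine(self, initial_w=0.1):
        self.has_spine, self.w = True, initial_w

    def retract_spine(self):
        self.has_spine, self.w = False, 0.0   # losing the spine erases the synapse

    def update(self, dw):
        """Weight changes require an existing, non-protected spine."""
        if self.has_spine and not self.write_protected:
            self.w = max(0.0, self.w + dw)
```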

However, spines are also subject to high variability, on the order of several hours to a few days. Some elements may have very long persistence, months in the mouse, but they are few. MongilloGetal2017 point out the fragility of the synapse and the dendritic spine in pyramidal neurons and ask what this means for the physical basis of memory. Given what we know about neural networks, is it necessary for memory to be permanent that the same spines remain? Learning can operate with many random elements, but memory prima facie has no need for volatility.

It is most likely that memory is a secondary, ‘emergent’ property of volatile and highly adaptive structures. From this perspective it is sufficient to keep the information alive among the information-carrying units, which will recreate it in some form.

The argument is that the information is redundantly coded. So if part of the coding is missing, the rest still carries enough information to inform the system, which recruits new parts to carry the information. The information is never lost, because not all synapses, spines, and neurons are degraded at the same time, and because internal re-entrant processing keeps the information alive and recreates new redundant parts while other parts are lost. It is a dynamic cycling of information. There are difficulties if synapses are supposed to carry the whole information. The main difficulty is this: if all patterns at all times are stored in synaptic values, without undue interference, and with all the complex processes of memory, forgetting, retrieval, reconsolidation etc., can this be reconciled with a situation where the response to a simple visual stimulus already involves 30-40% of the cortical area in which processing is going on? I have no quantitative model for this. I think the model only works if we use all the multiple, redundant forms of plasticity that the neuron possesses: internal states, intrinsic properties, synaptic and morphological properties, axonal growth, presynaptic plasticity.

Balanced Inhibition/Excitation (2) – The I/E ratio

Some time ago, I suggested that the theoretical view on balanced inhibition/excitation (in cortex and cortical models) is probably flawed. I suggested that we have a loose regulation instead, where inhibition and excitation can fluctuate independently.

The I/E balance stems from the idea that the single pyramidal neuron should receive approximately equal strength of inhibition and excitation, in spite of the fact that only 10-20% of neurons in cortex are inhibitory (Destexhe2003, more on that below). Experimental measurements have shown that this conjecture is approximately correct, i.e. inhibitory neurons make stronger contacts, or their influence is stronger relative to excitatory inputs.
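A back-of-the-envelope sketch (with illustrative numbers, not taken from the cited work) shows why inhibitory contacts must be disproportionately strong for this balance to hold at the single neuron:

```python
# Toy balance condition: N_E * w_E * r_E ≈ N_I * w_I * r_I
n_total = 10_000        # synaptic inputs onto one pyramidal neuron (illustrative)
frac_inh = 0.15         # 10-20% of cortical neurons are inhibitory
r_E, r_I = 1.0, 1.0     # assume equal mean presynaptic rates for simplicity

n_I = frac_inh * n_total
n_E = n_total - n_I
w_ratio = (n_E * r_E) / (n_I * r_I)   # required w_I / w_E for balance
print(w_ratio)          # ≈ 5.7: each inhibitory input must be several-fold stronger
```

In reality inhibitory neurons also fire at higher rates and contact the soma more effectively, so the required per-synapse ratio is smaller, but the direction of the argument is the same.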

The E-I balance in terms of synaptic drive onto a single pyramidal neuron is an instance of antagonistic regulation which allows gear-shifting of inputs; in this case, it allows very strong inputs to be downshifted by inhibition to a weaker effect on the membrane potential. What is the advantage of such a scheme? Strong signals are less prone to noise and uncertainty than weak signals. Weak signals are filtered out by the inhibitory drive. Strong signals allow unequivocal signal transmission, whether excitatory synaptic input or, in other contexts, phasic increases of dopamine levels, which is then gear-shifted down by antagonistic reception. There may also be a temporal sequence: a strong signal is followed by a negative signal to restrict its time course and reduce its impact. In the case of somatic inhibition following dendritic excitation, the fine temporal structure could work together with the antagonistic gear-shifting for exactly this goal. Okun and Lampl (2008) have actually shown that inhibition follows excitation by several milliseconds.

But what are the implications for an E/I network, such as cortex?

Here is an experimental result:

During both task and delay, mediodorsal thalamic (MD) neurons have 30-50% raised firing rates, and fast-spiking (FS) inhibitory cortical neurons likewise have 40-60% raised firing rates, but excitatory (regular-spiking, RS) cortical neurons are unaltered. Thus an intervention is possible, by external input from MD, probably directly onto FS neurons, which does not affect RS neuron rates at all (figs. a and c, SchmittLIetal2017).

Mediodorsal thalamic stimulation raises inhibition, but leaves excitation unchanged.

At the same time, in this experiment, the E-E connectivity is raised (probably by some form of short-term synaptic potentiation), such that E neurons receive more input, which is counteracted by more inhibition (cf. also HamiltonL2013). The balance at the level of the single neuron would be kept, but the network exhibits only loose regulation of the I/E ratio: a unilateral increase of inhibition.

There are several studies which show that it is possible to raise inhibition and thus enhance cognition, for instance in the mPFC of CNTNAP2-deficient mice (CNTNAP2 is a neurexin-family cell adhesion protein), which have abnormally raised excitation and altered social behavior (SelimbeyogluAetal2017; cf. Foss-FeigJ2017 for an overview). Also, inhibition is necessary to allow critical-period learning – which is hypothesized to be due to a switch from internally generated spontaneous activity to external sensory perception (ToyoizumiT2013) – in line with our suggestion that the gear-shifting effect of locally balanced I/E allows only strong signals to drive excitation and spiking and filters out weak, internally generated signals.