Language in the brain

We need quite different functionality than statistics to model language. We may need this functionality in other areas of intelligence as well, but with language the need is obvious. Or, to put it for the DL community: of course we can model anything with statistics. It will just not be a very good model …

What follows is that the synaptic plasticity that produces statistical learning does not allow us to build a human language model. Weight adjustment of connections in a graph is simply not sufficient, under any plausible model of language, to capture language competence.

This is where it becomes interesting. We have just stated that the synaptic plasticity hypothesis of memory is wrong, or else our mammalian brains would be incapable of producing novel sentences and novel text, something we have not memorized before.

Learning in the Brain: Difference learning vs. Associative learning

The feedforward/feedback learning and interaction in the visual system has been analysed as a case of “predictive coding”, the “free energy principle”, or “Bayesian perception”. The underlying principle is very simple, so I will call it “difference learning”. I believe it is directly comparable (biology doesn’t invent, it re-invents) to what happens at the cell membrane between external (membrane) and internal (signaling) parameters.

It is about difference modulation: there is an existing or quiet state, and then new input arrives, by signaling (at the membrane) or by perception (in the case of vision). Now the system has to adapt to the new input. The feedback connections transfer back the old categorization of the new input. This gets added to the perception, so that a percept evolves which combines the old categorization with the new input to quickly achieve an adequate categorization for any perceptual input. There will of course be a bias in favor of existing knowledge, but that makes sense in a behavioral context.

The same thing happens at the membrane. An input signal activates membrane receptors (parameters). The internal parameters – the control structure – transfer back the stored response to the external membrane parameters. The signal then generates a suitable neuronal response according to its effect on the external parameters (bottom-up) together with the internal control structure (top-down). The response is biased in favor of the existing structure, but this also means that all signals can be interpreted quickly.

If a signal overcomes a filter, new adaptation and learning of the parameters can happen.

The general principle is difference learning, adaptation on the basis of a difference between encoded information and a new input. This general principle underlies all membrane adaptation, whether at the synapse or the spine, or the dendrite, and all types of receptors, whether AMPA, GABA or GPCR.
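The general principle can be sketched in a few lines of code. This is a minimal illustration only; the function name, update rule, and learning rate are my own assumptions, not a model from the literature:

```python
def difference_update(stored: float, new_input: float, rate: float = 0.1) -> float:
    """Adapt a stored parameter in proportion to its mismatch with a new input."""
    return stored + rate * (new_input - stored)

# A matching input produces no adaptation; a mismatch drives the stored
# value toward the input, and the update shrinks as the difference shrinks.
value = 0.0
for signal in [1.0, 1.0, 1.0, 1.0]:
    value = difference_update(value, signal)
```

The point of the sketch: when the encoded value already matches the input, the update vanishes. Only the difference drives change.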

We are used to believing that the general principle of neural plasticity is associative learning. But this is an entirely different principle, and merely derivative of difference learning in certain contexts. Associative learning as the basis of synaptic plasticity goes back more than 100 years. The idea was that by exposure to different ideas or objects, the connection between them in the mind was strengthened. It was then conjectured that two neurons A and B which are both activated would strengthen their connection (from A to B). More precisely, as was later often found, A needs to fire earlier than B in order to encode a sequential relation.
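The classical conjecture can be written down directly; the timing rule and learning rate here are illustrative assumptions:

```python
def hebbian_update(w: float, t_pre: float, t_post: float, rate: float = 0.05) -> float:
    """Strengthen the connection from A to B only if A (pre) fired before B (post)."""
    if t_pre < t_post:
        return w + rate   # A before B: the association is strengthened
    return w              # otherwise the weight is left unchanged
```

Note the contrast with difference learning: adaptation here is driven purely by co-activation and its order, not by any mismatch with stored information.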

What would be predicted by difference learning? An existing connection would encode the strength of synaptic activation at that site. As long as the actual signal matches, there is no need for adaptation. If it becomes stronger, the synapse may acquire additional receptors by using its internal control structure. This control structure may have requirements about sequentiality. The control structure may also be updated to make the new strength permanent, a new set-point parameter. On the other hand, a signal that is persistently weaker than memorized will ultimately lead the synapse to wither and die.
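As a toy sketch of this prediction: the synapse stores a set-point, signals matching it cause no change, stronger signals raise the set-point, and persistently weaker signals let the synapse wither. The class, tolerance, and decay factor are invented for illustration:

```python
class Synapse:
    """A synapse that stores a set-point and adapts only on mismatch."""

    def __init__(self, set_point: float = 1.0, tolerance: float = 0.1):
        self.set_point = set_point    # memorized signal strength
        self.tolerance = tolerance    # matching signals need no adaptation
        self.alive = True

    def receive(self, signal: float) -> None:
        diff = signal - self.set_point
        if abs(diff) <= self.tolerance:
            return                    # signal matches memory: no adaptation
        self.set_point += 0.5 * diff  # stronger: add receptors; weaker: lose them
        if self.set_point < 0.2:      # far below memory: wither and die
            self.alive = False
```

A matching signal leaves the synapse untouched; a strong signal establishes a new set-point; repeated near-zero signals drive the set-point down until the synapse dies.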

Similar outcomes, entirely different principles. Association is encoded by any synapse, and since membrane receptors are plastic, associative learning is a restricted derivative of difference learning.

Epigenetics and memory

Epigenetic modification is a powerful mechanism for the induction, the expression and persistence of long-term memory.

For long-term memory, we need to consider diverse cellular processes. These occur in neurons from different brain regions (in particular hippocampus, cortex, amygdala) during memory consolidation and recall. For instance, long-term changes of kinase expression in the proteome, changes in receptor subunit composition and localization at synaptic/dendritic membranes, epigenetic modifications of chromatin such as DNA methylation and histone methylation in the nucleus, and posttranslational modifications of histones, including phosphorylation and acetylation, all play a role. Histone acetylation is of particular interest because a number of known medications function as histone deacetylase (HDAC) inhibitors, i.e. they have the potential to increase DNA transcription and memory (more on this in a later post).

Epigenetic changes are important because they define the internal conditions for plasticity for the individual neuron. They underlie, for instance, kinase- or phosphatase-mediated (de)activations of enzymatic proteins and therefore influence the probability that membrane proteins become altered by synaptic activation.

Among epigenetic changes, DNA methylation typically acts to alter, and often to repress, DNA transcription; in vertebrates it occurs at cytosines, mostly within CpG islands. DNA methylation is dynamically regulated by enzymes such as Tet3, which catalyses an important step in the demethylation of DNA. In the dentate gyrus of live rats, it was shown that the expression of Tet3 is greatly increased by LTP – synaptically induced memorization – suggesting that certain DNA stretches were demethylated [5], and presumably activated. During induction of LTP by high-frequency electrical stimulation, DNA methylation changes specifically for certain genes known for their role in neural plasticity [1]. The expression of neural plasticity genes is widely correlated with the methylation status of the corresponding DNA.

So there is interesting evidence for filtering the induction of plasticity via the epigenetic landscape and modifiers of gene expression, such as HDAC inhibitors. Substances which act as histone deacetylase (HDAC) inhibitors increase histone acetylation. An interesting result from research on fear suggests that HDAC inhibitors increase some DNA transcription and specifically enhance fear extinction memories [2], [3], [4].

Transmission is not Adaptation

Current synaptic plasticity models have one decisive property which may not be biologically adequate, and which has important repercussions for the types of memory and learning algorithms that can be implemented: each processing or transmission event is also an adaptive learning event.

In contrast, in biology there are many pathways that may act as filters between the use of a synapse and the adaptation of its strength. In LTP/LTD, typically 20 minutes are necessary before the effects can be observed. This requires the activation of intracellular pathways, often the co-occurrence of GPCR activation, and even nuclear read-out.

Therefore we have suggested a different model, greatly simplified at first in order to test its algorithmic properties. We include intrinsic excitability in learning (LTP-IE, LTD-IE). The main innovation is that we separate learning or adaptation from processing or transmission. Transmission events leave traces at synapses and neurons that disappear over time (short-term plasticity), unless they add up to unusually high (or low) neural activations, something that can be determined by threshold parameters. Only if a neuron engages in such a high (low) activation-plasticity event do we get long-term plasticity at neurons and synapses, in a localized way. Such a model is in principle capable of operating in a sandpile fashion. We do not yet know what properties the model may exhibit. Certain hypotheses exist, concerning abstraction and compression of a sequence of related inputs, and the development of individual knowledge.
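A minimal toy version of this separation can make the idea concrete. This is my own sketch, not the published model; the decay factor, threshold, and weight increment are invented for illustration:

```python
class Neuron:
    """Transmission leaves decaying traces; plasticity fires only above a threshold."""

    def __init__(self, decay: float = 0.9, high: float = 2.0):
        self.trace = 0.0                  # short-term trace, disappears over time
        self.decay = decay
        self.high = high                  # threshold parameter for plasticity
        self.long_term_weight = 1.0

    def transmit(self, amplitude: float) -> bool:
        """Process a signal; return True only if a plasticity event was triggered."""
        self.trace = self.trace * self.decay + amplitude
        if self.trace > self.high:        # unusually high activation
            self.long_term_weight += 0.1  # long-term, localized change
            self.trace = 0.0              # trace consumed by the event
            return True
        return False                      # mere transmission: no learning
```

Isolated transmissions decay away without leaving a permanent mark; only when events add up fast enough to cross the threshold does a long-term plasticity event occur.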

The LTP/LTD hypothesis of memory storage

In a classical neural network, where storage relies only on synapses, all memory is always present. Synaptic connections have specific weights, and any processing event uses them with all of the memory involved.

It could be of course that in a processing event only small regions of the brain network are being used, such that the rest of the connections form a hidden structure, a reservoir, a store or repository of unused information. Such models exist.

There is a real issue with stable storage of a large number of patterns, with a realistic amount of interference, in a synaptic memory model. Classification, such as recognition of a word shape, can be done very well, but storing 10,000 words and using them appropriately seems difficult. Yet that is still a small vocabulary for a human memory.

Another solution is conditional memory, i.e. storage that is only accessed when activated and otherwise remains silent. Neurons offer many possibilities for storing memory other than in the strength of a synapse, and it should be worthwhile to investigate whether any of these may be exploited in a theoretical model.
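A trivial sketch of the idea (class and method names invented for illustration): memory that contributes nothing to processing unless its gate is activated.

```python
class ConditionalMemory:
    """Storage that stays silent unless explicitly activated."""

    def __init__(self):
        self._store = {}

    def write(self, key: str, value: float) -> None:
        self._store[key] = value

    def read(self, key: str, activated: bool):
        if not activated:
            return None          # silent: the memory is present but unused
        return self._store.get(key)
```

Unlike a classical weight matrix, where every stored pattern participates in every processing event, the stored values here are invisible to any computation that does not activate them.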