Memory and the Volatility of Spines

Memory has a physical presence in the brain, but no elements permanently code for it.

Memory is located – among other places – in dendritic spines. Spines are added during learning and carry stimulus- or task-specific information; ablation of spines destroys this information (Hayashi-Takagi A2015). Astrocytes have filopodia which are likewise extended and retracted and make contact with neuronal synapses. The presence of memory in the spine fits a neuron-centric view: spine protrusion and retraction are guided by cellular programs. A strict causality such that x synaptic inputs cause a new spine is not necessarily true; in fact, highly conditional principles of spine formation or dissolution could hold, where the internal state of the neuron and the neuron's history matter. The rules for spine formation need not be identical to the rules for synapse formation and weight updating (which depend on at least two neurons making contact).

A spine needs to be there for a synapse to exist (in spiny neurons), but once it is there, clearly not all synapses are alike. They differ in their AMPA receptor content and integration, and in other receptors and ion channels as well. For instance, SK channels serve to block a synapse from further change and may be regarded as a form of overwrite protection. The existence or lack of a spine is therefore the first-order adaptation in a spiny neuron; the second-order adaptation involves the synapses themselves.

However, spines are also subject to high variability, on the order of several hours to a few days. Some elements may have very long persistence, months in the mouse, but they are few. MongilloGetal2017 point out the fragility of the synapse and the dendritic spine in pyramidal neurons and ask what this means for the physical basis of memory. Given what we know about neural networks, is it necessary that the same spines remain for memory to be permanent? Learning can operate with many random elements, but memory has prima facie no need for volatility.

It is most likely that memory is a secondary, 'emergent' property of volatile and highly adaptive structures. From this perspective it is sufficient to keep the information alive among the information-carrying units, which will recreate it in some form.

The argument is that the information is redundantly coded. If part of the coding is lost, the rest still carries enough information to inform the system, which recruits new parts to carry the information. The information is never lost, because not all synapses, spines, and neurons are degraded at the same time, and because internal reentrant processing keeps the information alive and creates new redundant parts at the same time as other parts are lost. It is a dynamic cycling of information.

There are difficulties if synapses are supposed to carry the whole information. The main difficulty is this: if all patterns at all times are stored in synaptic values, without undue interference, and with all the complex processes of memory, forgetting, retrieval, reconsolidation etc., can this be reconciled with a situation where the response to a simple visual stimulus already engages 30-40% of the cortical area in which processing occurs? I have no quantitative model for this. I think the model only works if we use all the multiple, redundant forms of plasticity that the neuron possesses: internal states, intrinsic properties, synaptic and morphological properties, axonal growth, presynaptic plasticity.
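The redundancy argument can be sketched in a few lines. This is my own toy illustration, with arbitrary unit counts and turnover rates: a single bit stands in for a distributed pattern, and "reentrant recreation" is reduced to a majority vote.

```python
import random

random.seed(0)

N_UNITS = 200    # redundant information-carrying elements (spines, synapses, ...)
TURNOVER = 0.10  # fraction of elements degraded per cycle (assumed value)
CYCLES = 100

pattern_bit = 1                  # one bit of information, coded redundantly
units = [pattern_bit] * N_UNITS
ever_replaced = set()

for _ in range(CYCLES):
    # Degradation: a random subset of elements loses its value.
    lost = random.sample(range(N_UNITS), int(TURNOVER * N_UNITS))
    for i in lost:
        units[i] = None
    ever_replaced.update(lost)
    # Reentrant recreation: the surviving majority rewrites the lost elements.
    survivors = [u for u in units if u is not None]
    majority = 1 if sum(survivors) > len(survivors) / 2 else 0
    for i in lost:
        units[i] = majority

print(all(u == pattern_bit for u in units))  # True: the information is never lost
print(len(ever_replaced))                    # nearly all 200 units were replaced at some point
```

The bit survives indefinitely because not all elements are degraded at the same time, even though essentially every individual element is eventually replaced. The same dynamic cycling should apply to distributed patterns rather than single bits.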

Balanced Inhibition/Excitation (2) – The I/E ratio

Some time ago, I suggested that the theoretical view of balanced inhibition/excitation (in cortex and cortical models) is probably flawed. I suggested that we have loose regulation instead, where inhibition and excitation can fluctuate independently.

The I/E balance stems from the idea that the single pyramidal neuron should receive approximately equal strengths of inhibition and excitation, in spite of the fact that only 10-20% of neurons in cortex are inhibitory (Destexhe2003; more on that below). Experimental measurements have shown that this conjecture is approximately correct, i.e. inhibitory neurons make stronger contacts, or their influence is stronger relative to excitatory inputs.

The E-I balance in terms of synaptic drive onto a single pyramidal neuron is an instance of antagonistic regulation which allows gear-shifting of inputs; in this case, it allows very strong inputs to be downshifted by inhibition to a weaker effect on the membrane potential. What is the advantage of such a scheme? Strong signals are less prone to noise and uncertainty than weak signals. Weak signals are filtered out by the inhibitory drive. Strong signals allow unequivocal signal transmission, whether excitatory synaptic input or (in other contexts) phasic increases of dopamine levels, which are then gear-shifted down by antagonistic reception. There may also be a temporal sequence: a strong signal is followed by a negative signal to restrict its time course and reduce its impact. In the case of somatic inhibition following dendritic excitation, the fine temporal structure could work together with the antagonistic gear-shifting toward exactly this goal. Okun and Lampl (2008) have actually shown that inhibition follows excitation by several milliseconds.
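The gear-shifting idea can be put in numbers. A minimal sketch, assuming a linear inhibitory gain and an arbitrary transmission threshold (both values invented for illustration, not measured):

```python
def net_drive(excitation, inh_gain=0.8):
    """Antagonistic gear-shifting: inhibition scales with excitation
    (inh_gain is an assumed value), so strong inputs are downshifted
    rather than passed at full strength."""
    inhibition = inh_gain * excitation
    return excitation - inhibition

THRESHOLD = 1.0  # assumed transmission threshold on the gear-shifted drive

weak, strong = 3.0, 10.0
print(net_drive(weak) > THRESHOLD)    # False: the weak signal is filtered out
print(net_drive(strong) > THRESHOLD)  # True: the strong signal transmits, downshifted from 10 to ~2
```

With proportional inhibition, both inputs are reduced by the same factor, but only the strong one still clears the threshold: unequivocal transmission for strong signals, filtering for weak ones, as suggested above.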

But what are the implications for an E/I network, such as cortex?

Here is an experimental result:

During both task and delay, mediodorsal thalamic (MD) neurons show 30-50% raised firing rates, and fast-spiking (FS) inhibitory cortical neurons likewise show 40-60% raised firing rates, but excitatory (regular-spiking, RS) cortical neurons are unaltered. Thus an intervention is possible, by external input from MD, probably directly to FS neurons, which does not affect RS neuron rates at all (fig. a and c, SchmittLIetal2017).


Mediodorsal thalamic stimulation raises inhibition, but leaves excitation unchanged.

At the same time, in this experiment, E-E connectivity is raised (probably by some form of short-term synaptic potentiation), such that E neurons receive more input, which is counteracted by more inhibition (cf. also Hamilton, L2013). The balance at the level of the single neuron would be kept, but the network exhibits only loose regulation of the I/E ratio: a unilateral increase of inhibition.

There are several studies which show that it is possible to raise inhibition and thus enhance cognition, for instance in the mPFC of CNTNAP2 (neurexin, a cell adhesion protein) deficient mice, which have abnormally raised excitation and altered social behavior (SelimbeyogluAetal2017; cf. Foss-FeigJ2017 for an overview). Inhibition is also necessary to allow critical-period learning, which is hypothesized to be due to a switch from internally generated spontaneous activity to external sensory perception (ToyoizumiT2013). This is in line with our suggestion that the gear-shifting effect of locally balanced I/E allows only strong signals to drive excitation and spiking, and filters out weak, internally generated signals.


Balanced Inhibition-Excitation

Another idea that I consider ill-conceived is the notion that neural networks need to have balanced inhibition-excitation, meaning that with every rise (or fall) of overall excitation of the network, inhibition has to closely match it.

On the one hand, this looks like a truism: excitation activates inhibitory neurons, and therefore larger excitation means larger inhibition, which reduces excitation. However, the idea in its present form stems from neural modeling: conventional neural networks, with their uniform neurons and dispersed connectivity, easily either stop spiking for lack of activity or spike at very high rates and 'fill up' the whole network to capacity. It is difficult to tune them to the space in-between, and difficult to keep them in this space. Therefore it was postulated that biological neural networks face the same problem, and that here too excitation and inhibition need to be closely matched.
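The tuning problem can be caricatured in a few lines. This is a deliberate toy model, not a biophysical one: the single gain parameter lumps net excitation minus inhibition into one number.

```python
def run_net(gain, r0=0.5, steps=200):
    """Toy recurrent network: activity at the next step is the net
    recurrent gain (excitation minus inhibition, lumped into one number)
    times current activity, clipped to the physical range [0, 1]."""
    r = r0
    for _ in range(steps):
        r = min(1.0, max(0.0, gain * r))
    return r

print(run_net(0.95))  # ~0: slightly too little net excitation, activity dies out
print(run_net(1.05))  # 1.0: slightly too much, the network 'fills up' to capacity
print(run_net(1.00))  # 0.5: only exact tuning keeps activity in the space in-between
```

Any mismatch drives the network to silence or saturation; only the knife-edge gain stays in between. This is the modeling situation that motivated the balance postulate, which biological networks, with their heterogeneous elements, arguably do not face.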

First of all, inhibition is not simple. Inhibitory-inhibitory interactions make the simplistic explanation unrealistic, and the many different types of inhibitory neurons that have evolved make it difficult to implement the balanced inhibition-excitation concept.

Secondly, more evolved and realistic neural networks do not face the tuning problem; they are resilient even with larger and smaller differences between inhibition and excitation.

Finally, there are a number of experimental findings showing that it is possible to tune inhibition in the absence of tuning excitation. In a coupled negative-feedback model this simply means that the equilibrium values change. But some excitatory neurons may develop strong activity without directly increasing their own inhibition. Inhibition need not be uniformly coupled to excitation if a network can tolerate fairly large fluctuations in excitation.

Ubiquitous interneurons may still be responsible for guarding lower and upper levels of excitation (‘range-control’). This range may still be variably positioned.

In the next post I want to discuss an interesting form of regulation of inhibitory neurons, which also does not fit well with the concept of balanced inhibition-excitation.


Dopamine and Neuromodulation

Some time ago, I suggested that equating dopamine with reward learning was a bad idea. Why?
First of all, because it is a myopic view of the role of neuromodulation in the brain (and also in invertebrate animals). There are at least four centrally released neuromodulators; they all act on G-protein-coupled receptors (some not exclusively), and they all have effects on neural processing as well as memory. Furthermore, there are myriad neuromodulators which are locally released and which have similar effects, all acting through different receptors but on the same internal pathways, activating G-proteins.

Reward learning means that reward increases dopamine release, and that increased dopamine availability will increase synaptic plasticity.

That was always simplistic and, like any half-truth, misleading.

Any neuromodulator is variable in its release properties. This results, first, from the activity of its NM-producing neurons, such as those in the locus coeruleus, dorsal raphe, VTA, medulla etc., which receive input, including from each other; and secondly, from control of axonal and presynaptic release, which is independent of the central signal. So there is local modulation of release. Given a signal which increases e.g. firing in the VTA, we still need to know which target areas are responsive at the present time, and at which synapses precisely the signal is directed. How the global signal is interpreted depends on the local state of the network.

Secondly, the activation of G-protein-coupled receptors is definitely an important ingredient in activating the intracellular pathways that are necessary for the expression of plasticity. Roughly, concurrent activation of calcium and cAMP/PKA (within 10 s or so) has been found to be supportive of, or necessary for, inducing synaptic plasticity. However, dopamine, like the other centrally released neuromodulators, acts through antagonistic receptors, increasing or decreasing PKA, increasing or reducing plasticity. It is again local computation which decides the outcome of NM signaling at each site.
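The local computation described here can be sketched as follows. The linear D1/D2 receptor balance, the gains, and the thresholds are all invented for illustration, not measured values; only the ~10 s coincidence window and the antagonistic PKA effect come from the text above.

```python
def pka_level(dopamine, d1_density, d2_density, baseline=1.0):
    """Antagonistic reception: D1-like receptors raise PKA (via Gs/cAMP),
    D2-like receptors lower it (via Gi). The local receptor balance decides
    what the same global dopamine signal does at each site.
    Linear model with invented gains, purely for illustration."""
    return baseline + dopamine * (d1_density - d2_density)

def plasticity_induced(t_calcium, t_pka_event, pka, window=10.0, pka_threshold=1.5):
    """Coincidence rule: a calcium event and a sufficient cAMP/PKA signal
    must occur within roughly 10 s of each other."""
    return abs(t_calcium - t_pka_event) <= window and pka >= pka_threshold

dopamine = 1.0  # one global signal...
pka_a = pka_level(dopamine, d1_density=0.8, d2_density=0.1)  # ...at a D1-dominated site
pka_b = pka_level(dopamine, d1_density=0.1, d2_density=0.8)  # ...at a D2-dominated site

print(plasticity_induced(2.0, 8.0, pka_a))   # True: coincident events, PKA raised
print(plasticity_induced(2.0, 8.0, pka_b))   # False: same global signal, PKA suppressed
print(plasticity_induced(2.0, 20.0, pka_a))  # False: outside the coincidence window
```

The same global dopamine signal promotes plasticity at one site and suppresses it at another; the outcome is decided locally, by receptor composition and event timing.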

So, is there a take-home message, rivaling the simplicity of dopamine=reward?

NMs alter representations (=thought) and memorize them (=memory), but the interpretation is flexible at local sites (=learn and re-learn).

Dopamine alters thought and memory in a way that can be learned and re-learned.

Back in 1995 I came up with the idea of analysing neuromodulators like dopamine as a method of introducing global parameters into neural networks, which were considered at the time to admit only local, distributed computations. It seemed to me then, as now, that the capacity for global control of huge brain areas (serotonergic, cholinergic, dopaminergic and noradrenergic systems) was really what set neuromodulation apart from the neurotransmitters glutamate and GABA. There is no need to single out dopamine as the one central signal which induces simple increases in its target areas, when in reality changes happen through antagonistic receptors, and there are many central signals. Also, the concept of hedonistic reward is badly defined and essentially restricted to Pavlovian conditioning in animals and addiction in humans.

Since the only known global parameter in neural networks at the time occurred in reinforcement learning, some people created a match, using dopamine as the missing global reinforcement signal (Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science, 1997). That could not work, because reinforcement learning requires proper discounting within a decision tree. But the idea stuck. Ever since, I have been upset at this primitive oversimplification. Bad ideas in neuroscience.

Scheler, G and Fellous, J-M: Dopamine modulation of prefrontal delay activity: reverberatory activity and sharpness of tuning curves. Neurocomputing, 2001.

Scheler, G. and Schumann, J: Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance. Proceedings of the International Joint Conference on Neural Networks 2003, Volume: 1. DOI: 10.1109/IJCNN.2003.1223347

Bad Ideas in Neuroscience

balanced excitation inhibition

dopamine=reward learning

hidden layers

explaining attention by top-down and bottom-up processes

I should collect some more. Why are they bad? Because they are half-truths. There is "something" right about these ideas, but as scientific concepts, the way they are currently defined, I think they are wrong. They need to be replaced.

The LTP/LTD hypothesis of memory storage

In a classical neural network, where storage relies only on synapses, all memory is always present. Synaptic connections have specific weights, and any processing event uses them with all of the memory involved.

It could of course be that in a processing event only small regions of the brain network are being used, such that the rest of the connections form a hidden structure: a reservoir, a store, or repository of unused information. Such models exist.

There is a real issue with the stable storage of a large number of patterns, with a realistic amount of interference, in a synaptic memory model. Classification, such as recognition of a word shape, can be done very well, but storing 10,000 words and using them appropriately seems difficult. Yet that is still a small vocabulary for a human memory.
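The interference problem can be made concrete with a standard Hopfield-style associative memory, whose storage capacity is roughly 0.14 N patterns for N neurons. A small sketch (pattern counts and network size chosen only to keep the demo fast):

```python
import random

random.seed(1)
N = 100  # neurons

def hopfield_weights(patterns):
    """Hebbian outer-product storage: all memory lives in the synaptic weights."""
    w = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    w[i][j] += p[i] * p[j] / N
    return w

def recall_overlap(w, p):
    """One synchronous update starting from the stored pattern itself;
    overlap 1.0 means perfect recall."""
    out = [1 if sum(w[i][j] * p[j] for j in range(N)) >= 0 else -1
           for i in range(N)]
    return sum(o * x for o, x in zip(out, p)) / N

def mean_overlap(n_patterns):
    patterns = [[random.choice((-1, 1)) for _ in range(N)]
                for _ in range(n_patterns)]
    w = hopfield_weights(patterns)
    return sum(recall_overlap(w, p) for p in patterns) / n_patterns

low_load = mean_overlap(5)    # well below the ~0.14*N capacity limit
high_load = mean_overlap(40)  # well above it
print(low_load)   # close to 1.0: near-perfect recall
print(high_load)  # clearly below 1.0: interference corrupts the stored patterns
```

Scaled up naively, 10,000 word-like patterns would require on the order of 70,000+ neurons devoted purely to storage in such a model, before any of the processes of forgetting, retrieval, or reconsolidation are accounted for.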

Another solution is conditional memory, i.e. storage that is only accessed when activated and otherwise remains silent. Neurons offer many possibilities for storing memory other than in the strength of a synapse, and it should be worthwhile to investigate whether any of these may be exploited in a theoretical model.

Neural Coding

To understand neural coding, we have to consider the relationship between synchronized membrane potentials (local field potentials) and the firing of the individual neuron. These are two different processes, because the firing of the single neuron is not determined simply by the membrane potential exceeding a fixed threshold. Rather, the membrane potential's fluctuation does not predict the individual neuron's firing, because the neuron has a dynamic, flexible firing threshold that is determined by its own internal parameters. Also, the membrane potential is subject to synchronization by direct contact between membranes; it is not necessarily or primarily driven by synaptic input or neuronal spiking. Similarly, HahnG2014(Kumar) have noted that membrane synchronization cannot be explained by a spiking neural network.
The determination of an individual neuron’s firing threshold is a highly dynamic process, i.e. the neuron constantly changes its conditions for firing without necessarily disrupting its participation in ongoing membrane synchronization processes. In other words, membrane potential fluctuations are determined by synaptic input as well as local synchronization processes, and spikes depend on membrane potentials filtered by a dynamic, individually adjustable firing threshold.


The model for a neural coding device contains the following:

A neuronal membrane that is driven by synaptic input and synchronized by local interaction (both excitatory and inhibitory)
A spiking threshold with internal dynamics, possibly within an individual range, which determines spiking from membrane potential fluctuations.

In this model the neural code is determined by at least three factors: synaptic input, local synchronization, and the firing threshold value. We may assume that local synchronization acts as a filter against signal loss, i.e. it unifies and diminishes differences in synaptic input. Firing thresholds act towards individualization, adding information from stored memory. The whole set-up acts to filter the synaptic input pattern towards a more predictable output pattern.
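A minimal sketch of such a coding device; the mixing coefficient for synchronization and the particular threshold dynamics are my own assumptions for illustration, not part of the model as stated:

```python
def membrane(synaptic_input, local_field, sync=0.5):
    """Local synchronization pulls the membrane toward the shared field,
    unifying and diminishing differences in synaptic input.
    The mixing coefficient sync=0.5 is an arbitrary assumption."""
    return (1 - sync) * synaptic_input + sync * local_field

class Neuron:
    """Spikes depend on the membrane potential filtered by a dynamic,
    individually adjustable firing threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def step(self, synaptic_input, local_field):
        v = membrane(synaptic_input, local_field)
        spike = v > self.threshold
        # invented intrinsic dynamics: the threshold rises after a spike
        # and slowly decays otherwise
        self.threshold += 0.2 if spike else -0.05
        return spike

# Identical synaptic input and local field, but different intrinsic
# thresholds, yield different spike codes:
a, b = Neuron(threshold=0.4), Neuron(threshold=0.8)
inputs = [1.0, 0.2, 1.0, 1.0]
field = [0.6, 0.6, 0.6, 0.6]
code_a = [a.step(x, f) for x, f in zip(inputs, field)]
code_b = [b.step(x, f) for x, f in zip(inputs, field)]
print(code_a)  # [True, False, True, True]
print(code_b)  # [False, False, True, False]
```

The two neurons see the same input and the same field, yet emit different codes: the individually adjustable threshold adds information of its own, which is the individualization role assigned to thresholds above.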