What is the most important untested computational prediction in neuroscience?

The Organization for Computational Neuroscience has started a survey, asking people for their submissions, and here is my contribution:

Prediction: The basis for learning and memory exists primarily within the single neuron.

Rationale: (A) Dendrites/axons are adaptive, in particular the expression and contribution of ion channels adapts to use. This also extends to synaptic channels. (B) The decision on transforming a transient calcium signal into a permanent trace lies within the single neuron, within its protein signaling network and DNA readout mechanisms. The neuron’s memory traces are both use-dependent (dependent on shape and size of calcium signals received) and subject to additional internal computations, e.g. involving kinases/phosphatases, early genes, histones etc.

*Remote memories are not coded by current synaptic connectivity, but internally by clusters of neurons, which become activated under certain conditions.*

Conclusion: Memory research has to focus on the cellular (neuronal) basis of adaptation, synaptic connectivity will be predictable from adequate neuron models.

The statement in italics is extra. I have no papers, no references on that. For the rest, cf.

Scheler G. Regulation of neuromodulator receptor efficacy–implications for whole-neuron and synaptic plasticity. Prog Neurobiol. 2004 Apr;72(6):399-415. PMID: 15177784

Scheler G. Learning intrinsic excitability in medium spiny neurons. F1000Res. 2013 Mar 14;2:88. doi: 10.12688/f1000research.2-88.v2. eCollection 2013. PMID: 25520776

Scheler G. Logarithmic distributions prove that intrinsic learning is Hebbian. F1000Res. 2017 (August).


Input-dependent vs. Internally guided Memory

A wealth of experimental data supports the idea that synaptic transmission can be potentiated or depressed, depending on neuronal stimulation, and that this change at the synapse is a long-lasting effect responsible for learning and memory (the LTP/LTD paradigm). What is new is to assume that synaptic and neural plasticity also requires conditions which are not input-dependent (i.e. dependent on synaptic stimulation, or even neuromodulatory activation) but instead dependent on the ‘internal state’ of the pre- or postsynaptic neuron.

Under such a model, for plasticity to happen after stimulation it must meet with a readiness on the part of the cell, the presynapse or the postsynaptic site. We may call this conditional plasticity to emphasize that conditions deriving from the internal state of the neural network must be met in order for plasticity to occur. A neural system with conditional plasticity will have different properties from current neural networks, which use unconditional plasticity in response to every stimulation. The known Hebbian neural network problem of ‘preservation of the found solution’ should become much easier to solve with conditional plasticity. Not every activation of a synapse leaves a trace. Most instances of synaptic transmission have no effect on memory. Existing synaptic strengths in many cases remain unaltered by transmission (use of the synapse), unless certain conditions are met. Those conditions could be an unusual strength of stimulation of a single neuron, a temporal sequence of (synaptic and neuromodulatory) stimulations that matches a conditional pattern, or a preconditioned high readiness for plastic change, e.g. by epigenetics. However, it is an open question whether using conditional plasticity in neural network models will help with the tasks of conceptual abstraction and knowledge representation in general, beyond improved memory stability.
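The contrast between unconditional and conditional plasticity can be sketched in a few lines of Python. The update rule, the threshold, and all names and parameter values below are illustrative assumptions, not a claim about the biological mechanism:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Unconditional Hebbian rule: every co-activation alters the weight."""
    return w + lr * pre * post

def conditional_update(w, pre, post, readiness, threshold=0.8, lr=0.1):
    """Conditional plasticity: the same Hebbian term is applied only when
    the postsynaptic cell's internal readiness exceeds a threshold.
    Otherwise transmission occurs but leaves no trace."""
    if readiness < threshold:
        return w
    return w + lr * pre * post
```

Under such a rule a found solution is preserved by default: weights only move when the cell signals readiness, e.g. after unusually strong stimulation or epigenetic preconditioning.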

The ‘readiness’ of the cell for plastic changes is a catch-all term for a very complex internal processing scheme. There is, for instance, the activation state of relevant kinases in protein signaling, such as CaMKII, PKA, PKC, and ERK, the NF-κB family of transcription factors, CREB, BDNF, factors involved in glucocorticoid signaling, c-fos etc., which can all be captured by dynamical models (Refs), but which generate prohibitively complex models with hundreds of species involved and little opportunity for generalization. Even that is not sufficient. There are epigenetic effects like histone modification, which play a role in transcription, and which would require a potentially large amount of data to be dynamically modeled as well. However, it is known that non-specific drugs like HDAC inhibitors, which enhance gene transcription by increasing histone acetylation, improve learning in general.

This underscores the idea that neurons may carry a ‘readiness potential’ or threshold, a numeric value, which indicates the state of the cell as its ability to engage in plasticity events. Plasticity events originate from the membrane, by strong NMDA or L-type channel mediated calcium influx. Observations have shown that pharmacological blockade of L-type VSCCs, as well as chelation of calcium in close proximity to the plasma membrane, inhibits immediate-early gene induction, i.e. activation of cellular plasticity programs. If we model readiness as a threshold, the strength of the calcium input either falls short of or exceeds this threshold, determining whether transcriptional plasticity is induced. If we model readiness as a continuous value, we may integrate values over time to arrive at a potentially more accurate model.

To simplify, we could start with a single scalar value of plasticity readiness, specific for each neuron but dynamically evolving, and explore the theoretical consequences of a neural network with internal state variables. We'd be free to explore various rules for the dynamic evolution of the internal readiness value.
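One minimal sketch of such a scalar readiness variable, assuming first-order dynamics (relaxation toward a baseline plus a drive term); the rate constants and the multiplicative threshold reading are illustrative choices, not established values:

```python
def step_readiness(r, drive, decay=0.1, gain=0.5, baseline=0.2):
    """One discrete-time update of the neuron's scalar readiness:
    relax toward baseline, pushed up by neuromodulatory/internal drive."""
    return r + decay * (baseline - r) + gain * drive

def plasticity_triggered(readiness, calcium, threshold=1.0):
    """Threshold reading: a calcium transient is converted into a
    permanent trace only if, scaled by readiness, it crosses threshold."""
    return readiness * calcium >= threshold
```

An alternative rule, in the continuous-value spirit described above, would integrate readiness-weighted calcium over time instead of using a one-shot comparison.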

How can we model neural plasticity?

A conjecture based on findings of plasticity in ion channel expression is that the level of expression of various ion channels reflects a memory of the cell. This means that we will have networks of neurons with slightly varying dendritic ion channel populations, which influence not only their general intrinsic excitability, but also, very specifically, synaptic transmission through their position at synapses. Some channels aid transmission, others block or reduce transmission, a phenomenon which has been studied as short-term facilitation or depression. Furthermore, ion channels at the synapse have an influence on synaptic plasticity, again supporting or blocking plastic changes at the synapse.

This makes a model of neural plasticity more complex than synaptic input-dependent LTP/LTD. A single neuron would need variables at synaptic positions for AMPA and NMDA, but also for the main potassium and calcium channels (K-A, Kir, SK, HCN, L-Ca). In addition, a single set of intrinsic variables could capture the density of ion channels in dendritic shaft position. These variables would allow us to express the neural diversity which is a result of memorization or learning.
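This set of variables could be organized per neuron roughly as follows. The channel lists come from the text; the container layout and the uniform default densities are assumptions for the sake of a runnable sketch:

```python
from dataclasses import dataclass, field
from typing import Dict, List

SYNAPTIC_CHANNELS = ["AMPA", "NMDA", "K-A", "Kir", "SK", "HCN", "L-Ca"]
INTRINSIC_CHANNELS = ["K-A", "Kir", "SK", "HCN", "L-Ca"]

@dataclass
class Synapse:
    """Per-synapse channel densities (relative units)."""
    channels: Dict[str, float] = field(
        default_factory=lambda: {c: 1.0 for c in SYNAPTIC_CHANNELS})

@dataclass
class Neuron:
    """A neuron with per-synapse variables plus one set of intrinsic
    variables for dendritic-shaft channel densities."""
    synapses: List[Synapse] = field(default_factory=list)
    intrinsic: Dict[str, float] = field(
        default_factory=lambda: {c: 1.0 for c in INTRINSIC_CHANNELS})
```

Memorization would then show up as diverging channel densities across neurons and synapses, rather than only as changed synaptic weights.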

Hypothesis: Neural plasticity is not primarily input-dependent; instead, it is guided by a neural internal state which reflects network processes of knowledge building.

How would the variables that define a neural network be learned? The intrinsic variables would be set by neuromodulatory activation and the internal state of the neuron. The synaptic variables would be set from synaptic activation, from neuromodulatory activation, and also from an ‘internal state’. (In addition, the significant spill-over from synaptic activation which reaches other synapses via the dendrite should also be modeled.)
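These two update rules can be sketched as follows, with multiplicative gating by the internal state as a simplifying assumption; the learning rates and the spill-over term are placeholders:

```python
def update_intrinsic(g, nm, internal, lr=0.05):
    """Intrinsic channel density: driven by neuromodulatory activation,
    gated by the neuron's internal state."""
    return g + lr * nm * internal

def update_synaptic(w, syn_input, nm, internal, spillover=0.0, lr=0.05):
    """Synaptic variable: synaptic activation plus dendritic spill-over
    from neighboring synapses, gated by neuromodulation and internal state."""
    return w + lr * (syn_input + spillover) * nm * internal
```

With internal state at zero neither variable moves, which reproduces the conditional character of plasticity; the spill-over term lets strong activation at one synapse nudge its dendritic neighbors.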

Let us assume that synaptic stimulation and neuromodulatory activation are well understood. What is the ‘internal state’ of a neuron?

It has been shown that epigenetic modifications are an important factor in memory. These involve methylation changes in DNA and alterations in histones. Their activation is mediated by protein signaling pathways, encompassing kinases like PKA, PKC, CaMKII, MAPK, and other important protein signaling hubs. Long-term neural plasticity and behavioral memory only happen when the internal conditions are favorable; many reductions or disruptions of internal processes prevent plasticity and behavioral memory. We do not have the data yet to model these processes in detail. But we may use variables for a neuron's internal state which facilitate or inhibit plasticity. It is an important theoretical question, then, to understand the impact of internally guided plasticity on a neural network; one hypothesis is that it helps with conceptual abstraction and knowledge building.

Hypothesis: Only a few neurons in a target area undergo the permanent, lasting changes underlying long-term memory.

Learning events in rodents lead to epigenetic changes in a targeted area, such as amygdala or hippocampus, but it seems as if there are only a few cells, neurons and non-neurons, involved at a time. These changes begin to appear 30 min after a high-frequency stimulation, as in dentate gyrus, and last at least 2-5 hours. Some have measured the effects 2 weeks after a learning event. Changes are not widespread, as would be expected in distributed memory; instead they are focal, as if only a few cells suffice to store the memory. It is also possible that the changes are strong only in a focal group of neurons, and present, but much weaker, in a more distributed group. Focal learning may be seen as a strategy to build more effective knowledge representations, similar to dropout learning techniques which improve feature representation.

Dopamine and Neuromodulation

Some time ago, I suggested that equating dopamine with reward learning was a bad idea. Why?
First of all, because it is a myopic view of the role of neuromodulation in the brain (and also in invertebrate animals). There are at least 4 centrally released neuromodulators; they all act on G-protein-coupled receptors (some not exclusively), and they all have effects on neural processing as well as memory. Furthermore, there are myriad neuromodulators which are locally released, and which have similar effects, all acting through different receptors, but on the same internal pathways, activating G-proteins.

Reward learning means that reward increases dopamine release, and that increased dopamine availability will increase synaptic plasticity.

That was always simplistic and like any half-truth misleading.

Any neuromodulator is variable in its release properties. This results, first, from the activity of its NM-producing neurons, such as in locus coeruleus, dorsal raphe, VTA, medulla etc., which receive input, including from each other, and, secondly, from control of axonal and presynaptic release, which is independent of the central signal. So there is local modulation of release. Given a signal which increases e.g. firing in the VTA, we still need to know which target areas are at the present time responsive, and at which synapses precisely the signal is directed. How the global signal is interpreted depends on the local state of the network.

Secondly, the activation of G-protein-coupled receptors is definitely an important ingredient in activating the intracellular pathways that are necessary for the expression of plasticity. Roughly, a concurrent activation of calcium and cAMP/PKA (within 10 s or so) has been found to be supportive of, or necessary for, inducing synaptic plasticity. However, dopamine, like the other centrally released neuromodulators, acts through antagonistic receptors, increasing or decreasing PKA, increasing or reducing plasticity. It is again local computation which will decide the outcome of NM signaling at each site.
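The antagonistic-receptor point can be made concrete with a toy calculation. D1-type receptors are Gs-coupled and raise cAMP/PKA, while D2-type receptors are Gi-coupled and lower it; the linear combination and unit rate constants below are simplifying assumptions:

```python
def net_pka_drive(dopamine, d1_density, d2_density, k1=1.0, k2=1.0):
    """Net PKA drive at one site from the same global dopamine signal:
    the local D1/D2 receptor balance decides the sign of the effect."""
    return dopamine * (k1 * d1_density - k2 * d2_density)
```

The same global dopamine transient thus increases PKA (favoring plasticity) at a D1-dominated site and decreases it at a D2-dominated site: the local computation decides the outcome.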

So, is there a take-home message, rivaling the simplicity of dopamine=reward?

NMs alter representations (=thought) and memorize them (=memory) but the interpretation is flexible at local sites (=learn and re-learn).

Dopamine alters thought and memory in a way that can be learned and re-learned.

Back in 1995 I came up with the idea of analysing neuromodulators like dopamine as a method of introducing global parameters into neural networks, which were considered at the time to admit only local, distributed computations. It seemed to me then, as now, that the capacity for global control of huge brain areas (serotonergic, cholinergic, dopaminergic and noradrenergic systems) was really what set neuromodulation apart from the neurotransmitters glutamate and GABA. There is no need to single out dopamine as the one central signal which induces simple increases in its target areas, when in reality changes happen through antagonistic receptors, and there are many central signals. Also, the concept of hedonistic reward is badly defined and essentially restricted to Pavlovian conditioning in animals and addiction in humans.

Since the only known global parameter in neural networks at the time occurred in reinforcement learning, some people created a match, using dopamine as the missing global reinforcement signal (Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997). That could not work, because reinforcement learning requires proper discounting within a decision tree. But the idea stuck. Ever since, I have been upset at this primitive oversimplification. Bad ideas in neuroscience.

Scheler, G and Fellous, J-M: Dopamine modulation of prefrontal delay activity: reverberatory activity and sharpness of tuning curves. Neurocomputing, 2001.

Scheler, G. and Schumann, J: Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance. Proceedings of the International Joint Conference on Neural Networks 2003, Volume: 1. DOI: 10.1109/IJCNN.2003.1223347

Non-modifiable synapses

Technical papers sometimes make a distinction between modifiable and non-modifiable synapses. Results on NMDA-NR2A vs. NMDA-NR2B receptors show that an NR2B receptor is required for LTP (learning), and a synapse which contains only NMDA-NR2A and AMPA glutamatergic receptors is not modifiable (except maybe for some LTD).

http://www.ncbi.nlm.nih.gov/pubmed/20164351

“Thus, for LTP induction, the physical presence of NR2B and its cytoplasmic tail are more important than the activation of NR2B–NMDARs, suggesting an essential function of NR2B as a mediator of protein interactions independent of its channel contribution. In contrast, the presence of NR2A is not essential for LTP, and, in fact, the cytoplasmic tail of NR2A seems to inhibit the induction of LTP.”

“The apparent contradiction can be explained by the fact that NR2B, in addition to forming part of a ligand-gated channel, also has a long cytoplasmic tail that binds (directly or indirectly) to a variety of postsynaptic signaling molecules.”

“Our data cannot distinguish whether these collaborating NR2A and NR2B subunits are in the same (triheteromeric) NMDAR complex or in different NMDARs (NR2A–NMDARs and NR2B–NMDARs) that lie near each other.”

“We hypothesize that the NR2A subunit also has a dual function. On one hand, it acts as a channel that facilitates LTP by conducting calcium. On the other hand, it acts as a scaffold that presumably recruits a protein to the synapse that inhibits LTP. Such a protein could act by antagonizing the activation of LTP signaling pathways [e.g., synaptic Ras-GTPase-activating protein (Kim et al., 2005)] or by stimulating the LTD signaling pathways [such as Rap and p38 mitogen-activated protein kinase) (Thomas and Huganir, 2004; Zhu et al., 2005; Li et al., 2006)].”

“Consistent with this idea, changes in plasticity in the visual cortex are correlated with a change in the NR2A/NR2B ratio. Light deprivation lowers the threshold of induction for LTP and is associated with a decrease in synaptic NR2A.”

It is hard to prove a negative, such as a non-modifiable synapse. Since NR2B-containing receptors often occur in extrasynaptic positions, stimulation protocols may also recruit them to the synapse. Triheteromeric receptor complexes do occur, but they may be less common. Also, the role of the NR2D and NR2C subunits, which may also bind to NR1, and which may help in creating NR2A-type-only synapses, is less clear.

Nonetheless it may be justified to make a distinction between “hard-to-modify” synapses and easily modifiable synapses.

This is entirely different from the “silent synapses”, which, because they contain no AMPA receptors, do not contribute (much) to fast signal transmission, but which may be recruited to full synapses, because they do contain NMDA-NR2B receptors.