Memory and the Volatility of Spines

Memory has a physical presence in the brain, but there are no elements which permanently code for it.

Memory is located – among other places – in dendritic spines. Spines are formed in increased numbers during learning, and they carry stimulus- or task-specific information. Ablation of these spines destroys this information (Hayashi-Takagi et al. 2015). Astrocytes have filopodia which are likewise extended and retracted and make contact with neuronal synapses. The presence of memory in the spine fits a neuron-centric view: spine protrusion and retraction are guided by cellular programs. A strict causality, such that x synaptic inputs cause a new spine, need not hold; instead, highly conditional principles of spine formation or dissolution could apply, in which the internal state of the neuron and the neuron's history matter. The rules for spine formation need not be identical to the rules for synapse formation and weight updating (which depend on at least two neurons making contact).
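
To illustrate the distinction, the following sketch contrasts a strict input-count rule with a conditional rule in which internal state and recent history gate spine formation. The state variables, thresholds and gating conditions are hypothetical, chosen only to show the logical difference between the two kinds of rule.

```python
# Illustrative contrast (not a model from the literature) between a strict
# input-count rule for spine formation and a conditional rule in which the
# neuron's internal state and recent history gate the outcome.
# All state variables and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class NeuronState:
    recent_activity: float      # running average of the neuron's own firing
    resource_level: float       # available structural resources (hypothetical)
    spines_added_recently: int  # history of recent structural change

def strict_rule(coincident_inputs: int, threshold: int = 5) -> bool:
    """x synaptic inputs -> new spine, regardless of the neuron's state."""
    return coincident_inputs >= threshold

def conditional_rule(coincident_inputs: int, state: NeuronState) -> bool:
    """The same input signal, gated by internal state and history."""
    if state.resource_level < 0.2:        # no structural resources available
        return False
    if state.spines_added_recently > 10:  # homeostatic brake on further growth
        return False
    # the effective threshold shifts with the neuron's own recent activity
    threshold = 5 + int(10 * state.recent_activity)
    return coincident_inputs >= threshold

print(strict_rule(7))                                 # True: inputs alone decide
print(conditional_rule(7, NeuronState(0.9, 0.8, 2)))  # False: threshold raised by activity
print(conditional_rule(7, NeuronState(0.1, 0.8, 2)))  # True: same inputs, different state
```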

A spine needs to be there for a synapse to exist (in spiny neurons), but once it is there, clearly not all synapses are alike. They differ in the number and integration of AMPA receptors, and in other receptors and ion channels as well. For instance, SK channels serve to block off a synapse from further change and may be regarded as a form of overwrite protection. Therefore, the existence or absence of a spine is the first-order adaptation in a spiny neuron; the second-order adaptation involves the synapses themselves.

However, spines are also subject to high turnover, with lifetimes on the order of several hours to a few days. Some may persist for very long times, months in the mouse, but they are few. Mongillo et al. (2017) point out the fragility of the synapse and the dendritic spine in pyramidal neurons and ask what this means for the physical basis of memory. Given what we know about neural networks, is it necessary for the same spines to remain in order for memory to be permanent? Learning can operate with many random elements, but memory has, prima facie, no need for volatility.

It is most likely that memory is a secondary, ‘emergent’ property of volatile and highly adaptive structures. From this perspective it is sufficient to keep the information alive among the information-carrying units, which will recreate it in some form.

The argument is that the information is redundantly coded. If part of the code is lost, the rest still carries enough information to inform the system, which recruits new parts to carry the information. The information is never lost, because not all synapses, spines and neurons are degraded at the same time, and because internal reentrant processing keeps the information alive and creates new redundant parts at the same time as other parts are lost. It is a dynamic cycling of information. Difficulties arise if synapses are supposed to carry the whole information. The main difficulty is this: if all patterns are stored at all times in synaptic values, without undue interference, and with all the complex processes of memory, forgetting, retrieval, reconsolidation and so on, can this be reconciled with the observation that the response to a simple visual stimulus already involves 30-40% of the cortical area in which processing takes place? I have no quantitative model for this. I think the model only works if we use all the multiple, redundant forms of plasticity that the neuron possesses: internal states, intrinsic properties, synaptic and morphological properties, axonal growth, presynaptic plasticity.
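
To make the redundancy argument concrete, here is a purely illustrative toy simulation: a binary pattern is held by many volatile units; at each step a random fraction of units is lost and replaced by noise, and the replaced units are then rewritten from a majority readout of the remaining units. After enough steps essentially every unit has turned over at least once, yet the pattern is still read out correctly. The number of units, the turnover rate and the majority readout are illustrative assumptions, not biological values.

```python
# Toy illustration of redundant coding surviving element turnover:
# a stored bit pattern is held by many volatile units; each step a random
# fraction of units is erased (replaced by noise) and then rewritten from
# a majority readout of the surviving units. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
B, R = 50, 21        # 50 stored bits, each coded redundantly by 21 units
turnover = 0.2       # fraction of units randomly replaced per step
steps = 200

pattern = rng.integers(0, 2, size=B)            # the memory to be kept alive
units = np.repeat(pattern[:, None], R, axis=1)  # B x R redundant copies
ever_lost = np.zeros((B, R), dtype=bool)        # track which units turned over

for _ in range(steps):
    # 1) Volatility: a random subset of units is lost and replaced by noise.
    lost = rng.random((B, R)) < turnover
    units[lost] = rng.integers(0, 2, size=lost.sum())
    ever_lost |= lost
    # 2) Reconsolidation: each bit's majority vote rewrites the freshly
    #    recruited units, keeping the information alive in new carriers.
    consensus = (units.sum(axis=1) > R // 2).astype(int)
    units[lost] = np.broadcast_to(consensus[:, None], (B, R))[lost]

readout = (units.sum(axis=1) > R // 2).astype(int)
print("fraction of units that turned over at least once:", ever_lost.mean())
print("readout accuracy after", steps, "steps:", (readout == pattern).mean())
```

In this sketch the information never resides permanently in any single unit; it is the cycle of loss and re-instruction from the surviving redundant carriers that preserves it.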

Theories, Models and Data

In the modern world, a theory is a mathematical model, and a mathematical model is a theory. A theory described in words is not a theory; it is an explanation or an opinion.

The interesting thing about mathematical models is that they go far beyond data reproduction. A theoretical model of a biological structure or process may be entirely hypothetical, or it may use a certain amount of quantitative data from experiments, integrate it into a theoretical framework, and ask questions that arise from the combined model.

A Bayesian model, in contrast, is a purely data-driven construct which usually requires additional quantitative values (‘priors’) that have to be estimated. A dynamical model of metabolic or protein signaling processes in the cell assumes only a simple theoretical structure, kinetic rate equations, and then proceeds to fill the model with data (many of them estimated) and to analyse the results. A neural network model takes a set of data and performs a statistical analysis to cluster the patterns by similarity, or to assign new patterns to previously established categories. Similarly, high-throughput or other proteomic data are usually analysed for outliers and variance, with statistical significance assessed relative to a control data set. Graph analysis of large-scale datasets for a cell type, brain regions, neural connections etc. also aims to reproduce the dataset, to visualize it, and to provide quantitative and qualitative measures of the resulting natural graph.
All these methods primarily attempt to reproduce the data, and possibly to make predictions concerning missing data or the behavior of a system that is created from the dataset.
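
As a minimal illustration of the second case, consider a sketch of a kinetic rate-equation model: a hypothetical phosphorylation/dephosphorylation cycle under mass-action kinetics. The theoretical structure is only the pair of rate equations; the rate constants used here are arbitrary placeholders standing in for exactly the kind of values that would be measured or estimated from data.

```python
# Minimal sketch of a data-filled dynamical model: a hypothetical
# phosphorylation/dephosphorylation cycle described by mass-action
# kinetic rate equations. The rate constants are placeholders; in a
# real model they would be measured or estimated from experiments.
import numpy as np
from scipy.integrate import solve_ivp

k_phos = 0.8     # assumed phosphorylation rate constant (1/s)
k_dephos = 0.3   # assumed dephosphorylation rate constant (1/s)

def rates(t, y):
    """y[0] = unphosphorylated protein, y[1] = phosphorylated protein."""
    u, p = y
    dudt = -k_phos * u + k_dephos * p
    dpdt = k_phos * u - k_dephos * p
    return [dudt, dpdt]

# Integrate from an initial state with all protein unphosphorylated.
sol = solve_ivp(rates, (0.0, 20.0), [1.0, 0.0], t_eval=np.linspace(0, 20, 101))
print("steady-state fraction phosphorylated:",
      sol.y[1, -1] / sol.y[:, -1].sum())
```

The model reproduces and interpolates the measured behaviour of the system it was fitted to; everything specific to the biology enters through the estimated constants, which is what distinguishes such data-filling models from the hypothesis-driven models discussed next.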

Theoretical models can do more.

A theoretical model can introduce a hypothesis on how a biological system functions, or even how it ought to function. It may not even need detailed experimental data, i.e. dedicated experiments and measurements, but it certainly needs observations and outcomes. It should be specific enough to spur new experiments in order to verify the hypothesis.
In contrast to Popper's criterion, a hypothetical model should not be easily falsifiable. If it were, it would probably be an uninteresting, highly specific model, for which experiments to falsify it could easily be performed. A theoretical model should be general enough to explain many previous observations and to open up possibilities for many new experiments, which support, modify and refine the model. The model may still be wrong, but at least it is interesting.
It should not be easy to decide which of several hypothetical models covers the complex biological reality best. But if we do not have models of this kind and level of generality, we cannot guide our research towards progress in answering pressing needs in society, such as in medicine. We then have to work with old, outdated models and are condemned to accumulate larger and larger amounts of individual facts for which there is no use. These facts form a continuum without a clear hierarchy, and they quickly become obsolete and repetitive, unless they are stored in machine-readable format, where they become part of data-driven analysis regardless of their quality and significance. In principle, such data can be accumulated and rediscovered by theoreticians who look for confirmation of a model. But the data only acquire significance once the model exists.

Theories are created; they cannot be deduced from data.