What’s wrong with modeling?

A typical task in theoretical neuroscience is “modeling”: for instance, building and analyzing network models of cerebellar cortex that incorporate the diversity of cell types and synaptic connections observed in this structure. The goal is to better understand the functions of these diverse cell types and connections. Approaches from statistical physics, nonlinear dynamics and machine learning can be used, and models should be “constrained” by electrophysiological, transcriptomic and connectomic data.

This all sounds very well and quite innocuous. It is “state of the art”. So what is wrong with this approach, which dominates current computational neuroscience?

What is wrong is the belief that a detailed, bottom-up model could fulfill the goal of understanding function. Such a model is only a way to synthesize existing information. Many, perhaps most, aspects of the model are selected in advance, and much existing information is left out because it is irrelevant for the model. But irrelevant for the model does not mean irrelevant for the real biological object, such as a neuron, where it may be crucial. For instance, for decades the fact that many neurons adapt their ion channels under behavioral learning was simply ignored. Then the notion of whole-neuron learning became acceptable, and under the term “intrinsic excitability” it slowly became part of computational modeling. Now various functions are being discovered for changes that were first dismissed as “homeostatic”, i.e. accepted only as house-keeping.

If we had started with a top-down model in terms of what makes sense (what would evolution prefer) and what is logically required or useful for a neuron to operate, we would have realized a long time ago that whole-neuron (intrinsic) excitability is a crucial part of learning. This paper was first published on arXiv in 2005, but accepted for publication only in 2013, when intrinsic excitability had become more widely known.
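
As a concrete illustration, here is a minimal sketch of whole-neuron learning, where an intrinsic gain parameter adapts alongside the synaptic weights. All update rules and constants are illustrative assumptions of mine, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    w = rng.normal(0.0, 0.1, size=10)   # synaptic weights
    gain = 1.0                          # intrinsic excitability: one whole-neuron parameter
    eta_w, eta_g = 0.01, 0.001          # learning rates (illustrative)
    target = 0.2                        # desired mean output rate

    for step in range(1000):
        x = rng.random(10)                          # presynaptic activity
        y = 1.0 / (1.0 + np.exp(-gain * (w @ x)))   # sigmoidal output rate
        w += eta_w * (y * x - 0.01 * w)             # Hebbian update with weight decay
        # intrinsic plasticity: the gain adapts toward a target activity level,
        # a stand-in for activity-dependent regulation of ion channels
        gain += eta_g * (target - y)

The point of the sketch is only that the neuron’s input-output curve is itself a learned quantity, not a fixed nonlinearity.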

The next paradigm: AI, Neural Networks, the Symbolic Brain

I don’t think neural networks are still in their infancy; I think they are moribund. Some say they have hit a roadblock. AI used to be based on rules, symbols and logic. Then probability and statistics came along, and neural networks were created as an efficient statistical paradigm. But unfortunately, the old AI (GOFAI) was discontinued; it was replaced by statistics. Since then, ever more powerful computers and ever more data came along, so statistics, neural networks (NN) and machine learning seemed to create value, even though they were based on simple concepts from the late eighties, early nineties or even earlier. We failed to develop; success was easy. Some argue for hybrid models, combining GOFAI and NN. But that was tried early on, and it was not very successful.

What we now need is a new and deeper understanding of what the brain actually does. Because it obviously does symbol manipulation, it does logic, it does math. Most importantly, we humans learned to speak using nothing better than a mammalian brain (with a few specializations). I believe there is a new paradigm out there which can fulfill these needs: language, robotics, cognition, knowledge creation. I call it the vertical-horizontal model: a model of neuron interaction where the neuron is a complete microprocessor of a very special kind. This allows us to build a symbolic brain.

There will be a trilogy of papers to describe this new paradigm, and a small company to build the necessary concepts. For now, here is a link to an early draft (hard to read, not a paper, more a collection right now, but ready for feedback at my email!). I’ll soon post a summary here as well.

Balanced Inhibition/Excitation (2) – The I/E ratio

Some time ago, I suggested that the theoretical view on balanced inhibition/excitation (in cortex and cortical models) is probably flawed, and that we have loose regulation instead, where inhibition and excitation can fluctuate independently.

The I/E balance stems from the idea that the single pyramidal neuron should receive approximately equal strength of inhibition and excitation, in spite of the fact that only 10-20% of neurons in cortex are inhibitory (Destexhe2003, more on that below). Experimental measurements have shown that this conjecture is approximately correct: inhibitory neurons make stronger contacts, or their influence is otherwise stronger relative to excitatory inputs, compensating for their smaller number.

The E-I balance in terms of synaptic drive onto a single pyramidal neuron is an instance of antagonistic regulation, which allows gear-shifting of inputs: very strong inputs are downshifted by inhibition to a weaker effect on the membrane potential. What is the advantage of such a scheme? Strong signals are less prone to noise and uncertainty than weak signals. Weak signals are filtered out by the inhibitory drive. Strong signals allow unequivocal signal transmission, whether excitatory synaptic input (or phasic increases of dopamine levels, in other contexts), and are then gear-shifted down by antagonistic reception. There may also be a temporal sequence: a strong signal is followed by a negative signal to restrict its time course and reduce its impact. In the case of somatic inhibition following dendritic excitation, this fine temporal structure could work together with the antagonistic gear-shifting exactly for this goal. Okun and Lampl (2008) have in fact shown that inhibition follows excitation by several milliseconds.
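
A toy calculation of the gear-shifting idea (all numbers are made up): if inhibition tracks excitation proportionally, the net drive is a scaled-down copy of the input, and a spike threshold then removes weak signals entirely while strong signals still transmit.

    def response(excitation, i_e_ratio=0.8, threshold=1.0):
        """Net effect after balanced inhibition, thresholded at spiking (toy)."""
        net = excitation - i_e_ratio * excitation   # inhibition tracks excitation
        return max(net - threshold, 0.0)

    for e in [2.0, 5.0, 20.0, 50.0]:
        print(e, response(e))   # weak inputs -> 0.0; strong inputs pass, downshifted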

But what are the implications for an E/I network, such as cortex?

Here is an experimental result:

During both task and delay, mediodorsal thalamic (MD) neurons show firing rates raised by 30-50%, and fast-spiking (FS) inhibitory cortical neurons likewise by 40-60%, while excitatory (regular-spiking, RS) cortical neurons are unaltered. Thus an intervention is possible, by external input from MD, probably directly to FS neurons, which does not affect the RS neuron rate at all (fig. a and c, SchmittLIetal2017).


Mediodorsal thalamic stimulation raises inhibition, but leaves excitation unchanged.

At the same time, in this experiment, the E-E connectivity is raised (probably by some form of short-term synaptic potentiation), such that E neurons receive more input, which is counteracted by more inhibition (cf. also HamiltonL2013). The balance at the level of the single neuron would be kept, but the network exhibits only loose regulation of the I/E ratio: a unilateral increase of inhibition.
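
A back-of-the-envelope linear rate model (my own sketch, not from the paper) shows how raised E-E coupling and raised FS firing can cancel exactly at the E population, so RS rates stay flat while the network-level amount of inhibition shifts:

    def e_rate(w_ee, w_ei, r_i, ext):
        """Steady state of r_e = w_ee*r_e - w_ei*r_i + ext (linear transfer)."""
        return (ext - w_ei * r_i) / (1.0 - w_ee)

    base = e_rate(w_ee=0.5, w_ei=1.0, r_i=2.0, ext=7.0)   # baseline: 10.0
    md   = e_rate(w_ee=0.6, w_ei=1.0, r_i=3.0, ext=7.0)   # MD drive: still 10.0

Both the recurrent excitation and the inhibition have increased, yet the E rate is unchanged: loose regulation at the network level, not a fixed I/E ratio.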

There are several studies showing that it is possible to raise inhibition and thus enhance cognition, for instance in the mPFC of CNTNAP2-deficient mice (CNTNAP2 encodes a neurexin-family cell adhesion protein), which have abnormally raised excitation and altered social behavior (SelimbeyogluAetal2017; cf. Foss-FeigJ2017 for an overview). Inhibition is also necessary to allow critical-period learning, which is hypothesized to reflect a switch from internally generated spontaneous activity to external sensory perception (ToyoizumiT2013). This is in line with our suggestion that the gear-shifting effect of locally balanced I/E lets only strong signals drive excitation and spiking, and filters out weak, internally generated signals.


Dendritic computation

A new paper, “Universal features of dendrites through centripetal branch ordering” (published July 3, 2017), shows more or less the opposite of what it cites as common wisdom: “neuronal computation is known to depend on the morphology of dendrites”.

Namely, since all dendrites follow the same general topological principles, it is probably not the dendritic morphology that matters in a functional sense. To make a dendrite functional, i.e. to let it participate in adaptive information processing, we have to refer to the ion channels and GPCRs that populate the spines and shafts and shape the generation of action potentials.
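
A crude way to make this point (a toy model, not a biophysical reconstruction): hold the input and the structure fixed and vary only a channel-like adaptation conductance; the output rate changes substantially.

    def lif_spikes(g_adapt, i_inj=1.5, steps=2000, dt=0.1):
        """Leaky integrate-and-fire with a K+-like adaptation current (toy)."""
        v, a, spikes = 0.0, 0.0, 0
        for _ in range(steps):
            v += dt * (-v - g_adapt * a + i_inj)   # leak, adaptation, input
            a += dt * (-a / 5.0)                   # adaptation decays (tau = 5)
            if v >= 1.0:                           # fixed threshold in this toy
                v = 0.0
                a += 1.0                           # each spike increments adaptation
                spikes += 1
        return spikes

    # identical input, identical structure; only the channel density differs
    print(lif_spikes(g_adapt=0.0), lif_spikes(g_adapt=0.5))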

Compare:
Stuart GJ, Spruston N. Dendritic integration: 60 years of progress. Nat Neurosci. 2015;18(12):1713-21. doi: 10.1038/nn.4157. PMID: 26605882.

Magee JC, Johnston D. Plasticity of dendritic function. Curr Opin Neurobiol. 2005;15(3):334-42. PMID: 15922583.

Scheler G. BMC Neurosci. 2013;14(Suppl 1):P344. doi: 10.1186/1471-2202-14-S1-P344. PMCID: PMC3704850.

Marder E, O’Leary T, Shruti S. Neuromodulation of circuits with variable parameters: single neurons and small circuits reveal principles of state-dependent and robust neuromodulation. Annu Rev Neurosci. 2014;37:329-46. doi: 10.1146/annurev-neuro-071013-013958.

Bad Ideas in Neuroscience

balanced excitation inhibition

dopamine=reward learning

hidden layers

explaining attention by top-down and bottom-up processes

I should collect some more. Why are they bad? Because they are half-truths. There is “something” right about each of these ideas, but as scientific concepts, the way they are currently defined, I think they are wrong. They need to be replaced.

Neural Coding

To understand neural coding, we have to consider the relationship between synchronized membrane potentials (local field potentials) and the firing of the individual neuron. These are two different processes, because the firing of the single neuron is not determined simply by the membrane potential exceeding a fixed threshold. Rather, the membrane potential’s fluctuation does not predict the individual neuron’s firing, because the neuron has a dynamic, flexible firing threshold that is determined by its own internal parameters. Also, the membrane potential is subject to synchronization by direct contact between membranes; it is not necessarily or primarily driven by synaptic input or neuronal spiking. Similarly, HahnG2014 (Kumar) have noted that membrane synchronization cannot be explained by a spiking neural network.
The determination of an individual neuron’s firing threshold is a highly dynamic process, i.e. the neuron constantly changes its conditions for firing without necessarily disrupting its participation in ongoing membrane synchronization processes. In other words, membrane potential fluctuations are determined by synaptic input as well as local synchronization processes, and spikes depend on membrane potentials filtered by a dynamic, individually adjustable firing threshold.


The model for a neural coding device contains the following:

A neuronal membrane that is driven by synaptic input and synchronized by local interaction (both excitatory and inhibitory)
A spiking threshold with internal dynamics, possibly within an individual range, which determines spiking from membrane potential fluctuations.

In this model the neural code is determined by at least three factors: synaptic input, local synchronization, and the firing threshold value. We may assume that local synchronization acts as a lossy filter, i.e. it unifies and diminishes differences in synaptic input. Firing thresholds act towards individualization, adding information from stored memory. The whole set-up acts to filter the synaptic input pattern into a more predictable output pattern.
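
Putting the three factors together in a minimal simulation sketch (all dynamics and constants are illustrative assumptions, not fitted to data):

    import numpy as np

    rng = np.random.default_rng(1)
    n, steps = 20, 500

    v = np.zeros(n)                      # membrane potentials
    theta0 = rng.uniform(0.8, 1.2, n)    # individual threshold baselines
    theta = theta0.copy()                # current (dynamic) thresholds
    k_sync = 0.2                         # strength of local synchronization

    for t in range(steps):
        syn = rng.normal(0.5, 0.3, n)    # factor 1: synaptic input
        v += -0.1 * v + syn              # leaky integration
        v += k_sync * (v.mean() - v)     # factor 2: local synchronization pulls
                                         #   potentials together (lossy filter)
        spiked = v > theta               # factor 3: individual dynamic threshold
        v[spiked] = 0.0
        theta[spiked] += 0.05            # threshold rises after a spike...
        theta += 0.01 * (theta0 - theta) # ...and relaxes to its own baseline

Synchronization makes the membrane potentials more alike, while each neuron’s threshold dynamics re-individualize the spike output; the spike pattern is the synaptic input filtered through both.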