Bad Ideas in Neuroscience

  • balanced excitation/inhibition
  • dopamine = reward learning
  • hidden layers
  • explaining attention by top-down and bottom-up processes

I should collect some more. Why are they bad? Because they are half-truths. There is “something” right about each of these ideas, but as scientific concepts, the way they are currently defined, I think they are wrong. They need to be replaced.

The LTP/LTD hypothesis of memory storage

In a classical neural network, where storage relies only on synapses, all memory is always present: synaptic connections have specific weights, and any processing event uses them, and with them all of the memory they carry.

It could of course be that in a processing event only small regions of the brain network are used, such that the rest of the connections form a hidden structure, a reservoir, a store or repository of unused information. Such models exist.

There is a real issue with stably storing a large number of patterns in a synaptic memory model, given a realistic amount of interference. Classification, such as recognition of a word shape, can be done very well, but storing 10,000 words and using them appropriately seems difficult. Yet that is still a small vocabulary for a human memory.
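
As a toy illustration of the interference problem (my own example; a classical Hopfield network stands in here for purely synaptic storage), recall quality collapses once the number of stored patterns exceeds roughly 0.14 times the number of neurons:

    # Toy sketch (assumption: Hopfield network as a stand-in for purely synaptic storage).
    # Recall quality degrades as the number of stored patterns grows relative to network size.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200  # neurons

    def recall_overlap(P):
        patterns = rng.choice([-1, 1], size=(P, N))
        W = (patterns.T @ patterns) / N          # Hebbian weights: all memory sits in the synapses
        np.fill_diagonal(W, 0)
        x = patterns[0].copy()                   # start exactly at a stored pattern
        for _ in range(10):                      # synchronous updates
            x = np.sign(W @ x)
            x[x == 0] = 1
        return (x @ patterns[0]) / N             # overlap with the original pattern (1.0 = perfect)

    for P in (5, 20, 40, 80):                    # classical capacity limit is ~0.14 * N ≈ 28
        print(P, round(recall_overlap(P), 2))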

Another solution is conditional memory, i.e. storage that is only accessed when activated and otherwise remains silent. Neurons offer many possibilities for storing memory other than in the strength of a synapse, and it would be worthwhile to investigate whether any of these may be exploited in a theoretical model.

Neural Coding

To understand neural coding, we have to consider the relationship between synchronized membrane potentials (local field potentials) and the firing of the individual neuron. We have two different processes here, because the firing of the single neuron is not determined simply by the membrane potential exceeding a fixed threshold. Rather, the membrane potential’s fluctuation does not by itself predict the individual neuron’s firing, because the neuron has a dynamic, flexible firing threshold that is determined by its own internal parameters. Also, the membrane potential is subject to synchronization by direct contact between membranes; it is not necessarily or primarily driven by synaptic input or neuronal spiking. Similarly, HahnG2014(Kumar) have noted that membrane synchronization cannot be explained from a spiking neural network.
The determination of an individual neuron’s firing threshold is a highly dynamic process, i.e. the neuron constantly changes its conditions for firing without necessarily disrupting its participation in ongoing membrane synchronization processes. In other words, membrane potential fluctuations are determined by synaptic input as well as local synchronization processes, and spikes depend on membrane potentials filtered by a dynamic, individually adjustable firing threshold.

[Figure: coding1]

The model for a neural coding device contains the following:

  • A neuronal membrane that is driven by synaptic input and synchronized by local interaction (both excitatory and inhibitory)
  • A spiking threshold with internal dynamics, possibly within an individual range, which determines spiking from membrane potential fluctuations.

In this model the neural code is determined by at least three factors: synaptic input, local synchronization, and the value of the firing threshold. We may assume that local synchronization acts as a lossy filter, i.e. it unifies and diminishes differences in the synaptic input. Firing thresholds act towards individualization, adding information from stored memory. The whole set-up acts to filter the synaptic input pattern towards a more predictable output pattern.
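
Here is a minimal numerical sketch of this three-factor picture (the functional forms, coupling strength and threshold dynamics are my own assumptions, chosen only to make the verbal model concrete):

    # Toy sketch of the three factors: synaptic input, local synchronization, dynamic threshold.
    # (Assumed functional forms; intended only to make the verbal model concrete.)
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons, n_steps = 10, 200
    v = np.zeros(n_neurons)                      # membrane potentials
    theta = np.full(n_neurons, 1.0)              # individual, dynamic firing thresholds
    spike_count = np.zeros(n_neurons, dtype=int)

    for t in range(n_steps):
        syn = rng.normal(0.1, 0.5, n_neurons)    # synaptic input (factor 1)
        v += syn
        v += 0.3 * (v.mean() - v)                # local synchronization pulls potentials together (factor 2)
        spikes = v > theta                       # dynamic threshold filters the potential (factor 3)
        theta[spikes] += 0.5                     # each spiking neuron raises its own threshold ...
        theta += 0.02 * (1.0 - theta)            # ... and thresholds relax back individually
        v[spikes] = 0.0                          # reset after a spike
        spike_count += spikes

    print("spikes per neuron:", spike_count)     # output reflects all three factors together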

Local Adjustment of a Biochemical Reaction System

This is an explanation that refers to the paper http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0055762 (Fig. 4).

Since the explanation there was brief, here is a better way to explain it:


The elementary psf results from taking the kinetic parameters and executing a single reaction complex, i.e. one forward and one backward reaction. This is the minimal unit we need. For binding reactions this is A + B <-> AB (forward kon, backward koff); for enzymatic reactions it is A + E <-> AE -> A* and A* -> A (forward kon, backward koff, kcat and kcatd).
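
As a sketch of the binding case (the rate constants and concentrations below are invented for illustration): at steady state only the ratio Kd = koff/kon matters, and the elementary psf follows directly from mass action plus conservation of the totals:

    # Elementary psf for A + B <-> AB (hypothetical rate constants and concentrations).
    # At steady state only Kd = koff/kon matters; AB follows from mass action
    # plus conservation of total A and total B.
    import math

    kon, koff = 1e-3, 0.05        # nM^-1 s^-1 and s^-1 (invented)
    Kd = koff / kon               # = 50 nM
    Btot = 100.0                  # nM of the binding partner

    def elementary_psf(Atot):
        # AB is the smaller root of AB^2 - (Atot + Btot + Kd)*AB + Atot*Btot = 0
        s = Atot + Btot + Kd
        return (s - math.sqrt(s * s - 4.0 * Atot * Btot)) / 2.0

    for Atot in (10, 20, 30):
        print(Atot, round(elementary_psf(Atot), 2))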

But in a system, every reaction is embedded. Therefore the elementary psf is changed. Example: one species participates in two reactions and binds to two partners. The kinetic rate parameters for the binding reaction are the same, but some amount of the species is sucked up by the other reaction.
Therefore, if we look at the psf, its curve will be different, and we call this the systemic psf. What this psf looks like obviously depends on the collection of reactions, in fact on the collection of ALL connected reactions.
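
To make the elementary/systemic distinction concrete, here is a toy calculation (species names and numbers invented): the same A + B binding with unchanged kon and koff, but with a second partner C that sucks up part of A, which changes the AB-versus-A curve, i.e. gives a different systemic psf:

    # Systemic psf: the same A + B <-> AB reaction, but A is also sequestered by a
    # second partner C (A + C <-> AC). The rate constants of A + B are unchanged,
    # yet the AB-vs-Atot curve differs from the elementary one. (Invented numbers;
    # the equilibrium is found by bisection on free A.)
    Kd_AB, Kd_AC = 50.0, 20.0     # nM (hypothetical dissociation constants)
    Btot, Ctot = 100.0, 200.0     # nM (hypothetical totals of the two partners)

    def systemic_psf(Atot):
        # solve the conservation equation Afree + AB + AC = Atot by bisection on Afree
        lo, hi = 0.0, float(Atot)
        for _ in range(60):
            Afree = 0.5 * (lo + hi)
            AB = Btot * Afree / (Kd_AB + Afree)
            AC = Ctot * Afree / (Kd_AC + Afree)
            if Afree + AB + AC > Atot:
                hi = Afree
            else:
                lo = Afree
        return AB

    for Atot in (10, 20, 30):
        print(Atot, round(systemic_psf(Atot), 2))   # lower and flatter than the elementary curve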

Now in practice, only a limited number of “neighboring reactions” will have an effect. This has also been found in other papers, i.e. the observation that local changes at one spot do not “travel” far.
Therefore we can now do a neat trick:

We look at a whole system and focus in on a single psf, which means a systemic psf. Example:
GoaGTP binds to AC5Ca and produces GoaGTPAC5Ca. In this system, the binding reaction is very weak: the curve over the range of GoaGTP (~10-30 nM) goes from near 0 to maybe 5 nM at most. We may decide, or have measured, that we want to improve the model at this point. We may use data that indicate a curve going from about 10 nM to about 50 nM for the same input of GoaGTP (~10-30 nM). The good thing is that we can define just such a curve using hyperbolic parameters. We have measured, or want to place, the curve such that ymax = 220, C = 78, n = 1.
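
If the hyperbolic parameters are meant as a Hill-type curve, y = ymax·x^n/(C^n + x^n), then the target systemic psf can be written down directly. This exact functional form is my assumption; the paper’s psf definition may differ in detail (e.g. by an offset or a different normalization), so the absolute values printed below are only illustrative:

    # Target systemic psf written as a Hill-type curve (assumed parameterization;
    # the paper's exact psf definition may differ, so the numbers are illustrative).
    ymax, C, n = 220.0, 78.0, 1.0

    def target_psf(x):                 # x = GoaGTP input in nM
        return ymax * x**n / (C**n + x**n)

    for x in (10, 20, 30):
        print(x, round(target_psf(x), 1))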

So now we know what the systemic psf should be, but how do we get there? We adjust the underlying kinetic rate parameters for this reaction and any neighboring reactions such that this systemic psf results (and the others do not change, or change very little).
This can obviously be done by an iterative process (a sketch in code follows the list below):

  • adjust the reaction itself first (change kinetic rates),
  • then adjust every other reaction which has changed (change kinetic rates),
  • and continue until the new goal is met and all other psfs are still the same.
  • Use reasonable error ranges to define “goal is met” and “psfs are the same”.
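
In code, the loop might look roughly like this. This is a sketch only: compute_systemic_psf(), fit_reaction_rates() and psf_distance() are hypothetical stand-ins for the model’s own simulation, fitting and comparison machinery, not real library calls.

    # Sketch of the iterative local-adjustment loop. compute_systemic_psf(),
    # fit_reaction_rates() and psf_distance() are hypothetical stand-ins, not real APIs.

    def adjust_locally(model, target_reaction, target_psf,
                       goal_tol=0.05, drift_tol=0.05, max_rounds=20):
        # remember every reaction's systemic psf before changing anything
        baseline = {r: compute_systemic_psf(model, r) for r in model.reactions}
        to_adjust = [target_reaction]
        for _ in range(max_rounds):
            for reaction in to_adjust:
                goal = target_psf if reaction is target_reaction else baseline[reaction]
                fit_reaction_rates(model, reaction, goal)            # change kinetic rates only
            # neighbors whose systemic psf drifted beyond the allowed error range
            drifted = [r for r in model.reactions if r is not target_reaction and
                       psf_distance(compute_systemic_psf(model, r), baseline[r]) > drift_tol]
            goal_met = psf_distance(compute_systemic_psf(model, target_reaction),
                                    target_psf) < goal_tol
            if goal_met and not drifted:
                return model                                         # converged within error ranges
            to_adjust = [target_reaction] + drifted
        raise RuntimeError("no convergence within the given error ranges")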

Without error ranges, I do not offer a proof that such a procedure will always converge. As a matter of fact, I suspect it may NOT always be possible. Therefore we need reasonable error ranges.
In practice, I believe that in most cases only 2, 3, maybe 4 reactions are affected at all; everything else will need such small adjustments that it is not worth touching. These functions remain very local. In the example given, only one other reaction was changed at all.

The decisive part is that we can often measure such a systemic psf, such a transfer function somewhere in the system, and therefore independently calibrate the system.

We measure the systemic psf, and we now have a procedure to force the system to match this new measurement by adjusting kinetic rates, using the psf parameters to define the intended, adjusted local transfer function.

In many cases, as in the given example, this allows us to locally and specifically test and improve the system – this is novel, and it only works because we made a clear conceptual distinction between kinetic rate parameters (which are elementary) and systemic psf parameters.

We do not derive kon vs. koff or the precise dynamics in this way. For a binding reaction it is only the ratio koff/kon (= Kd) that matters, for an enzymatic reaction it is koff/kon and kcat/kcatd. There are multiple solutions. Dynamic matching may filter out which solutions match not only the transfer function but also the timing. This has not been addressed, because it would only be another filtering step.
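
A quick illustration of the “multiple solutions” point (toy numbers): scaling kon and koff by the same factor leaves Kd, and therefore the steady-state transfer function, unchanged, but changes how fast the reaction relaxes:

    # Two (kon, koff) pairs with the same Kd = koff/kon give the same steady state
    # but different timing (invented numbers, simple Euler integration).

    def simulate(kon, koff, A=20.0, Btot=100.0, T=200.0, dt=0.01):
        AB = 0.0
        for _ in range(int(T / dt)):
            AB += dt * (kon * A * (Btot - AB) - koff * AB)   # d[AB]/dt with A held constant
        return round(AB, 2)

    print(simulate(kon=1e-3, koff=0.05))           # Kd = 50 nM, slow kinetics
    print(simulate(kon=1e-2, koff=0.5))            # same Kd = 50 nM, 10x faster: same steady state
    print(simulate(kon=1e-3, koff=0.05, T=10.0))   # after 10 s: still far from steady state
    print(simulate(kon=1e-2, koff=0.5,  T=10.0))   # after 10 s: essentially at steady state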

The procedure outlined for local adjustment of a biochemical reaction system still needs to be implemented, and more experience needs to be gained on the spread of local adjustments and on reasonable error bounds.

AMP and cAMP

When the steady state level of cAMP rises, the AMP:ATP ratio in a cell also increases.

“In cardiomyocytes, β2-AR stimulation resulted in a reduction in ATP production but was accompanied by a rise in its precursor, AMP … The AMP/ATP ratio was enhanced …, which subsequently led to the activation of AMP-activated kinase (AMPK) …” Lietal2010(JPhysiol).

This activates AMP kinase (AMPK), which phosphorylates TSC2 and RAPTOR, a component of mTORC1, and thereby de-activates mTORC1. mTORC1 is a protein complex that is activated by nutrients and growth factors, and it is of importance in neurodegeneration. Together with PDK1, it activates S6K1, which stimulates protein synthesis via the ribosomal protein S6. S6K1 and mTORC1 are caught in a positive feedback loop.

In other words, we have a complex integration of signals that converge on the ribosome in order to influence protein synthesis by sensing energy levels in the cell. Basically, AMPK decreases protein synthesis (via mTORC1).
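
Just to keep the signs straight, the chain described above can be written out as a small sign-propagation exercise (only the relations stated in the text, nothing more):

    # Sign bookkeeping for the cascade above (only the relations stated in the text).
    # +1 = activates / increases, -1 = inhibits / decreases.
    chain = [
        ("AMP/ATP ratio", "AMPK", +1),            # rising AMP/ATP activates AMPK
        ("AMPK", "mTORC1", -1),                   # AMPK (via TSC2/RAPTOR) de-activates mTORC1
        ("mTORC1", "S6K1", +1),                   # mTORC1 (with PDK1) activates S6K1
        ("S6K1", "protein synthesis", +1),        # S6K1 stimulates synthesis via ribosomal protein S6
    ]

    net = 1
    for _, _, sign in chain:
        net *= sign
    print("net effect of a rising AMP/ATP ratio on protein synthesis:", net)   # -1: it goes down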

Under optimal physiological conditions, the AMP-to-ATP ratio is maintained at a level of 0.01 (*).

(*) Hardie DG and Hawley SA. AMP-activated protein kinase: the energy charge hypothesis revisited. Bioessays 23: 1112–1119, 2001.

And here is something entirely different: sensing pH levels.

“Intracellular acidification, another stimulator of in vivo cAMP synthesis, but not glucose, caused an increase in the GTP/GDP ratio on the Ras proteins.” (RollandFetal2002)

So there is a lot that is very interesting about cAMP’s connection to cellular state sensing, and about its role in mediating between cellular state and protein synthesis.

Non-modifiable synapses

Technical papers sometimes make a distinction between modifiable and non-modifiable synapses. Results on NMDA-NR2A vs. NMDA-NR2B receptors show that an NR2B-containing receptor is required for LTP (learning), and that a synapse which contains only NMDA-NR2A and AMPA glutamatergic receptors is not modifiable (except maybe for some LTD).

http://www.ncbi.nlm.nih.gov/pubmed/20164351

“Thus, for LTP induction, the physical presence of NR2B and its cytoplasmic tail are more important than the activation of NR2B–NMDARs, suggesting an essential function of NR2B as a mediator of protein interactions independent of its channel contribution. In contrast, the presence of NR2A is not essential for LTP, and, in fact, the cytoplasmic tail of NR2A seems to inhibit the induction of LTP.”

“The apparent contradiction can be explained by the fact that NR2B, in addition to forming part of a ligand-gated channel, also has a long cytoplasmic tail that binds (directly or indirectly) to a variety of postsynaptic signaling molecules.”

“Our data cannot distinguish whether these collaborating NR2A and NR2B subunits are in the same (triheteromeric) NMDAR complex or in different NMDARs (NR2A–NMDARs and NR2B–NMDARs) that lie near each other.”

“We hypothesize that the NR2A subunit also has a dual function. On one hand, it acts as a channel that facilitates LTP by conducting calcium. On the other hand, it acts as a scaffold that presumably recruits a protein to the synapse that inhibits LTP. Such a protein could act by antagonizing the activation of LTP signaling pathways [e.g., synaptic Ras-GTPase-activating protein (Kim et al., 2005)] or by stimulating the LTD signaling pathways [such as Rap and p38 mitogen-activated protein kinase (Thomas and Huganir, 2004; Zhu et al., 2005; Li et al., 2006)].”

“Consistent with this idea, changes in plasticity in the visual cortex are correlated with a change in the NR2A/NR2B ratio. Light deprivation lowers the threshold of induction for LTP and is associated with a decrease in synaptic NR2A.”

It is hard to prove a negative, such as a non-modifiable synapse. Since NR2B-containing receptors often occur in extrasynaptic positions, stimulation protocols may also recruit them to the synapse. Triheteromeric receptor complexes do occur, but they may be less common. Also, the role of the NR2D and NR2C subunits, which may also bind to NR1, and which may help in creating NR2A-only synapses, is less clear.

Nonetheless it may be justified to make a distinction between “hard-to-modify” synapses and easily modifiable synapses.

This is entirely different from “silent synapses”, which, because they contain no AMPA receptors, do not contribute (much) to fast signal transmission, but which may be converted into full synapses, because they do contain NMDA-NR2B receptors.

Homeostatic Regulation – LDL Receptors

A recent news story covered the development of drugs targeting PCSK9, a proprotein convertase that decreases the density of LDL receptors (e.g. in the liver). The interesting part for the computational biologist is the regulation of LDL receptors: the density of LDL receptors depends on the amount of LDL available in the bloodstream, and with more LDL, their density increases. Another way to increase LDLR density is to give statins (drugs). However, statins also activate PCSK9, and PCSK9 acts to decrease LDLR. In other words, it looks as if we have a classic homeostatic regulation, and by interfering with it at one point, we may activate processes that counteract the wanted drug effect. If we now add PCSK9 inhibitors (by monoclonal antibodies, or as in this case by RNA interference), we may expect a more radical effect on keeping LDLR active. In any case, it tells us that we need to understand the system we interfere with.
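
A toy dynamical sketch of this feedback structure (all rates and functional forms are invented; the only point is that perturbing one arm of a homeostatic loop recruits the counteracting arm):

    # Toy homeostatic loop for LDL / LDLR / PCSK9 (all rates invented; Euler integration).
    # Encodes only the qualitative relations above: LDL upregulates LDLR, statins
    # upregulate both LDLR and PCSK9, PCSK9 degrades LDLR, LDLR clears LDL.

    def simulate(statin=0.0, pcsk9_inhibition=0.0, T=500.0, dt=0.01):
        LDL, LDLR, PCSK9 = 1.0, 1.0, 1.0
        for _ in range(int(T / dt)):
            pcsk9_active = PCSK9 * (1.0 - pcsk9_inhibition)
            dLDL   = 1.0 - 1.0 * LDLR * LDL                               # production vs. receptor-mediated clearance
            dLDLR  = 0.5 * LDL + 0.5 * statin - 0.3 * pcsk9_active * LDLR - 0.2 * LDLR
            dPCSK9 = 0.2 + 0.1 * statin - 0.2 * PCSK9
            LDL, LDLR, PCSK9 = LDL + dt * dLDL, LDLR + dt * dLDLR, PCSK9 + dt * dPCSK9
        return round(LDL, 2), round(LDLR, 2), round(PCSK9, 2)

    print("baseline (LDL, LDLR, PCSK9):   ", simulate())
    print("statin only:                   ", simulate(statin=1.0))          # PCSK9 rises and counteracts
    print("statin + PCSK9 inhibition:     ", simulate(statin=1.0, pcsk9_inhibition=0.9))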