Egocentric representations for general input

The individual neuron’s state need not be determined only by the inputs received.

(a). It may additionally be seeded with a probability for adaptation that is distributed with respect to the neuron’s graph properties (such as betweenness centrality or position at choke points), as well as the neuron’s current intrinsic excitability (IE), the two being related. This seeded probability would correspond to a sensitivity of the neuron to the representation produced by the subnetwork: the input representation is transformed by the properties of the subnetwork.
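
A minimal sketch of such seeding, assuming a toy random graph, a mixing weight alpha between the two factors, and uniform random IE values (all of these are illustrative choices, not part of the proposal itself):

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy directed subnetwork standing in for the actual wiring.
    G = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)

    # Graph property: betweenness centrality marks potential choke points.
    bc = nx.betweenness_centrality(G)
    bc_max = max(bc.values()) or 1.0

    # Intrinsic excitability (IE): random placeholder values in [0, 1];
    # in a full model these would come from the neuron's current state.
    ie = {n: rng.uniform(0.0, 1.0) for n in G.nodes}

    def adaptation_probability(n, alpha=0.5):
        """Seeded probability that neuron n adapts on a given update,
        mixing normalized centrality with intrinsic excitability."""
        return alpha * bc[n] / bc_max + (1.0 - alpha) * ie[n]

    # On each update step, a neuron adapts with its seeded probability.
    adapting = [n for n in G.nodes if rng.random() < adaptation_probability(n)]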

(b). Another way to influence neurons independently of their input is to link them together. This can be done by simulating neuromodulators (NMs) that influence adaptivity for a subset of neurons within the network. Neurons that share the same NM receptors are thereby linked: they increase or switch on their adaptivity together, and different sets of neurons can become adaptive whenever a sufficient level of the corresponding NM is reached. An additional learning task is then to identify suitable sets of neurons. For instance, neurons may encode aspects of the input representation that result from additional signals, such as attentional ones, co-occurring with the input.
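
A minimal sketch of NM-gated adaptivity, assuming global NM levels, randomly assigned receptor sets and a single threshold (the names NM_A/NM_B and the threshold value are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 100

    # Each neuron expresses receptors for a subset of neuromodulators;
    # sharing a receptor links neurons into a common group.
    receptors = {nm: rng.random(n_neurons) < 0.3 for nm in ("NM_A", "NM_B")}

    NM_THRESHOLD = 0.5  # level above which an NM switches adaptivity on

    def adaptive_mask(nm_levels):
        """Boolean mask of neurons whose adaptivity is turned on because
        an NM they have receptors for has reached a sufficient level."""
        mask = np.zeros(n_neurons, dtype=bool)
        for nm, level in nm_levels.items():
            if level >= NM_THRESHOLD:
                mask |= receptors[nm]
        return mask

    # Example: NM_A is released strongly, NM_B only weakly; only the
    # NM_A group applies its plasticity rule on this step.
    mask = adaptive_mask({"NM_A": 0.8, "NM_B": 0.2})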

(c). Finally, both excitatory (E) and inhibitory (I) neurons are known to comprise morphologically and genetically distinct types. This opens up additional ways of creating heterogeneous networks from these neuron types and of defining distinct adaptation rules for them. Some neurons may not be adaptive at all, or barely adaptive; others may be adaptive only once (write once, read only), or capable only of upregulation until they reach their limit. (This applies to synaptic as well as intrinsic adaptation.) Certain neurons may have to allow unlimited adaptation in both directions to make such models viable.
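
A minimal sketch of such type-dependent adaptation rules for a single synaptic or intrinsic parameter; the type names and the upregulation cap are illustrative:

    def adapt(value, delta, kind, written=False, cap=1.0):
        """One adaptation step for a synaptic or intrinsic parameter,
        dispatched on the neuron's (fixed, genetic) type."""
        if kind == "fixed":        # not adaptive at all
            return value, written
        if kind == "write_once":   # adaptive only once, then read-only
            return (value, True) if written else (value + delta, True)
        if kind == "up_only":      # upregulation only, until the cap is reached
            return min(value + max(delta, 0.0), cap), written
        if kind == "unlimited":    # adaptation in both directions, unbounded
            return value + delta, written
        raise ValueError(f"unknown neuron type: {kind}")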

Similar variants in neuron behavior are known from technical applications of ANNs: hyperparameters that link individual parameters into groups (‘weight sharing’) have long been used; ‘bypassing’ means that some neurons do not adjust but only transmit; and ‘gating’ means that neurons may regulate the extent to which a signal is transmitted (cf. LSTM, ScardapaneSetal2008). Separately, the optimizer Adam (or AdamW) has been proposed, which computes an adaptive learning rate for each parameter and achieves fast convergence.
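
For reference, Adam’s per-parameter update can be written out in a few lines; the hyperparameter defaults below are the commonly used ones:

    import numpy as np

    def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        """One Adam update: each parameter receives its own effective
        learning rate via bias-corrected first/second moment estimates."""
        m = b1 * m + (1 - b1) * grad        # running mean of gradients
        v = b2 * v + (1 - b2) * grad**2     # running mean of squared gradients
        m_hat = m / (1 - b1**t)             # bias correction (t starts at 1)
        v_hat = v / (1 - b2**t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v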

A neuron-centric biological network model (‘neuronal automaton’) offers a systematic approach to such differences in adaptation. As suggested, biological neurons have different capacities for adaptation, and this may extend to their synaptic connections as well. The model would allow learning different activation functions and different adaptivity for each neuron, aided by linking neurons into groups and by fixing genetic types in the setup of the network. In each specific case the input is filtered through the structural and functional constraints of the network and thereby transformed into an internal, egocentric representation.
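
Combining these ingredients, a per-neuron record for such a ‘neuronal automaton’ might look as follows; the field names and the thresholded-linear activation are illustrative choices:

    from dataclasses import dataclass

    @dataclass
    class NeuronalAutomaton:
        kind: str                # fixed genetic type, selects the adaptation rule
        group: int               # NM-linked group membership
        p_adapt: float           # seeded probability of adapting on an update
        act_slope: float = 1.0   # learnable slope of the activation
        act_thresh: float = 0.0  # learnable threshold of the activation

        def activation(self, x: float) -> float:
            """Thresholded linear activation with a per-neuron shape."""
            return max(0.0, self.act_slope * (x - self.act_thresh))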

Antagonistic regulation for cellular intelligence

Cellular intelligence refers to information processing in single cells, i.e. genetic regulation, protein signaling and metabolic processing, all tightly integrated with each other. The goal is to uncover general ‘rules of life’ concerning, for example, the transmission of information, homeostatic and multistable regulation, and learning and memory (habituation, sensitization, etc.). These principles extend from unicellular organisms such as bacteria to specialized cells that are part of a multicellular organism.

A prominent example is the ubiquitous role of feedback cycles in cellular information processing. These are often nested or connected to a central hub as sets of negative feedback cycles, sometimes interspersed with positive feedback cycles. Starting from Norbert Wiener’s work on cybernetics, modeling as well as mathematical analysis have given us a deeper understanding of this regulatory motif and of the complex modules that can be built from a multitude of these cycles.

Another motif of similar significance and ubiquity is antagonistic interaction. A prototypical antagonistic interaction consists of a signal, a target, and two pathways, one positive and one negative; the signal connects to the target through both pathways. No further parts are required.

On the face of it, this interaction seems redundant: when a signal is connected to a target by both a positive and a negative connection, the net change is the sum of the two, and a single connection of that net strength should suffice. Yet the motif is in fact widespread and powerful, for two main reasons:

A. Gearshifting, scale-invariance or digitalization of input: for an input signal that can occur at different strengths, antagonistic transmission allows the signal to be shifted to a lower level (‘gear’) with a limited bandwidth compared to the input range. This can also be described as scale-invariance or standardization of the input or, in the extreme case, as digitalization of an analog input signal.

B. Fast onset-slow offset response curves: here the two transmission lines are used with a time delay. The positive interaction is fast, the negative interaction is slow. The result is a fast peak response with a slower relaxation time, which is useful in many biological contexts where fast reaction times are crucial. Both aspects are illustrated in the sketch below.
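
Both aspects can be reproduced in a minimal simulation of the motif, assuming, as one possible modeling choice, a fast direct positive arm and a slow negative arm that acts divisively; the time constant tau_neg and the constant K are illustrative:

    import numpy as np

    dt, K, tau_neg = 0.01, 0.1, 2.0
    t = np.arange(0.0, 20.0, dt)

    def respond(signal):
        """Target driven by the signal directly (fast positive arm) and
        suppressed by a lagging copy of it (slow negative arm)."""
        neg = np.zeros_like(signal)
        for k in range(1, len(signal)):
            neg[k] = neg[k-1] + dt * (signal[k-1] - neg[k-1]) / tau_neg
        return signal / (K + neg)

    y_small = respond(np.where(t > 1.0, 1.0, 0.0))    # step of height 1
    y_large = respond(np.where(t > 1.0, 100.0, 0.0))  # step of height 100

    # (B) Both responses jump at t = 1 and then relax as the slow negative
    #     arm catches up: fast onset, slow offset.
    # (A) Despite the 100-fold difference in input strength, both responses
    #     settle near 1 (s / (K + s) -> 1): the sustained output occupies a
    #     narrow band, i.e. the signal is shifted into a lower 'gear'.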

There is a connection to negative feedback cycles, which can achieve similar effects by acting on the signal itself: the positive signal is counteracted by a negative input that reduces the input signal. With antagonistic interactions, the original signal is left intact, so it may act unchanged on other targets.

Modules that can be built from both antagonistic interactions and feedback cycles have not been explored in detail. One example, however, is morphogenetic patterning, often referred to as ‘Turing patterns’, which relies on a positive feedback cycle driven by an activator, plus an antagonistic activator/inhibitor interaction with a time delay for the inhibitor.

[Figure: Turing pattern]
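
A minimal 1-D sketch of this mechanism, using Gierer-Meinhardt-type kinetics as one standard instantiation: the activator amplifies itself (positive feedback) and drives its inhibitor (the antagonistic arm), while the inhibitor’s faster diffusion and reaction stand in for the delayed, longer-ranged inhibition. All constants are illustrative:

    import numpy as np

    n, steps, dt = 200, 20000, 0.01
    Da, Di, c = 0.05, 1.0, 2.0   # inhibitor diffuses and reacts faster
    rng = np.random.default_rng(0)
    act = 1.0 + 0.01 * rng.standard_normal(n)   # activator, near steady state
    inh = 1.0 + 0.01 * rng.standard_normal(n)   # inhibitor

    def laplacian(u):
        # 1-D Laplacian with periodic boundaries, grid spacing 1.
        return np.roll(u, 1) + np.roll(u, -1) - 2.0 * u

    for _ in range(steps):
        # Activator: self-amplification damped by the inhibitor; inhibitor:
        # driven by the activator (the antagonistic arm), decaying back.
        act += dt * (Da * laplacian(act) + act**2 / inh - act)
        inh += dt * (Di * laplacian(inh) + c * (act**2 - inh))

    # `act` now shows a stationary spatial pattern of regularly spaced peaks.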