The whole time series of spike occurrences is assumed to be an expression of some fundamental process governing the activity of the neurons being recorded. When a specific input pattern activates a cell assembly, the neurons are activated following a certain mode. A mode of activity thus defines how information is processed within a neural network and how it is associated with the output pattern of activity that is generated. In this framework the state of the neural network is defined by a set of parameters characterising the neural network at a certain time. The state of the network at any given time is then represented by the values of these parameters, and a network state is fully determined if all parameters are known for each neuron. If we were able, ab absurdo, to set the same initial conditions for all elements of the neural network, we would obtain the same spike trains.

For the sake of simplicity, it is rational to describe the activity of the network with the spike trains of all its elements. Spike trains are statistically expressed as point processes, and point process systems are systems whose input and output are point processes. Let us consider a simple example of a point process system whose dynamics is characterised by discrete steps in time. Let {x_i}, i = 1,...,K, be a time series with K points, where x_i represents the state of the system. In a dynamical system the subsequent state of the system is determined by its present state. The simplest expression would be to consider a map defined by x_{i+1} = a x_i, where a is a control parameter.

Biological systems, and the brain in particular, are often characterised by feedback mechanisms. The expression x_{i+1} = a x_i (1 - x_i), known as the logistic map, illustrates a simple dynamical system with a negative non-linear feedback, defined for x ∈ [0,1]. It is clear from this expression that the time arrow is non-reversible: each x_i determines a unique value x_{i+1}, but there are two possible values of x_i for each x_{i+1}.
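As a concrete illustration, here is a minimal sketch of the two maps introduced above; the function and variable names are my own, not from the text:

```python
# Minimal sketch of the two maps discussed above: the linear map
# x_{i+1} = a*x_i and the logistic map x_{i+1} = a*x_i*(1 - x_i).
def linear_map(x, a):
    return a * x

def logistic_map(x, a):
    return a * x * (1 - x)

def iterate(f, x0, a, n):
    """Return the time series x_0, x_1, ..., x_n generated by the map f."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1], a))
    return xs

# Non-reversibility of the time arrow: two distinct states can map to
# the same successor, so the past cannot be recovered from the present.
assert abs(logistic_map(0.2, 4.0) - logistic_map(0.8, 4.0)) < 1e-12
```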

A dynamical system as a whole is said to be deterministic if it is possible to predict precisely the evolution of the system in time, provided one knows exactly the initial conditions. However, a slight change or incorrect measurement in the initial conditions may result in a seemingly unpredictable evolution of the system. The passage in time of a state defines a process. Whenever a process is completely deterministic at each step of its temporal evolution but unpredictable over the long term, it is called a chaotic process, or simply chaos.

An equivalent definition of a process is a path over time, or trajectory, in the space of states. The points approached by the trajectory as time increases to infinity are called fixed points, and the set of these points forms an attractor. If the evolution in time of the system is described by a trajectory forming a closed loop - also referred to as a periodic orbit - then the system is said to have a limit cycle. A closer look at the logistic equation shows that for all values 0 < a < 1 the iterated series decays towards zero. For some other values of a, e.g. 1.7 or 2.1, the series converges to the fixed point x* = 1 - 1/a, equal to 0.41176 and 0.52381, respectively. Conversely, for a = 3.2 the series converges to two alternating fixed points, i.e. x_{i+2} = x_i, and for a = 3.52 it converges to four alternating fixed points, i.e. x_{i+4} = x_i. The trajectories of the three last examples are periodic orbits, with periods equal to 1, 2, and 4, respectively.
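These convergence claims are easy to check numerically. The sketch below iterates the logistic map past its transient and inspects the remaining orbit; the transient length and starting point are arbitrary choices of mine:

```python
# Sketch: iterate the logistic map x_{i+1} = a*x_i*(1 - x_i) past a long
# transient, then inspect the orbit it has settled on.
def logistic(x, a):
    return a * x * (1 - x)

def orbit(x0, a, n_transient=10000, n_keep=8):
    """Discard n_transient iterations, then return the next n_keep states."""
    x = x0
    for _ in range(n_transient):
        x = logistic(x, a)
    out = []
    for _ in range(n_keep):
        out.append(x)
        x = logistic(x, a)
    return out

# a = 2.1 : period-1 orbit at the fixed point x* = 1 - 1/a = 0.52381...
# a = 3.2 : period-2 orbit, x_{i+2} = x_i
# a = 3.52: period-4 orbit, x_{i+4} = x_i
```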

A further analysis of the logistic equation shows that with a control parameter equal to 4.0 and an initial condition x_0 = 0.5 the system decays to zero, but with an initial condition x_0 = 0.4 the dynamics never produces a repeating sequence of states. This aperiodic behaviour is different from randomness, or white noise, because an iterated value x_i can occur only once in the series: if a value recurred, the deterministic dynamics of the system would force the next value to be a repetition as well, and so on for all subsequent values, making the series periodic. In general, brief initial perturbations applied to any combination of the governing set of parameters move a dynamical system characterised by fixed points away from its periodic orbits, but with the passing of time the trajectory collapses asymptotically onto the same attractor. If the system is deterministic, yet sensitive to small initial perturbations, and the trajectory defining its dynamics is an aperiodic orbit, then the system is said to have a chaotic attractor, often referred to as a strange attractor.
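The sensitivity to initial conditions mentioned here can be made concrete with a short numerical experiment; the perturbation size and number of steps are illustrative choices, not values from the text:

```python
# Sketch: for a = 4.0 two trajectories of the logistic map that start
# 1e-9 apart stay close for a while, then separate to order one, even
# though every single step is fully deterministic.
def logistic(x, a=4.0):
    return a * x * (1 - x)

def trajectory(x0, n, a=4.0):
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1], a))
    return xs

t1 = trajectory(0.4, 50)
t2 = trajectory(0.4 + 1e-9, 50)
divergence = max(abs(u - v) for u, v in zip(t1, t2))  # grows to order 1
```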

Spike trains are treated as point process systems, and a crucial requirement for a theoretical framework is to identify these point process systems without any assumption as to whether or not they are linear. Point process systems are said to be identified when an acceptable model is found. The first step of the identification is to estimate certain conditional rate functions, called kernels, of the spike trains. The kernel of zero order, i.e. a constant, simply measures the mean firing rate - the average rate of action potentials per unit time. The kernel of first order, a function of a single time argument, relates to the average effect of a single trigger spike (pre-synaptic) on the spike train. The kernel of second order, a function of two time arguments, relates to the interactions between pairs of spikes. And so forth for higher-order kernels. Successive models can then be constructed recursively, based on the kernel of zero order; on the kernels of zero and first order; on the kernels of zero, first, and second order; and so on.
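As a hedged illustration of what the zero- and first-order kernel estimates measure, here is a minimal sketch on discretised spike trains (one bin = one time step, 1 = spike). The toy trains, bin width and lag window are my own illustrative assumptions, not data or estimators from the text:

```python
# Zero-order kernel: the mean firing rate (spikes per unit time).
def zero_order_kernel(train, bin_width=1.0):
    return sum(train) / (len(train) * bin_width)

# First-order kernel: the average rate of target spikes at each lag
# after a spike in the (pre-synaptic) trigger train.
def first_order_kernel(trigger, target, max_lag):
    kernel = []
    for lag in range(1, max_lag + 1):
        counts, n_triggers = 0, 0
        for i, spike in enumerate(trigger):
            if spike and i + lag < len(target):
                n_triggers += 1
                counts += target[i + lag]
        kernel.append(counts / n_triggers if n_triggers else 0.0)
    return kernel

# Toy example: the target cell fires exactly 2 bins after each trigger
# spike, so the first-order kernel peaks at lag 2.
trigger = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
target  = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0]
k1 = first_order_kernel(trigger, target, max_lag=3)  # [0.0, 1.0, 0.0]
```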

By extending this approach to the spike trains recorded from all elements of the neural network it is theoretically possible to develop an acceptable model for the identification of the system. Notice that the goodness of fit of a certain kernel estimate is evaluated by means of a function f describing its mode of activity - the mode of activity being defined by how information is processed within the neural network and how it is associated with the output pattern of activity that is generated. In formal terms, let us define a probability function f which describes how a state x is mapped into the space of states. If the function is set by a control parameter m, we can write f_m(x) = f(x,m). A dynamical system x' is a subset of the space of states and can be obtained by taking the gradient of the probability function with respect to the state variable, that is x' = grad f_m(x). Mathematically speaking, the space of states is a finite-dimensional smooth manifold, assuming that f is continuously differentiable and the system has a finite number of degrees of freedom (Smale, 1967).

For periodic activity the set of all possible perturbations defines the inset of the attractor, or its basin of attraction. In the case of the logistic map, for a = 3.2 all initial conditions in the interval (0, 0.6875) ∪ (0.6875, 1) end up approaching the period-2 attractor. This interval is known as the basin of attraction for the period-2 attractor, whereas the value x_0 = 0.6875 is an unstable fixed point. If the activity is generated by chaotic attractors, whose trajectories are not represented by a limit set either before or after the perturbations, the attracting set may be viewed through the geometry of the topological manifold in which the trajectories mix. The function f corresponding to the logistic map, f(x) = ax(1 - x), is the parabolic curve containing all the possible solutions for x. This function belongs to the single-humped map functions, which are smooth curves with a single maximum (here at x = 0.5).
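The basin of attraction quoted here for a = 3.2 can be checked numerically; in the sketch below the starting points and iteration count are arbitrary illustrative choices:

```python
# Sketch: for a = 3.2, initial conditions on either side of the unstable
# fixed point x* = 0.6875 settle onto the same period-2 attractor.
def logistic(x, a=3.2):
    return a * x * (1 - x)

def settle(x0, n=2000, a=3.2):
    """Iterate away the transient, then return the two alternating values."""
    x = x0
    for _ in range(n):
        x = logistic(x, a)
    return (x, logistic(x, a))

pair_low = sorted(settle(0.2))   # start below the unstable fixed point
pair_high = sorted(settle(0.9))  # start above it
# Both pairs coincide: one basin of attraction, one period-2 attractor.
```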

Let us consider again the case of large neural networks, where the complexity of the system is such that several attractors may appear, moving in space and time across different areas of the network. Such complex spatio-temporal activity may be viewed more generally as an attracting state, instead of simply an attractor. In particular, simulation studies demonstrated that a neural circuit activated by the same initial pattern tends to stabilise into a temporally organised mode or into an asynchronous mode, depending on how the excitability of the circuit elements is adjusted to the first-order kinetics of the post-synaptic potentials (Villa and Tetko, 1995; Hill and Villa, 1997).

Let us assume that the dynamical system is structurally stable. In terms of topology, structural stability means that for a dynamical system x' there exists a neighbourhood N(x') in the space of states with the property that every Y ∈ N(x') is topologically equivalent to x'. This assumption is extremely important because a structurally stable dynamical system cannot degenerate. As a consequence, there is no need to know the exact equations of the dynamical system, because qualitative, approximate equations - i.e. in the neighbourhood - show the same qualitative behaviour (Andronov and Pontryagin, 1937) (Fig. 5).

In the case of two control parameters, x ∈ ℝ, m ∈ ℝ², the probability function f is defined at the points m of ℝ² with a structurally stable dynamics of x' = grad f_m(x) (Peixoto, 1962). That means the qualitative dynamics x' is defined in a neighbourhood of a pair (x_0, m_0) at which f is in equilibrium (e.g. minima, maxima, saddle points). With these assumptions, the equilibrium surface is geometrically equivalent to the Riemann-Hugoniot or cusp catastrophe described by Thom (1975). The cusp catastrophe is the universal unfolding of the singularity f(x) = x⁴ and the equilibrium surface is described by the equation V(x,a,b) = x⁴ + ax² + bx, where a and b are the control parameters. According to this model the equilibrium surface could represent stable modes of activity, with the post-synaptic potential kinetics and the membrane excitability as control parameters (Fig. 6).
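To make the geometry of the equilibrium surface concrete: the equilibria of V(x,a,b) = x⁴ + ax² + bx are the real roots of dV/dx = 4x³ + 2ax + b = 0, and counting them via the cubic discriminant reproduces the cusp shape in the (a,b) control plane. The sketch below is a minimal illustration of this standard calculation, not part of Thom's formal treatment:

```python
# Count the equilibria of V(x, a, b) = x**4 + a*x**2 + b*x, i.e. the real
# roots of dV/dx = 4*x**3 + 2*a*x + b = 0.
def n_equilibria(a, b):
    # Rewrite as the depressed cubic x**3 + p*x + q with p = a/2, q = b/4.
    p, q = a / 2.0, b / 4.0
    disc = -4.0 * p**3 - 27.0 * q**2
    if disc > 0:
        return 3  # two stable modes separated by an unstable equilibrium
    if disc < 0:
        return 1  # a single stable mode
    return 2      # on the fold lines bounding the cusp (degenerate case)

# The bistable region (3 equilibria) is the inside of the cusp,
# 8*a**3 + 27*b**2 < 0, which requires a < 0.
```

Sudden transitions on the surface correspond to crossing a fold line, where the number of equilibria changes and the state must jump to the remaining stable mode.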

The paths drawn on the cusp illustrate several types of transitions between network states. Point (a) on the equilibrium surface of Fig. 6 corresponds to a high level of excitability and a relatively long decay time of the post-synaptic potentials, e.g. 12 ms. This may be associated with the tonic mode of firing described in the thalamocortical circuit, where bi-stability of firing activity has been well established. This is in agreement with the assumption that the same cell would respond in a different mode under other conditions.

Fig. 6 Topological interpretation of neural dynamics as a function of two control parameters, the cell excitability and the kinetics of the post-synaptic potentials. The equilibrium surface is represented by a cusp catastrophe where transitions can occur either suddenly or continuously between temporally organised firing patterns and asynchronous activity. This equilibrium surface refers to the activity of a specific neural network, and it is possible that the same cell belongs to more than one cell assembly. If the cell assemblies are controlled by only one parameter in common, then temporal and rate codes are not mutually exclusive


In general, the same neural network may subserve several modes of activity through modulation of its connectivity, e.g. according to learning or pathological processes, or by modulation of its excitability, e.g. through the resting potential or the synaptic time constants. Remember that the state of the neural network is defined by a set of characteristic control parameters at a certain time. At any given time, the state of the network is then represented by the values of the control parameters, and a network state is fully determined if all parameters are known for each neuron. It is not possible to know all variables determining brain dynamics, yet the progress made in computational and statistical physics has brought a number of methods allowing one to differentiate between random, i.e. unpredictable, and chaotic, i.e. seemingly unpredictable, spike trains.
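One family of such methods exploits the fact that a deterministic (chaotic) series is locally predictable even when it looks irregular. The sketch below is a deliberately simplified nearest-neighbour prediction test of my own construction, not a method cited in the text: it predicts each value from the successor of its closest neighbour in state space, and compares the error on a chaotic series against a randomly reordered copy of the same values.

```python
import random

def logistic_series(x0, n, a=4.0):
    """A chaotic (a = 4.0) logistic time series."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] * (1 - xs[-1]))
    return xs

def prediction_error(xs):
    """Predict each successor by the successor of the nearest neighbour
    in state space; return the mean absolute prediction error."""
    errors = []
    for i in range(len(xs) - 1):
        j = min((k for k in range(len(xs) - 1) if k != i),
                key=lambda k: abs(xs[k] - xs[i]))
        errors.append(abs(xs[j + 1] - xs[i + 1]))
    return sum(errors) / len(errors)

chaotic = logistic_series(0.4, 400)
shuffled = chaotic[:]
random.seed(0)
random.shuffle(shuffled)  # same values, temporal order destroyed

e_chaotic = prediction_error(chaotic)    # small: deterministic structure
e_shuffled = prediction_error(shuffled)  # large: no structure to exploit
```

On the chaotic series, states that are close now have close successors, so the prediction error is small; shuffling the very same values destroys this local predictability, mimicking randomness.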

In this theoretical framework, at point (a) in Fig. 6 the network state is such that an input pattern will evoke precisely time-structured activity, detectable as preferred firing sequences. These sequences should not be interpreted as a 'Morse' code, because different firing patterns might be evoked by the same input if the synaptic dynamics is changed within a certain range of cellular excitability, as suggested for neuromodulatory mediators.

Also, different input patterns of activity may produce similar modes of activity, somewhat like attractors. The transitions between these states are represented by paths (a-b-a), (a-e-a) and (a-g-a). Indeed, it has been observed in the cortex and in the thalamus that several types of neurons tend to switch towards a rhythmic or bursty type of firing if the excitability is decreased. This effect may be provoked by a hyperpolarisation of the cell membrane or by modifying the spike threshold level (Foote and Morrison, 1987). In the former case a smooth passage between temporally structured activity and asynchronous firing is likely to occur, as suggested by path (b-c-b), especially if the synaptic decay is long. Conversely, if the synaptic decay is fast and a modulatory input modifies the threshold potential, a sudden switch from temporal patterns of firing to desynchronised activity will occur, as indicated by paths (a-d) and (e-f).

Complex spatio-temporal firing patterns may also occur with low levels of excitability, i.e. point (e) in Fig. 6, as suggested by cholinergic switching within neocortical networks (Villa et al., 1996; Xiang et al., 1998). Point (e) on the equilibrium surface can be particularly unstable because a further decrease in excitability, path (e-f), but also an increase in synaptic decay, path (e-d), may provoke a sudden change in the mode of activity, as observed in simulation studies (Villa, 1992; Hill and Villa, 1997).

It is important to notice that if the excitability is low, e.g. during long-lasting hyperpolarisation, the kinetics of the post-synaptic potential is often irrelevant with regard to the input pattern, so that the output activity will always tend to be organised in rhythmic bursts. Conversely, if the excitability is increased from a starting point (f) and the time constant of the synaptic decay is fast, say 4-5 ms, the input patterns could turn on either stable, path (f-g), or unstable temporally organised modes of activity, only through sudden transitions, path (f-e).
