Other coding methods exist that are based on iterative processes, that is, the output is fed back to the input by some feedback mechanism. These methods are called convolutional, in analogy with the mathematical operation of convolution, which involves an integral over delayed inputs. Convolutional methods usually employ soft decoding. This means that the decoder does not produce a single deterministic value for the data at its output, as in the case of the parity-block methods described above, but assigns a different probability to each of the possible output values. In this way, a decision system may choose the value with the greatest probability or follow other suitable criteria.
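To make the idea concrete, the following minimal sketch (an illustrative example of my own, not taken from the cited references) implements the classic rate-1/2, constraint-length-3 convolutional encoder with generator polynomials 7 and 5 in octal notation: each input bit shifts into a three-bit register, and two output bits are computed as parities over different tap combinations.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder with constraint length 3 and the
    classic generator pair (7, 5) in octal. Each input bit produces two
    output bits, computed as parities over the register taps."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # parity over taps g1
        out.append(bin(state & g2).count("1") % 2)   # parity over taps g2
    return out

encoded = conv_encode([1, 0, 1, 1])  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

The encoder shown is the nonrecursive (feedforward) form; the recursive variants with output-to-input feedback alluded to in the text are obtained by feeding one of the parities back into the register, and are the building blocks of the turbo codes discussed below.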
Among the more interesting convolutional methods we find the relatively recently discovered turbo codes (Battail, 1998; Berrou et al., 1993), which also use recursive decoding. The main advantage of these methods is that their error detection and correction performance approaches the theoretical limits foreseen by Shannon theory: they approach the channel capacity (Sweeney, 2002), which represents a supremum of the achievable code rate. It should also be mentioned that other competing methods exist that reach similar performance: the so-called low-density parity-check (LDPC) methods (these are in fact classified among the linear block coding methods, but use iterative algorithms for decoding) (Gallager, 1962).
Because of the great degree of optimization of most of the vital mechanisms of life, the emergence of errors in the transmission of the related information through the appropriate communication channels can be a source of serious or fatal problems for the organism involved. Research at the level of genetic and neural information demonstrates that sophisticated mechanisms for error detection/correction indeed exist at the two above-mentioned levels (see, for example, Fuss and Cooper, 2006; Farabaugh and Bjork, 1999; Stiber, 2005; Stiber and Pottorf, 2004). Leaving aside the identity of the actual biological mechanisms implementing such functions, the flux of biological information cannot avoid the general laws governing information transmission, such as, for example, the results of the second Shannon theorem. For this reason it is legitimate to ask whether life uses systems similar to those described above, developed by humans for digital communications, for implementing at a practical level the error detection and correction of relevant biological information. The main lesson that can be learned from the Shannon theorem is that transmitting error-free information through unreliable transmission channels requires the introduction of redundancy, that is, additional information or "check bits" added to the original information. In the case of the genetic code, this redundancy is evident: different codons may code for a single amino acid. The information to be transmitted is represented by a finite number of words, in this case 21: the 20 amino acids plus the stop signal. But for encoding these 21 words we have 64 elements, the 64 codons. Necessarily, the representation of 21 words by means of 64 elements is redundant: more than one element encodes the same word (amino acid). Recalling the Shannon theorem, the purpose of this redundancy seems clear: to ensure the fidelity of the transmitted message, i.e. the chain of amino acids defining a particular protein.
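This codon redundancy can be made explicit with a short computation. The sketch below (illustrative only; the variable names are mine) builds the standard genetic code table in the conventional TCAG codon ordering and counts how many codons map to each of the 21 messages:

```python
from itertools import product
from collections import Counter

# Standard genetic code: the amino acid for each of the 64 codons, listed
# in TCAG order with the first base varying slowest; '*' is the stop signal.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codons = ["".join(c) for c in product("TCAG", repeat=3)]
code = dict(zip(codons, AA))          # codon -> amino acid (or stop)
degeneracy = Counter(code.values())   # amino acid -> number of codons

# 64 codewords encode only 21 distinct messages (20 amino acids + stop):
# e.g. leucine ('L') and serine ('S') each have 6 codons, methionine
# ('M') has only 1, and the stop signal ('*') has 3.
```

Printing `degeneracy` shows the uneven distribution of the redundancy: the number of codons per amino acid ranges from one to six, which is itself a structural feature any coding-theoretic account of the genetic code must explain.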
At this point it seems opportune to address a second question: how can a coding and decoding system that ensures the fidelity of information transmission be implemented by means of the natural redundancy of the genetic system? The most immediate answer would be that the system is based on random coding, that is, that there exists no pre-programmed order facilitating the coding and decoding operations, as in the case of man-made digital communication systems.
Although a random coding can show a strong error-correction capability, its main practical drawback is that the decoding is limited by an efficiency constraint. A random coding can give results as good as a structured one, but the efficiency of the former can be dramatically lower; in practical terms, this inefficiency appears as excessively long times for retrieving the error-free information. However, in the genetic machinery the ribosome is a highly efficient machine, assembling amino acids with remarkably low error rates while operating in a reactive medium that contains a very mixed soup of the components to be assembled. Is it possible to achieve this performance without resorting to sophisticated methods for coding and decoding the genetic information? This is a question of fundamental importance that will need a cooperative effort from different fundamental and experimental branches of science to be definitively resolved. Its solution may contribute to a qualitative jump in different fields and, not least, to creating a new basis for genetic therapies.
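The efficiency penalty of random coding can be illustrated with a toy example (my own construction; the codebook size and word length are arbitrary choices). A random codebook corrects errors well under minimum-distance decoding, but the decoder has no structure to exploit and must search the entire codebook:

```python
import random

random.seed(0)
N_CODE, N_BITS = 32, 16  # a small random codebook of 16-bit words

codebook = [tuple(random.randint(0, 1) for _ in range(N_BITS))
            for _ in range(N_CODE)]

def hamming(a, b):
    """Number of positions in which two words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Minimum-distance decoding. With no structure to exploit, the
    decoder must examine every one of the N_CODE codewords; a structured
    code would admit a far cheaper algebraic or iterative decoder."""
    return min(codebook, key=lambda c: hamming(c, received))

# Flip one bit of a transmitted codeword and decode the corrupted word.
sent = codebook[7]
corrupted = list(sent)
corrupted[3] ^= 1
recovered = decode(tuple(corrupted))
```

The search cost grows linearly with the codebook, i.e. exponentially with the block length, which is exactly the kind of decoding time a ribosome-like machine could not afford.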
In the rest of this paper, instead of developing more technically complicated topics related to error detection and correction, such as those connected with the above-mentioned turbo codes, we develop an interesting conceptual alternative: the use of nonlinear dynamics for implementing error control and related signal processing in biological systems. The theory of dynamical systems has produced the most recent scientific revolution in the physico-mathematical sciences (Strogatz, 1994). This research discipline has led in a few years to important theoretical and experimental results, such as those related to chaos theory, chaos control, synchronization in all its variants, stochastic resonance, emergent behavior, and spatiotemporal self-organization (Bak, 1996). These important areas related to nonlinear dynamics have found key applications in many different disciplines, such as chemical dynamics, meteorology, astronomy, and electronics, but have also penetrated into the biological and social sciences, with examples such as ecology and population dynamics, neural and cardiac dynamics, metabolic pathways, psychology, and economics. This corpus of knowledge has also contributed to creating a new science of complexity based on nonlinear dynamics and the emergence of spatiotemporal structures. The power of nonlinear modeling has been demonstrated through the properties of complexity, universality, and dimensionality reduction, which are associated with the generic behavior of nonlinear dynamical systems. Among others, these properties allow for the qualitative modeling of complex spatial and temporal behavior, and the implementation of logical functions, including the generation of universal computation machines such as Turing machines (Prusha and Lindner, 1999; Sinha and Ditto, 1998). Moreover, two important properties of dynamical systems have also been demonstrated regarding error correction, namely
stochastic resonance (Moss and Wiesenfeld, 1995) and chaos communication (Bollt, 2003). The first corresponds to a paradoxical effect in which the addition of noise to a system can improve the signal-to-noise ratio. Many different examples and variants of this phenomenon have been studied, corresponding more or less to a statistical filtering of the signal (the meaningful information) that needs to be enhanced. Outstanding examples are found in the neural processing of sensory information (Mitaim and Kosko, 1998; Longtin, 1993).
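The threshold-detector version of stochastic resonance can be sketched in a few lines. In the following illustrative example (the detector model and all parameter values are my own assumptions, not taken from the cited works), a subthreshold sinusoid never fires a hard threshold on its own; a moderate amount of added noise makes the firing pattern correlate with the signal, while too much noise degrades the correlation again:

```python
import math
import random

random.seed(1)
N = 4000
# Subthreshold periodic input: amplitude 0.8 never reaches the threshold.
signal = [0.8 * math.sin(2.0 * math.pi * k / 200.0) for k in range(N)]
THRESHOLD = 1.0

def output_signal_correlation(noise_std):
    """Correlation between the detector's firing pattern and the input."""
    fired = [1.0 if s + random.gauss(0.0, noise_std) > THRESHOLD else 0.0
             for s in signal]
    mf = sum(fired) / N
    ms = sum(signal) / N
    cov = sum((f - mf) * (s - ms) for f, s in zip(fired, signal)) / N
    vf = sum((f - mf) ** 2 for f in fired) / N
    vs = sum((s - ms) ** 2 for s in signal) / N
    return 0.0 if vf == 0.0 else cov / math.sqrt(vf * vs)

c_small = output_signal_correlation(0.05)  # too little noise: no firing
c_mid = output_signal_correlation(0.4)     # moderate noise: resonance
c_big = output_signal_correlation(3.0)     # too much noise: washed out
```

Plotting the correlation against the noise level traces the characteristic resonance curve: it rises from zero, peaks at an intermediate noise intensity, and falls off again.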
The second case is closer to that of standard deterministic systems for error correction, but it nevertheless retains interesting peculiarities. In the following we try to show how a communication system can be implemented on the basis of this second alternative, leading to the general properties of usual communication systems, including error detection and correction capabilities.
A typical configuration for chaos synchronization, represented as a communication system, is shown in Fig. 1 (Corron and Pethel, 2003; Pethel et al., 2003). Chaos synchronization can be observed by comparing the output of the master oscillator with that of the slave: when the two signals are equal to a certain precision, we say that the system is synchronized. Usually the oscillators are continuous systems producing continuous signals, so we need to convert their outputs into a discrete set of values (belonging to some finite alphabet of signs).
Fig. 1 Block diagram of a synchronized system of two identical chaotic oscillators with unidirectional coupling, represented as a communication system

The encoder is in charge of this operation: first, converting the continuous trajectory of the master oscillator into a discrete iteration, for example by means of a Poincaré map, and second, discretizing the continuous variables of that iteration into discrete variables belonging to an alphabet of signs. This last operation is performed through symbolic dynamics and consists in partitioning the state space into a discrete number of non-overlapping regions (Hao and Zheng, 1998). Through successive iterations the state of the system moves between these regions, defining a symbolic trajectory; to every region is assigned a discrete symbol. In order to attain synchronization we do not need complete information about the state of the master oscillator. Master and slave oscillators, besides their chaotic behavior, are also identical to a certain precision, which can be measured at the level of the parameters defining them. Supposing that initially both oscillators are in the same state, at the following iteration the output of the slave oscillator becomes different from that of the master to a certain extent. This is because of the chaotic character of both oscillators, which amplifies into divergent evolutions any minimal difference in their initial conditions. It is clear at this point that in order to maintain synchronization we do not need to send through the communication channel all the information characterizing the state of the master oscillator; the information specifying the difference between the two states is sufficient. In technical terms, we may say that this condition requires that the Kolmogorov-Sinai entropy of the master system be lower than the channel capacity (the supremum of the coding bit rate) (Stojanovski et al., 1997). It must be remarked that ensuring synchronization to a certain degree of precision automatically implies an error-correction capability: we recover, to the desired precision, the information sent through the transmission channel. Moreover, further oscillators can be coupled in series or in parallel to the same master oscillator. This possibility allows for practical implementations involving communication between spatially separated points. As examples of possible biological applications we can think of the ribosome complex in protein synthesis (see Chapter 6, this volume), or of neural communication of sensory information. For this last case, it has been demonstrated that high-level sensory parameters can be described using dynamical attractors (Cartwright et al., 1999, 2001). Neurons are highly nonlinear systems, and thus the behavior of neuronal populations can describe this kind of phenomenon in a natural way (Tonnelier et al., 1999). Furthermore, it has been demonstrated that on the same basis the global dynamics of neural systems can show error-correction capabilities (Stiber, 2004).
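The master-slave scheme of Fig. 1 can be illustrated with a minimal numerical sketch (my own construction; the logistic map, the coupling form, and all parameter values are assumptions chosen for illustration). A chaotic master map drives an identical slave through a unidirectional coupling: when the coupling is strong enough the synchronization error contracts to zero, while for weak coupling the chaotic divergence wins. A binary partition of the state space supplies the symbolic dynamics mentioned above:

```python
def f(x):
    """Logistic map at r = 4, a standard chaotic iteration on [0, 1]."""
    return 4.0 * x * (1.0 - x)

def symbol(x):
    """Symbolic dynamics: binary partition of the state space at x = 1/2."""
    return 0 if x < 0.5 else 1

def run(eps, steps, x0=0.3, y0=0.9):
    """Master x drives slave y: y' = (1 - eps) f(y) + eps f(x).
    Returns the synchronization error |x - y| at each step."""
    x, y, errors = x0, y0, []
    for _ in range(steps):
        x, y = f(x), (1.0 - eps) * f(y) + eps * f(x)
        errors.append(abs(x - y))
    return errors

# Strong coupling: the error shrinks by a factor of at most
# (1 - eps) * max|f'| = 0.8 per step, so the slave locks onto the master.
sync_errors = run(0.8, 100)

# Weak coupling: the chaotic stretching dominates and no synchronization
# occurs; the error remains of order one.
desync_errors = run(0.2, 100)
```

Once synchronized, master and slave generate the same symbolic trajectory under `symbol`, so only the small residual difference between the two states, rather than the full state, needs to travel through the channel, in line with the Kolmogorov-Sinai entropy condition mentioned above.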
Error detection/correction codes are unavoidable in man-made digital information transmission systems because of the necessity of ensuring the integrity of the transmitted information. Living organisms are subject to similar demands regarding the integrity of crucial biological information. Because any kind of communication system is ruled by the very basic principles of communication theory, possible biological solutions to the problem of information protection need to resort to the same kinds of mechanisms that have been sketched in this chapter. Because the most sophisticated artificial error-correction methods can be formulated in terms of dynamical systems theory (Richardson, 2000; Agrawal, 2001), and because many biological processes are highly nonlinear in nature, the hypothesis that dynamical systems are used for error detection and correction in biological information control and management seems a very natural one, and should be explored in depth.
Acknowledgements I wish to thank Professor Marcello Barbieri for his invitation to write this contribution. I am also profoundly indebted to Dr. Julyan Cartwright for his useful suggestions and careful reading of the manuscript.