The Law of Conservation of Information

Physics, as we learned it in high school, is bristling with laws: Newton's inverse-square law, Boyle's ideal gas law, the second law of thermodynamics, and so forth. A common misconception about physical laws is that they are analogous to human-made laws. (As the joke goes, "Speed limit 186,000 miles per second: not just a good idea, it's the law!") But they aren't. In physics, the term law is just shorthand for a simple mathematical relationship between physical quantities—a relationship that has been confirmed over and over again in thousands of experiments. True, physical laws don't always apply under all conditions: for example, the ideal-gas laws are just approximations and fail at low temperature and high pressure. Nevertheless, law in physics means something that is an accurate description of phenomena over a wide range of conditions.

Calling something a law has good public relations value. (Zellers, a Canadian department store chain, claims that, in their stores, "the lowest price is the law.") There is a certain cachet about the word law, which makes anything labeled as a law seem inviolate. If you have a new theory and call it a law, who can argue with it? (Even better, capitalize it and call it a Law.)

And so we come to Dembski's most grandiose claim, his law of conservation of information (LCI). It has, he tells us, "profound implications for science" (Dembski 2002b, 163). One version of LCI states that CSI cannot be generated by natural causes; another states that neither functions nor random chance can generate CSI. We will see that there is simply no reason to accept Dembski's "law" and that his justification is fatally flawed in several respects.

Suppose we have a space of possible events, each with an associated probability. To keep things simple, let's suppose our space is the set of all strings of 0's and 1's of length n, where each symbol occurs with probability 1/2, so each possible string has an associated probability of 2^(-n). Now suppose we have a function f that acts on elements of this space, producing new binary strings of the same length. We write f(x) = y, meaning that f acts on a string x, producing a string y.
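
For concreteness, here is a minimal sketch of this setup in Python, with n kept tiny so the whole space can be enumerated, and with an arbitrary illustrative choice of f (string reversal); nothing in the argument depends on that particular choice.

    from itertools import product

    n = 4  # small n so the whole space can be enumerated

    # The event space: all binary strings of length n, each with probability 2^(-n).
    space = ["".join(bits) for bits in product("01", repeat=n)]
    prob = {x: 2 ** -n for x in space}

    # An illustrative function f acting on the space: reverse the string.
    # (Any function producing strings of the same length would do.)
    def f(x: str) -> str:
        return x[::-1]

    assert abs(sum(prob.values()) - 1.0) < 1e-12  # the probabilities sum to 1
    print(f("0100"))  # -> "0010"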

Dembski argues that, no matter what the function f is, if the string y = f(x) has a certain amount of specified complexity, then x has at least the same amount. His argument uses basic probability theory and is technical, so we won't repeat it here. But it is flawed, and the flaw depends on Dembski's ambiguous notion of specification. Since y has specified complexity, there is an accompanying specification T. Dembski now claims that f^(-1)(T), the preimage of T under f (that is, the set of all strings that f maps into T), is a specification for x. But remember that patterns are supposed to be "explicitly and univocally" identifiable with the background knowledge of an intelligent agent. Why should f^(-1)(T) be so identifiable? After all, the claim is supposed to apply to all functions f, not just those known to the intelligent agent A computing the specified complexity. The function f might be totally unknown to A (for example, if f was applied in the long-distant past). In fact, there is no reason to believe that A will be able to deduce that x is specified, so x has no specified complexity at all for A. Thus, applying a function f can, in fact, generate specified complexity.
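
To see why this matters, here is a small brute-force illustration (our own, not Dembski's construction): we pick a toy scrambling function f and a pattern T that any agent would recognize, then enumerate the preimage f^(-1)(T). The strings in the preimage look like arbitrary noise to anyone who does not already know f.

    from itertools import product

    n = 8
    space = ["".join(bits) for bits in product("01", repeat=n)]

    # A toy "scrambling" function f: XOR with a fixed key, then rotate left by 3.
    # It stands in for any transformation unknown to the observing agent.
    KEY = "10110010"
    def f(x: str) -> str:
        xored = "".join("1" if a != b else "0" for a, b in zip(x, KEY))
        return xored[3:] + xored[:3]

    # A pattern T that any agent would recognize: "all bits equal".
    T = {"0" * n, "1" * n}

    # The preimage f^(-1)(T): every x whose image lands in T.
    preimage = [x for x in space if f(x) in T]
    print(preimage)  # two strings that, on their own, exhibit no obvious pattern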

To look at a more-concrete example, let's suppose y is a string of bits containing an English message, perhaps in ASCII code, and f is an obscure decryption function, such as RSA decryption. We start with a string of bits x having apparently no pattern at all; we apply f, and we get y, which encodes the message CREATIONISM IS UTTER BUNK. Any intelligent agent can recognize y as fitting a pattern (for example, the set of all true English sentences), but who will recognize x as fitting a pattern? Only those who know f. This objection alone should be enough to convince the reader that the law of conservation of information is bogus. But there is yet another problem.
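
The following sketch makes the scenario executable. Since real RSA would bloat the example, we substitute a one-time-pad-style XOR cipher (our simplification, not anything in Dembski's writing): the string x is indistinguishable from noise, yet applying f yields a sentence any English speaker recognizes.

    # Toy stand-in for the scenario above: a simple XOR cipher replaces RSA
    # (an assumption made purely to keep the sketch self-contained).
    import secrets

    message = "CREATIONISM IS UTTER BUNK"
    y = message.encode("ascii")        # the patterned string y (readable English)

    key = secrets.token_bytes(len(y))  # known only to whoever knows f
    x = bytes(a ^ b for a, b in zip(y, key))  # x: looks like random noise

    def f(s: bytes) -> bytes:
        """The 'decryption' function: XOR with the key recovers the message."""
        return bytes(a ^ b for a, b in zip(s, key))

    print(x.hex())                 # patternless to anyone who does not know f
    print(f(x).decode("ascii"))    # CREATIONISM IS UTTER BUNK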

The second problem with LCI is that, as we have observed, Dembski uses two different methods to assess the probability of an observed outcome: the uniform-probability interpretation and the historical interpretation. Dembski's mathematical argument applies only to the historical approach. But as we have seen, this implies that the complete causal history of an observed event must be known; if a single step is omitted, we may estimate the probability of the event improperly. This is fatal to Dembski's program of estimating the specified complexity of biological organisms, because the individual steps of their precise evolutionary history are largely lost with the passage of time.

Although his mathematical justification for LCI depends on the historical interpretation, he rarely appeals to it. Instead, he uses the less-demanding uniform-probability interpretation. But LCI fails for this interpretation.

Let's look at an example. Consider again the case in which we are examining binary strings of length n, and define our function f to take a string x as input and duplicate it, resulting in the string y = xx of length 2n. For example, f(0100) = 01000100. Under the uniform-probability interpretation, when we witness an occurrence of y, we will naturally view it as living in the space of strings of length 2n, with each such string having probability 2^(-2n). Furthermore, any y produced in this manner is described by the specification "the first and last halves are the same." A randomly chosen string of length 2n matches this specification with probability only 2^(-n). Thus, under the uniform-probability interpretation, we have two alternatives: either every string of length n is specified (so that specification is a vacuous concept) or f actually produces specified complexity!
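
A short simulation (ours, purely illustrative) confirms the arithmetic: the duplication map is trivial to compute, yet a uniformly random string of length 2n satisfies the specification "the first and last halves are the same" only with probability 2^(-n).

    import random

    n = 10

    def f(x: str) -> str:
        """The duplication map: f(x) = xx."""
        return x + x

    print(f("0100"))  # -> "01000100"

    # Estimate how often a uniformly random string of length 2n satisfies the
    # specification "first half equals last half"; the exact probability is 2^(-n).
    trials = 1_000_000
    hits = 0
    for _ in range(trials):
        s = format(random.getrandbits(2 * n), f"0{2 * n}b")
        if s[:n] == s[n:]:
            hits += 1

    print(f"empirical: {hits / trials:.6f}   exact 2^(-n): {2 ** -n:.6f}")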

The law of conservation of information is no law at all.
