Playing Games with Probability

As with the Bible codes, pseudomathematics often takes the form of bogus probability arguments. Pseudoscientists love probability because it offers a quick route to their desired conclusion. Want to prove evolution impossible? Just use some unjustified estimates about the probability of various events, and presto! You've proved what you want, and "mathematically" to boot. This leads us to the first problem with Dembski's reasoning: the hazy rationale for the assignment of probabilities to events.

What is probability, precisely? There are many different philosophical interpretations. A frequentist would say that probability deals with many repeated observations: the more events we observe, the more likely a measured probability will be close to the "true" probability. Consider rolling an ordinary pair of dice. The probability of obtaining the outcome 7 for an ideal pair of dice is 1/6. But due to imperfections in the dice and slight variations in the weights of the sides, the probability for any real pair of dice will not be 1/6 but some close approximation to it. What is that probability? We may be able to deduce it from a physical model of the dice. But we can also measure it empirically with high confidence, by rolling the dice millions of times.
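
To make the frequentist picture concrete, here is a minimal Python sketch (the trial counts and the helper name are illustrative choices, not from the text) that estimates the probability of rolling a 7 with a fair pair of simulated dice:

```python
import random

def estimate_seven_probability(trials: int) -> float:
    """Estimate P(sum of two fair dice == 7) by repeated simulation."""
    hits = 0
    for _ in range(trials):
        if random.randint(1, 6) + random.randint(1, 6) == 7:
            hits += 1
    return hits / trials

# The more rolls we simulate, the closer the estimate tends to get
# to the ideal value 1/6 ~ 0.1667.
for trials in (100, 10_000, 1_000_000):
    print(trials, estimate_seven_probability(trials))
```

A biased pair of dice could be modeled the same way by replacing randint with weighted choices; the empirical estimate would then converge on that pair's "true" probability rather than on 1/6.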

On the other hand, the events Dembski is most interested in are singular: receipt of a message from extraterrestrials, the origin of life, the origin of the flagellum of the bacterium E. coli, and so forth. By their very nature, such events do not consist of repeated observations; hence, we cannot assign to them an empirically measured probability. Similarly, because their origins are obscure and we do not currently have a detailed physical model, we cannot assign a probability based on that model. Any probability argument for such events therefore affords a splendid opportunity for mischief.

Dembski himself is inconsistent in his method of assigning probabilities. If a human being was involved in the event's production, Dembski typically estimates its probability relative to a uniform probability hypothesis; let's call this the uniform-probability interpretation. For Dembski, a Shakespearean sonnet exhibits CSI because it would be unlikely to be produced by choosing several hundred letters uniformly at random from the alphabet.
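
To see what the uniform-probability interpretation amounts to numerically, here is a small sketch (the 27-symbol alphabet of capital letters plus space, the roughly 600-character sonnet length, and the helper name are illustrative assumptions). The information measure is -log2 of the string's probability, computed as n times log2 of the alphabet size, since the raw probability would underflow a floating-point number:

```python
import math

def uniform_bits(length: int, alphabet_size: int) -> float:
    """Information, in bits, of one particular string under a uniform
    model: -log2((1/alphabet_size)**length), computed in log form as
    length * log2(alphabet_size) to avoid floating-point underflow."""
    return length * math.log2(alphabet_size)

# A sonnet of roughly 600 characters over 27 symbols (A-Z plus space):
print(uniform_bits(600, 27))  # ~2853 bits, i.e. about 1 chance in 10^859
```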

On the other hand, if no human being was involved, Dembski nearly always bases his probability calculations on the known causal history of the event in question; let's call this the historical interpretation. This flexibility in the choice of a distribution allows Dembski to conclude or reject design almost at whim.

Sometimes he uses these two different methods of calculating probability in the same example. Consider his analysis of a version of Richard Dawkins's (1986, 46-48) METHINKS IT IS LIKE A WEASEL program. In this program, Dawkins shows how a simple computer simulation of mutation and natural selection can, starting with an initially random 28-letter sequence of capital letters and spaces, quickly converge on a target sentence taken from Hamlet. In No Free Lunch, Dembski (2002b) writes,

Complexity and probability therefore vary inversely—the greater the complexity, the smaller the probability. It follows that Dawkins's evolutionary algorithm, by vastly increasing the probability of getting the target sequence, vastly decreases the complexity inherent in that sequence. As the sole possibility that Dawkins's evolutionary algorithm can attain, the target sequence in fact has minimal complexity (i.e., the probability is 1 and the complexity, as measured by the usual information measure, is 0). Evolutionary algorithms are therefore incapable of generating true complexity. And since they cannot generate true complexity, they cannot generate true specified complexity either. (183)

Here Dembski seems to be arguing that we should take into account how the phrase METHINKS IT IS LIKE A WEASEL is generated when computing its complexity or the amount of information it contains. Since the program that generates the phrase does so with probability 1, and 2^(-0) = 1, the specified complexity of the phrase is 0 bits.
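
For readers who have not seen it, here is a minimal WEASEL-style sketch in Python (the population size and mutation rate are illustrative guesses, not Dawkins's exact parameters). Because each generation keeps the best candidate seen so far, the run reaches the target with probability 1, which is exactly why the historical interpretation assigns the result 0 bits:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space

def score(candidate: str) -> int:
    """Count the positions where the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(population: int = 100) -> int:
    """Run cumulative selection; return the generation that hits the target."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        # Keep the parent in the pool so the best score never decreases.
        parent = max([mutate(parent) for _ in range(population)] + [parent],
                     key=score)
        generation += 1
    return generation

print(weasel())  # typically finishes within a few hundred generations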

But in other passages of No Free Lunch, Dembski seems to abandon this viewpoint. Writing about another variant of Dawkins's program, he says,

the phase space consists of all sequences 28 characters in length comprising upper case Roman letters and spaces. ... A uniform probability on this space assigns equal probability to each of these sequences—the probability value is approximately 1 in 10^40 and signals a highly improbable state of affairs. It is this improbability that corresponds to the complexity of the target sequence and which by its explicit identification specifies the sequence and thus renders it an instance of specified complexity. (188-89)

Here his use of a uniform probability model is explicit. Later, he says,

It would seem, then, that E has generated specified complexity after all. To be sure, not in the sense of generating a target sequence that is inherently improbable for the algorithm (as with Dawkins's original example, the evolutionary algorithm here converges to the target sequence with probability 1). Nonetheless, with respect to the original uniform probability on the phase space, which assigned to each sequence a probability of around 1 in 10^40, E appears to have done just that, to wit, generate a highly improbable specified event, or what we are calling specified complexity. (194)

In the latter two quotations, Dembski seems to be arguing that the causal history that produced the phrase METHINKS IT IS LIKE A WEASEL should be ignored; instead, we should compute the information contained in the result based on a uniform distribution on all strings of length 28 over an alphabet of size 27. (Note that 27^28 is about 1.2 × 10^40.) The uniform-probability interpretation and the historical interpretation can give wildly differing results, and Dembski apparently cannot commit himself to one or the other, even in the context of a single example.
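
The numerical gap between the two interpretations is easy to exhibit. A quick sketch, using the standard -log2 information measure from the quoted passages:

```python
import math

# Uniform-probability interpretation: the target is one string out of
# 27^28 equally likely 28-character strings over {A..Z, space}.
print(28 * math.log2(27))   # ~133.1 bits of "specified complexity"
print(f"{27**28:.2e}")      # ~1.20e+40, the "1 in 10^40" in the quotes

# Historical interpretation: the algorithm converges on the target with
# probability 1, so the measure assigns -log2(1) = 0 bits.
print(-math.log2(1.0))      # -0.0, i.e. zero bits
```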

The second problem with Dembski's work concerns selecting the reference class of events to which an observed event E belongs. Observed physical events do not typically come with probability spaces attached. If we encounter a string of a thousand 0's, should we regard it as a string chosen from an alphabet consisting of just the single letter 0, or from the alphabet {0, 1}? Should we regard it as chosen from the space of all strings of length 1000, or of all strings of length at most 1000? Dembski's advice is unhelpful here; he says the choice of distribution depends on our "context of inquiry" and suggests "erring on the side of abundance in assigning possibilities to a reference class" (Dembski 2002, sec. 3.3). But following this advice means we are susceptible to dramatic inflation of our estimate of the information contained in a target, because we may well be overestimating the number of possibilities. Such an overestimate results in a smaller probability, and the smaller the probability, the larger the number of bits of specified complexity Dembski says the event contains.
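
How much the reference class matters is easy to see for the string of a thousand 0's. A sketch comparing the candidate classes named above, under the assumption that each class carries a uniform distribution:

```python
import math

n = 1000  # a string of a thousand 0's

# Alphabet {0}: the observed string is the only possibility -> 0 bits.
print(n * math.log2(1))             # 0.0

# Alphabet {0, 1}, strings of exactly length 1000: one of 2^1000 outcomes.
print(n * math.log2(2))             # 1000.0

# Alphabet {0, 1}, all strings of length <= 1000 (including the empty
# string): 2^1001 - 1 equally likely outcomes, hence slightly more bits.
print(math.log2(2**(n + 1) - 1))    # ~1001.0
```

Same observed event, three defensible reference classes, three different answers; and the more generously the class is drawn, the more bits of specified complexity the event is credited with.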
