MATT YOUNG AND TANER EDIS
Intelligent design is the successor to old-fashioned creationism but dressed in a new coat—its hair cut, its beard trimmed, and its clock set back 10 or 15 billion years. It is nevertheless a hair's-breadth away from creationism in its insistence that everyone is wrong but its proponents, that science is too rigid to accept what is obvious, and that intelligent-design advocates are the victims of a massive conspiracy to withhold the recognition that their insights deserve.
Creationism, though very popular in its young-earth version, has failed as a strategy for introducing religious beliefs into the science curriculum. Enter neocreationism, or intelligent design. Not as obviously a religious conceit as creationism, intelligent-design creationism has made a case that, to the public, appears much stronger. Pertinently, its proponents are sometimes coy about the identity of their designer. They admit to the age of the earth or set aside the issue, and some even give qualified assent to pillars of evolutionary theory, such as descent with modification. They have therefore been able to feign a scientific legitimacy that creationism was never able to attain.
This aura of legitimacy has enabled the proponents of intelligent design to appeal to the public's sense of fairness and ask that intelligent design be added to school curricula, alongside Darwinian evolution, as an intellectually substantial alternative. Intelligent design, however, has found no support whatsoever from mainstream scientists, and its proponents have not established a publication record in recognized and peer-reviewed scientific journals. They have nevertheless raised a significant sum of money and embarked on a single-minded campaign to inject intelligent design into the science curriculum.
Biblical literalism, in its North American form, took shape in the 1830s. One impetus was the attack on slavery by religious abolitionists. Slave owners or their ministers responded by citing biblical passages, notably Genesis 9:24-27, as justification for enslaving black people: "And Noah awoke from his wine, and knew what his younger son [Ham, the supposed ancestor of black people] had done to him. And he said, Cursed be Canaan [son of Ham], a servant of servants shall he be unto his brethren. ... Canaan shall be [Shem's] servant... and [Japheth's] servant" (King James version).
At about the same time, the millennialist strain in Christianity began a resurgence in Britain and North America. This movement, the precursor of modern fundamentalism, also stressed the literal truth of the Bible (Sandeen 1970). Most millenarians and their descendants, however, adjusted their "literal" reading of Genesis to accommodate the antiquity of the earth. Some accepted the gap theory: that God created the heavens and the earth in the beginning but created humans after a gap of millions or billions of years. Others accepted the day-age theory, which recognized the days mentioned in Genesis as eons rather than literal 24-hour days. There was, therefore, no contradiction between science and their religious beliefs. Many evangelical thinkers went as far as to accept not only an old earth but even biological evolution, provided that evolution was understood as a progressive development guided by God and culminating in humanity (Livingstone 1987).
Evolution education did not become a fundamentalist target until the early twentieth century. Then, in the aftermath of the Scopes trial, literalist Christianity retreated into its own subculture. Even in conservative circles, the idea of a young earth all but disappeared (Numbers 1992).
The pivotal event behind the revival of young-earth creationism was the 1961 publication of The Genesis Flood, co-authored by hydraulic engineer Henry M. Morris and conservative theologian John Whitcomb. Morris resurrected an older theory called flood geology and tried to show that observed geological features could be explained as results of Noah's flood. In Morris's view, fossils are stratified in the geological record not because they were laid down over billions of years but because of the chronological order in which plants and animals succumbed to the worldwide flood. To Morris and his followers, the chronology in Genesis is literally true: the universe was created 6000 to 10,000 years ago in six literal days of 24 hours each. With time, Morris's young-earth creationism supplanted the gap theory and the day-age theory, even though some denominations and apologists, such as former astronomer Hugh Ross, still endorse those interpretations (Numbers 1992, Witham 2002).
Creationists campaigned to force young-earth creationism into the biology classroom, but their belief in a young earth, in particular, was too obviously religious. A few states, such as Arkansas in 1981, passed "balanced-treatment" acts. Arkansas's act required that public schools teach creation science, the new name for flood geology, as a viable alternative to evolution. In 1982, Judge William Overton ruled that creation science was not science but religion and that teaching creation science was unconstitutional. Finally, the 1987 Supreme Court ruling, Edwards v. Aguillard, signaled the end of creation science as a force in the public schools (Larson 1989).
The intelligent-design movement sprang up after creation science failed. Beginning as a notion tossed around by some conservative Christian intellectuals in the 1980s, intelligent design first attracted public attention through the efforts of Phillip Johnson, the University of California law professor who wrote Darwin on Trial (1993). Johnson's case against evolution avoided blatant fundamentalism and concentrated its fire on the naturalistic approach of modern science, proposing a vague "intelligent design" as an alternative. Johnson was at least as concerned with the consequences of accepting evolution as with the truth of the theory.
In 1996, Johnson established the Center for Science and Culture at the Discovery Institute, a right-wing think tank. In 1999, the center had an operating budget of $750,000 and employed 45 fellows (Witham 2002, 222). Johnson named his next book The Wedge of Truth (2000) after the wedge strategy, which was spawned at the institute. According to a leaked document titled "The Wedge Strategy" (anonymous n.d.), whose validity has been established by Barbara Forrest (2001), the goal of the wedge is nothing less than the overthrow of materialism. The thin edge of the wedge was Johnson's book, Darwin on Trial.
The wedge strategy is a 5-year plan to publish 30 books and 100 technical and scientific papers as well as develop an opinion-making strategy and take legal action to inject intelligent-design theory into the public schools. Its religious overtone is explicit: "we also seek to build up a popular base of support among our natural constituency, namely, Christians. . . . We intend [our apologetics seminars] to encourage and equip believers with new scientific evidence's [sic] that support the faith" (anonymous n.d.).
Johnson remains a leader of the movement, although he is the public voice of intelligent design rather than an intellectual driving force. That role has passed to Michael Behe, William Dembski, and others.
The intelligent-design movement tries to appeal to a broad constituency, drawing on widely accepted intuitions about divine design in the world (see chapter 1). As the wedge document acknowledges, however, reaching beyond conservative Christian circles has been a problem. Success evidently requires a semblance of scientific legitimacy beyond lawyerly or philosophical arguments.
Thus, intelligent design has gathered steam with the publication of biochemist Michael Behe's book Darwin's Black Box (1996), which argues that certain biochemical structures are so complex that they could not have evolved by natural selection. Behe calls such complex structures irreducibly complex.
An irreducibly complex structure is any structure that includes three or more parts without which it cannot function. According to Behe, such a structure cannot have evolved by chance because it cannot function with only some of its parts and more than two parts are not likely to form a functioning whole spontaneously. Behe identifies, for example, the bacterial flagellum and the blood-clotting system as irreducibly complex. To prove his point, he relies heavily on the analogy of a mousetrap, which he says cannot function with any one of several parts missing. Behe's argument founders, however, on the pretense that the irreducibly complex components came together at once and in their present form; he makes no effort to show that they could not have coevolved. Chapter 2 shows that Behe's mousetrap is a failed analogy designed to hide this likelihood.
Many intelligent-design neocreationists accept what they call microevolution but reject macroevolution. That is, they accept the fact of change within a species but reject the idea that a species may evolve into a new species. Chapter 3 shows that their assignment of living organisms into kinds is incoherent and that there is no substantive difference, no quantitative demarcation, between microevolution and macroevolution. The distinction is wholly arbitrary and fragments the tree of life, whereas common descent provides a neat and compact picture that explains all the available evidence.
Chapter 4 shows that the scientific evidence Behe presents is equally flawed. Behe discounts the importance of the fossil record and natural selection and adopts a belief in a designer outside nature because of the concept of an irreducibly complex system, which he cannot defend. He further points to a supposed absence of scientific articles describing the evolution of biochemical systems deemed to be irreducibly complex and a paucity of entries for the word evolution in the indexes of biochemistry textbooks. Behe is a legitimate scientist, with a good record of publication. Nevertheless, his claims, which he likens to the discoveries of Newton and Copernicus, are not well regarded by most biologists, and they are reminiscent of standard God-of-the-gaps arguments.
Chapters 5 and 6 develop the theme introduced in chapter 4. Chapter 5 explains how an irreducibly complex structure can readily evolve by exapting existing parts and then adapting them to new functions. These new functions take form gradually, as when a feathered arm that originally developed for warmth turns out to be useful for scrambling uphill and only gradually adapts for flying. Chapter 5 details precisely how such exaptation-adaptation gradually formed the avian wing.
The eubacterial flagellum is one of the favorites of the intelligent-design proponents and occupies a place in their pantheon that is analogous to the place of the eye in the creationist pantheon. Chapter 6 shows that the flagellum is by no means an "outboard motor" but a multifunctional organelle that evolved by exaptation from organelles whose function was primarily secretion, not motility. It is not irreducibly complex.
Chapter 7 links the previous chapters to those that follow. It shows how the laws of thermodynamics do not preclude self-organization, provided that there is energy flow through the system. In addition to energy flow (an open system), self-organization requires only a collection of suitable components such as atoms or molecules, cells, organisms (for example, an insect in an insect society), and even the stellar components of galaxies, which self-organize through gravitational energy into giant rotating spirals. Using two examples, Bénard cells and wasps' nests, chapter 7 demonstrates how complex structures can develop without global planning.
Intelligent Design in Physics and Information Theory
Behe passed the torch to mathematician and philosopher William Dembski, who claims to have established a rigorous method for detecting the products of intelligent design and declares further that the Darwinian mechanism is incapable of genuine creativity. Hiding behind a smoke screen of complex terminology and abstruse mathematics, Dembski in essence promulgates a simple probabilistic argument, very similar to that used by the old creationists, to show that mere chance could never have assembled complex structures. Having failed to convince the scientific community that his work has any substance, Dembski nevertheless compares himself to the founders of thermodynamics and information theory and thinks he has discovered a fourth law of thermodynamics (Dembski 2002, 166-73).
Dembski has gone well beyond Behe with a mathematical theory of specified complexity. According to Dembski, we can establish whether or not an object or a creature was designed by referring to three concepts: contingency, complexity, and specification.
Contingency. Dembski looks to contingency to ensure that the object could not have been created by simple deterministic processes. He would not infer intelligent design from a crystal lattice, for example, because its orderly structure forms as a direct result of the physical properties of its constituents.
Complexity. Dembski defines the complexity of an object in terms of the probability of its appearance. An object that is highly improbable is by the same token highly complex.
Specification. Some patterns look like gibberish; some do not. Dembski calls a pattern that does not look like gibberish specified. More precisely, if a pattern resembles a known target, then that pattern is specified. If it does not, then it is a fabrication.
Many of Dembski's examples involve coin tosses. He imagines flipping a coin many times and calls the resulting sequence of heads and tails a pattern. He calculates the probability of a given pattern by assuming he has an unbiased coin that gives the same probability of heads as of tails—that is, 1/2. Using an argument based on the age of the universe, Dembski concludes that a contingent pattern that must be described by more than 500 bits of information cannot have formed by chance, although he is inconsistent about this limit in his examples.
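The arithmetic behind the 500-bit cutoff is elementary: the probability of any one particular sequence of fair-coin flips is (1/2) raised to the number of flips. A minimal sketch in Python (the function name is ours, not Dembski's):

```python
from fractions import Fraction

# Probability of one particular sequence of n fair-coin flips: (1/2) ** n.
def sequence_probability(n: int) -> Fraction:
    return Fraction(1, 2) ** n

# Dembski's 500-bit cutoff corresponds to a probability of 2 ** -500,
# roughly 10 ** -150, his "universal probability bound".
print(float(sequence_probability(500)))  # about 3e-151
```

Note that this is the probability of one prespecified sequence; any sequence of 500 flips, specified or not, is equally improbable, which is why specification has to do the real work in Dembski's argument.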
If a pattern is both specified and complex, then it displays specified complexity, a term that Dembski uses interchangeably with complex specified information. Specified complexity, according to Dembski, cannot appear as the result of purely natural processes. Chapter 7 shows that specified complexity is inherently ill-defined and does not have the properties Dembski claims for it. Indeed, Dembski himself calculates the specified complexity of various events inconsistently, using one method when it suits him and another at other times.
In one example, he dismisses Bénard cells as examples of naturally occurring complexity; they form, he says, as a direct result of the properties of water. Chapter 7 shows that, to the contrary, Bénard cells are highly complex. Chapter 2 also shows how Dembski dismisses the formation of a complex entity such as a snowflake in the same way.
Dembski employs an explanatory filter that purports to use the concepts of contingency, complexity, and specification to distinguish design from chance and necessity. He argues that forensic scientists and archaeologists use a variation of the explanatory filter to infer design in those instances in which the designer is presumed to be human. Chapter 8 shows that forensic scientists do not solve problems using an explanatory filter; specified complexity and the explanatory filter do not provide a way to distinguish between designed objects and undesigned objects. Indeed, what Dembski calls side information is more important to a forensic scientist than the explanatory filter, which is virtually useless.
Attempts to distinguish rigorously between data that exhibit interesting patterns and data that are the result of simple natural laws or chance are not new. Chapter 9 explores approaches to this problem based on established complexity theory, a part of theoretical computer science. It shows that Dembski's idiosyncratic approach does not deliver what it promises and that mainstream science has much better ways to approach interesting questions about complex information.
Chapter 10 shows that randomness can help create innovation in a way that deterministic processes cannot. A hill-climbing algorithm that cannot see to the next hill may get stuck on a fairly low peak in a fitness landscape; further progress is thereby precluded. On the other hand, a random jump every now and then may well carry the algorithm to the base of a taller peak, which it can then scale. Randomness is not inimical to evolution; on the contrary, randomness is critical for its ability to produce genuine creative novelty. Chapter 10 draws upon artificial-intelligence research to show that intelligence itself may be explainable in terms of chance plus necessity, a combination that escapes Dembski's explanatory filter with its stark black-and-white dichotomies.
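The effect of an occasional random jump can be seen in a toy simulation. The landscape below, with a low peak and a taller one separated by a flat valley, and all parameters are illustrative choices of our own, not taken from the chapter:

```python
import random

# Toy one-dimensional fitness landscape: a low peak near x = 2 (height 3)
# and a taller peak near x = 8 (height 6), separated by a flat valley.
def fitness(x: float) -> float:
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 6 - (x - 8) ** 2)

def hill_climb(x: float, steps: int, jump_prob: float = 0.0) -> float:
    """Greedy local search; with probability jump_prob, try a random leap."""
    rng = random.Random(1)  # fixed seed for reproducibility
    for _ in range(steps):
        if rng.random() < jump_prob:
            candidate = rng.uniform(0, 10)           # random jump anywhere
        else:
            candidate = x + rng.uniform(-0.2, 0.2)   # small local step
        if fitness(candidate) > fitness(x):          # keep only improvements
            x = candidate
    return x

stuck = hill_climb(2.0, 2000)                   # pure hill climbing: stays on the low peak
escaped = hill_climb(2.0, 2000, jump_prob=0.05) # jumps can reach the taller peak
```

Started at the low peak, the purely local climber can never accept a move, because every reachable candidate is worse; with occasional jumps, the search almost surely lands in the taller peak's basin and then climbs it.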
Dembski extends his argument by applying the no-free-lunch theorems (NFL theorems) to biological evolution. These theorems apply to computer-aided optimization programs that are used, for example, to design a lens by a series of trial-and-error calculations that begin with a very poor design. Roughly, an optimization program is like a strategy for finding the highest mountain in a given range; the height of the mountain represents the value of some figure of merit that we calculate as we go along and whose value we try to maximize.
The NFL theorems, according to Dembski, show that no search algorithm performs better than a random search. In fact, chapter 11 shows that the theorems are much more restricted than Dembski makes out; they state only that no strategy is better than any other when averaged over all possible mountain ranges, or fitness landscapes. In practice, however, we are almost never interested in all possible fitness landscapes but in very specific landscapes. It is entirely possible to design a strategy that will outperform a random search in many practical fitness landscapes. In addition, the NFL theorems apply only to landscapes that are fixed or vary independently of an evolving population, whereas the fitness landscape in biological evolution varies with time as organisms change both themselves and their environments. Thus, Dembski's application of the NFL theorems is wrong on two counts.
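The point that specific landscapes reward specific strategies can be illustrated with a small sketch. The landscape, dimensions, and search parameters below are toy choices of our own: on a smooth landscape, a local search that exploits that smoothness beats blind random sampling given the same evaluation budget, even though the two would tie when averaged over all possible landscapes:

```python
import random

DIM = 10
TARGET = [0.5] * DIM  # optimum of this particular toy landscape

# A specific smooth fitness landscape: negative squared distance to TARGET.
def fitness(x):
    return -sum((xi - ti) ** 2 for xi, ti in zip(x, TARGET))

def random_search(budget, rng):
    """Evaluate `budget` uniformly random points; keep the best."""
    best = max(([rng.random() for _ in range(DIM)] for _ in range(budget)),
               key=fitness)
    return fitness(best)

def hill_climb(budget, rng):
    """Greedy local search: small Gaussian steps, keeping only improvements."""
    x = [rng.random() for _ in range(DIM)]
    for _ in range(budget - 1):
        cand = [xi + rng.gauss(0, 0.05) for xi in x]
        if fitness(cand) > fitness(x):
            x = cand
    return fitness(x)

blind = random_search(500, random.Random(1))
local = hill_climb(500, random.Random(0))
# On this smooth landscape, the local strategy finds a far better point.
```

In ten dimensions, 500 blind samples rarely land near the optimum, while the local climber walks steadily toward it; nothing here contradicts the NFL theorems, which speak only of the average over all landscapes.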
In cosmology, intelligent-design advocates point to the supposed fine tuning of the physical constants and claim that life would not exist if any of several physical constants had been slightly different from their present values—for example, because the lifetime of the universe would have been too short for stars to form. Chapter 12 criticizes this anthropic argument, which suggests that the physical constants of our universe were purposefully designed to produce human life. The chapter notes, first, that the claim inherently assumes only one possible kind of life: ours. Additionally, this chapter shows that many combinations of values of four physical constants will lead to a universe with a long-enough life for stars to form and hence for life to be a possibility.
Chapter 13 asks whether, after all, intelligent design is practiced as science. To this end, it shows how certain pathological sciences operate and how they differ from genuine science. Specifically, we argue that the advocates of intelligent design do not practice science, not because their ideas are religiously motivated but because they make no substantive predictions, do not respond to evidence, have an ax to grind, and appear to be oblivious to criticism. Further, we hoist Dembski by his own petard when we demonstrate that his intelligent designer is no more than a Z-factor, a term of derision he applies to certain speculative scientific theories.