Solution A Choice of Catastrophes

...we make guilty of our disasters the sun, the moon, and the stars; as if we were villains on necessity, fools by heavenly compulsion.

William Shakespeare, King Lear, Act I, Scene 2

An obvious, if gloomy, resolution of the Fermi paradox occurs if L — the factor denoting the lifetime of the communicating phase of an ETC — is small. In Chapter 5 I shall deal with various ways in which Nature is hostile to life. Here, though, I want to examine the idea that intelligent species may be the inevitable authors of their own doom.152

figure 39 A 350-kiloton thermonuclear explosion (mid-1950s).

To more than a few scientists working during the Cold War it seemed quite certain that ETCs would discover the interesting properties of element 92 (known to us as uranium) and therefore learn how to construct nuclear weapons. For several scientists, then, the reason for a short lifetime (in other words, a small value for L) was obvious: all advanced civilizations inevitably annihilate themselves in a nuclear holocaust, as the human race was apparently on the verge of demonstrating.153

It hardly seems worth mentioning that, depending upon the severity of a nuclear war, the extinction of an intelligent species might follow.154 The world's arsenals still contain many thousands of nuclear weapons, and if they were ever used in large numbers, then they would certainly destroy Homo sapiens. A limited nuclear war might be just as ruinous for our species, due to the effects of a potential global nuclear winter.155

Nevertheless, as many SF writers have demonstrated, it is possible to imagine scenarios in which members of a warring species survive a limited war and, over a period of thousands of years, recreate their civilization. One of the earliest post-apocalyptic novels, and certainly one of the best, is Miller's A Canticle for Leibowitz. Miller describes how a flicker of knowledge is preserved by monks after a nuclear war has decimated the population.156 In Canticle, mankind eventually rediscovers the power of science and, a few millennia after the first nuclear holocaust, has "advanced" to the stage where the Bomb can be dropped once again. Is the urge to war so deeply ingrained that a civilization learns nothing? Are civilizations somehow compelled to drop bombs as soon as they can? Unless that is the case, limited nuclear war cannot provide an explanation of the paradox. It may take many thousands of years to recover a high level of civilization after a limited nuclear war, but this timescale is short — just a few minutes of the Universal Year.

Even a total, all-out, no-holds-barred nuclear war would not destroy all life on a planet. Consider the organism Deinococcus radiodurans. Scientists first isolated it in 1956 from a can of ground beef; the beef had been radiation-sterilized, but the meat still spoiled. It turns out that D. radiodurans can survive an exposure to gamma-radiation of 1.5 million rads. (For comparison, a dose of 1000 rads is usually enough to kill a man.) Exposure to intense radiation blasts apart its DNA — but within a few hours the organism reassembles its entire genome with seemingly no deleterious effects. This organism can withstand other extreme conditions, such as prolonged desiccation, which explains why it is often called "Conan the Bacterium." A nuclear war would not unduly inconvenience Conan the Bacterium. And not just bacteria would survive; various other organisms would live through a nuclear war. If intelligence is an inevitable outcome of evolution (this is contentious, as we shall see, but it is presumably the viewpoint of those who argue there are a million ETCs in the Galaxy), then the wait for intelligence to emerge after a nuclear holocaust would not be endless: a few hundred million years, perhaps. This is an unimaginably vast reach of time on a human scale, but, again, it is not particularly significant when compared to the age of the Galaxy — a few days in the Universal Year.

Those civilizations that avoid the Scylla of nuclear war must still navigate the Charybdis of biological and chemical warfare. For example, we know that chemical weapons can be used to destabilize ecosystems; genetically engineered biological weapons can destroy food supplies or decimate populations directly. But the comments made above regarding nuclear war also hold for these forms of warfare. Is it likely that every ETC, when it reaches a certain stage (and before it establishes colonies in space), annihilates itself through warfare? Without wishing to tempt fate, we can hope that Homo sapiens has shown that at least one species in the Galaxy can resist the urge to self-destruct through war.

figure 40 The organism Deinococcus radiodurans growing on a nutrient agar plate. This bacterium can survive extremes of radiation and desiccation.

Overpopulation

One of the defining characteristics of life on Earth is reproduction. Presumably this is a universal characteristic of life. If we ever meet the equivalent of the Krell from Forbidden Planet, the Soft Ones from The Gods Themselves, or the Greeshka from A Song for Lya, then we may be surprised by the mechanics of their reproductive processes — but not the fact that they reproduce. And since aliens will reproduce, they will be subject to the same simple mathematical laws that describe population growth here on Earth.

Until about 8000 BC, the number of people on Earth at any time never exceeded about 10 million people. Health was poor and living conditions were harsh; life expectancy at birth was probably 30 years or less. Had the birth rate not been as high as the death rate, human society would have died out; for the continued existence of families, clans and tribes it was vital that adults had as many children as possible. Even so, the rate of population growth was barely above zero. The situation began to change when mankind developed agriculture. Life expectancy began to increase under an agricultural lifestyle, and the birth rate began to exceed the death rate. (People are generally very quick to adopt new technologies; social attitudes, such as "be fruitful and multiply," are slower to change. So although the reasons for having large families had lessened, social pressures on parents had not.) Fortunately, agriculture supported a greater population than the old hunter-gatherer way of life; by 1650, the world's population was 0.5 billion — a 50-fold increase over the steady-state population size for 99% of human history. By around 1800, the world's population reached its first billion — a doubling in 150 years. By 1930, the population was 2 billion — a doubling in 130 years. By 1975, the population was 4 billion — a doubling in just 45 years. The world's population exceeded 6 billion in September 1999.

To say that this past rate of population growth cannot continue is to risk being labeled a Cassandra. But it cannot continue. Really. At those growth rates, in a few hundred years the combined flesh of humanity would form a sphere expanding at the speed of light. (Of course, this would not happen; if we did not slow the growth rate, then biology would curb it for us long before relativistic effects become apparent.)
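The arithmetic behind this claim is easy to sketch. The short calculation below uses round-number assumptions of my own (a population of 6 billion, the recent 45-year doubling time, and roughly 0.07 cubic meters of flesh per person); the exact timescale depends on the growth rate assumed, but the conclusion that unchecked exponential growth collides with physics on a historically brief timescale does not.

```python
import math

# Round-number assumptions: population of 6 billion, doubling every
# 45 years, ~0.07 cubic meters of flesh per person.
POP0 = 6e9
DOUBLING_YEARS = 45.0
VOLUME_PER_PERSON = 0.07            # m^3 (assumed)
LIGHT_YEAR = 9.46e15                # meters

def radius_after(years):
    """Radius (m) of a sphere holding all human flesh after `years`."""
    pop = POP0 * 2.0 ** (years / DOUBLING_YEARS)
    volume = pop * VOLUME_PER_PERSON
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

# The radius grows by a factor 2^(1/3) per doubling time, so the
# surface moves outward at r * ln(2) / (3 * DOUBLING_YEARS) per year.
# That speed reaches c (one light-year per year) at radius:
r_critical = 3.0 * DOUBLING_YEARS / math.log(2.0) * LIGHT_YEAR

# Years for the sphere to grow from today's size to that radius:
years_needed = 3.0 * DOUBLING_YEARS * math.log2(r_critical / radius_after(0))

print(f"sphere of humanity today: radius {radius_after(0):.0f} m")
print(f"surface moves at c when radius = {r_critical / LIGHT_YEAR:.0f} light-years")
print(f"years until then: {years_needed:.0f}")
```

At the recent 45-year doubling time the absurdity arrives within several thousand years; faster growth rates bring it forward. Either way the interval is trivial compared with the age of the Galaxy, which is the point.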

If we are lucky, the world's population will in the next few decades reach a new steady state, with a low death rate matched by a low birth rate. (Though even this would not satisfy the Cassandras, since there are downsides to this situation. For example, the elderly would consume a large share of costly public services, while there would be fewer young people to work and pay for them.) The steady-state population will probably be in the range 11 to 13 billion. Whether Earth can feed so many people and offer them a reasonable standard of life is not known. But even if it can, what damage will 13 billion people inflict upon it? A much smaller population has managed to transform or degrade up to half of Earth's land surface for agricultural and urban use; we have increased the atmospheric CO2 concentration at an alarming rate; we already use more than half of the accessible surface's fresh water; the natural rate at which species become extinct has accelerated due to human activity; and so on, and so on. None of these problems (not to mention problems such as poverty and social injustice) are caused solely by overpopulation; but overpopulation certainly does not help in the search for solutions.

Since alien life will reproduce, it seems inevitable that at some stage an ETC will face a population crisis. But will every civilization fail to negotiate the crisis?

The Gray Goo Problem

Nanotechnology seems as if it might be the natural outcome of converging advances in many different fields of knowledge.157 The term refers to engineering that takes place at the nanoscale, a scale where the dimensions of objects are typically measured in nanometers (billionths of a meter). Since molecules are of this size, it also goes by the name of molecular engineering. Future nanotechnologists will have the ability to assemble custom-made molecules into large, complex systems; their capacity to create materials will be almost magical. (Since this capacity appears to be so wonderful, and yet is presently far beyond our abilities, several commentators are skeptical of nanotechnology. So it is worth emphasizing that there seems to be no fundamental reason why we cannot develop the technology. Nature herself is a "nanoengineer": enzymes, for example, are nanotechnological devices that employ biochemical techniques to carry out their tasks. If Nature can do it, so can we. It is also worth emphasizing that the success or failure of nanotechnology will determine whether we ever develop Bracewell-von Neumann probes.)

One of the elements of any future nanotechnology is likely to be the nanorobot — or nanobot, for short. Although their development is a long way off, theoretical studies suggest we could construct nanobots from one of several materials — with carbon-rich diamondoid materials perhaps forming the basis for many types of nanobot. Studies also suggest that one of the most useful types of nanobot will be a self-replicating machine.

Alarm bells start to ring whenever self-replication is mentioned. The danger inherent in producing a self-replicating nanobot in the laboratory is clear upon answering the following question: What happens when a nanobot escapes into the outside world? In order to replicate, a nanobot made of a carbon-rich diamondoid material would need a source of carbon. And the best source of carbon would be the Earth's surface biosphere: plants, animals, humans — living things in general. The swarms of nanobots (for soon there would be many copies of the original) would dismantle the molecules in living material and use the carbon to produce more copies of themselves. The surface biosphere would be converted from the rich, varied environment we see today into a sea of ravenous nanobots plus waste sludge. This is the gray goo problem.

As mentioned above in the discussion on overpopulation, exponential growth is a powerful thing. Freitas has shown that, under ideal conditions, a population of nanobots growing exponentially could convert the entire surface biosphere in less than three hours!158 We can add this, then, to the depressing list of ways in which the lifetime of the communicating phase of an ETC might be shortened: a laboratory accident, involving the escape of a nanobot, turns their biosphere into sludge.
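Freitas's result is easy to reproduce in outline. The sketch below uses illustrative round numbers of my own, not Freitas's detailed figures: a biosphere containing some 5 × 10^14 kg of accessible carbon, femtogram nanobots, and a 100-second population doubling time. The striking point is how few doublings are needed.

```python
import math

# Illustrative round numbers (assumptions, not Freitas's exact figures):
BIOSPHERE_CARBON_KG = 5e14     # accessible carbon in the surface biosphere
NANOBOT_MASS_KG = 1e-15        # one femtogram per nanobot
DOUBLING_TIME_S = 100.0        # time for the nanobot population to double

# An unchecked exponentially growing population needs log2(N) doublings
# to go from one nanobot to N of them.
final_count = BIOSPHERE_CARBON_KG / NANOBOT_MASS_KG
doublings = math.log2(final_count)
hours = doublings * DOUBLING_TIME_S / 3600.0

print(f"doublings needed: {doublings:.0f}")
print(f"time to consume the biosphere: {hours:.1f} hours")
```

Fewer than a hundred doublings suffice, and with these assumptions the biosphere is gone in under three hours — the same order as Freitas's ideal-case result.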

This solution to the paradox, which has been seriously proposed, suffers the same problem as many other solutions: even if it can occur it is not convincing as a "universal" solution. Not every ETC will succumb to the gray goo.

The young boy in Woody Allen's Annie Hall becomes depressed at the thought that the Universe is going to die, since that will be the end of everything. I am becoming depressed writing this section, so to cheer myself up — and any young Woodys that might be reading — I think we have to ask whether the gray goo problem is even remotely likely to arise. As Asimov was fond of pointing out, when man invented the sword he also invented the hand guard so that one's fingers did not slither down the blade when one thrust at an opponent. The engineers who develop nanotechnology are certain to develop sophisticated safeguards. Even if self-replicating nanobots were to escape, or if they were released for malicious reasons, steps could be taken to destroy them before catastrophe resulted. A population of nanobots increasing its mass exponentially at the expense of the biosphere would immediately be detected by the waste heat it generated. Defense measures could be deployed at once. A more realistic scenario, in which a population of nanobots increased its mass slowly, so the waste heat it generated was not immediately detectable, would take years to convert Earth's biomass into nanomass. That would provide plenty of time to mount an effective defense. The gray goo problem might not be such a difficult problem to overcome: it is simply one more risk that an advanced technological species will have to live with.

Particle Physics — A Dangerous Discipline?

In 1999, the London Times reported that experiments at the new Relativistic Heavy Ion Collider (RHIC) on Long Island might trigger a catastrophe. Physicists at the RHIC accelerate gold nuclei to high energies and then smash them into each other; it is an effective way of learning about the fundamental constituents of matter. The RHIC experiments, it was suggested, might destroy Earth. This immediately led some to suggest another of the "doomsday" solutions to the Fermi paradox: advanced civilizations learn to experiment in high-energy particle physics, and destroy themselves when an experiment goes wrong.159

Such concerns are not new. In 1942, Teller wondered whether the high temperatures in a nuclear explosion might trigger a self-sustaining fire in Earth's atmosphere. Calculations by physicists, including Fermi, put minds to rest: a nuclear fireball cools too quickly to set the atmosphere on fire.

The flurry of concern with the RHIC began when someone calculated

figure 41 Physicists study particle interactions at laboratories like CERN. Particles are accelerated to high energies in circular tunnels deep underground, and are then smashed into each other. (The CERN tunnels, like the one shown here, are underneath the Jura mountains.) Neither at CERN nor at RHIC are the energies remotely high enough to pose a threat to our existence.

that the energies involved in the experiments would be enough to create a tiny black hole. The fear was that the black hole would tunnel down from Long Island to Earth's center and proceed to devour our planet. Fortunately, as more sensible calculations quickly showed, there is essentially no chance of this happening. To create the smallest black hole that can exist requires energies about 10 million billion times greater than the RHIC can generate.160 (Even if a particle accelerator could generate such energies, the black hole it produced would be a puny thing indeed, with only a fleeting existence. It would struggle to consume a proton, let alone Earth.)

So we can sleep soundly, safe in the knowledge that the RHIC will not produce a black hole. We can rest assured, too, that it will not destroy Earth through the production of strangelets — chunks of matter containing so-called strange quarks in addition to the usual up and down quarks.161 So far no one has seen strangelets, but physicists wondered whether experiments at the RHIC might produce them. If strangelets were produced, then there is a risk they might react with nuclei of ordinary matter and convert them into strange matter — a chain reaction could then transmute the entire planet into strange matter. However, having raised the possibility of catastrophe, physicists were quick to reassure everyone. Calculations show that strangelets are almost certainly unstable; even if they are stable, the RHIC would almost certainly not have the energy to create them; and even if they were created at the RHIC, their positive charge would cause them to be screened from interactions by a surrounding electron cloud.162

The unlikely litany of catastrophes that the RHIC (and other particle accelerators) might inflict upon us does not end with black holes and strangelets. Paul Dixon, a psychologist with only a hazy grasp of physics, believes collisions at the Tevatron particle accelerator at Fermilab might trigger the collapse of the quantum vacuum state.

A vacuum is simply a state of least energy. According to current cosmological theories, the early Universe may have briefly become trapped in a metastable state: a false vacuum. The Universe eventually underwent a phase transition into the present "true" vacuum, unleashing in the process a colossal amount of energy — it is similar to what happens when steam undergoes a phase transition to form liquid water. But what if our present vacuum is not the "true" vacuum? Rees and Hut published a paper in 1983 suggesting this could be the case.163 If a more stable vacuum exists, then it is possible for a "jolt" to cause our Universe to tunnel to the new vacuum — and the point at which the jolt occurs would see a destructive wave of energy spread outward at the speed of light. The very laws of physics would change in the wake of the wave of true vacuum.

Dixon thought that experiments at the Tevatron might cause a jolt that could collapse the vacuum. He was so worried he took to picketing Fermilab with a homemade banner saying "Home of the next supernova."164

Once again, however, we need not worry unduly about an accelerator-induced apocalypse. As Rees and Hut themselves pointed out in their original paper, through the phenomenon of cosmic rays Nature has been carrying out particle-physics experiments for billions of years at energies much higher than anything mankind can achieve.165 If high-energy collisions made it possible for the Universe to tunnel to the "true" vacuum — well, cosmic rays would have caused the tunneling to occur long ago.

The concept of an accelerator accident causing the destruction of a world (or the whole Universe, in the case of a vacuum collapse) is really a non-starter. The physics of these events is not known perfectly — that is why physicists are carrying out the research — but they are well enough known for us to realize that the doom-merchants have it wrong in this case. We have to look elsewhere for a resolution of the paradox.

Doomsday and the Delta t Argument

There are many ways in which mankind might destroy itself. In addition to the calamities discussed above, one could add genetic deterioration, over-stabilization, epidemics or a dozen other problems. And this is without mentioning the many external factors that threaten us, such as meteor impact, solar variability and gamma-ray bursters. It barely seems worth getting out of bed in the morning. Surely, though, an intelligent species like Homo sapiens will learn how to navigate these problems? Remarkably, there is a line of reasoning, called the delta t argument, that suggests not.

In 1969, when he was a student, Richard Gott visited the Berlin Wall. He was on vacation in Europe at the time, and his visit to the Wall was one of several excursions; he had seen the 4000-year-old Stonehenge, for example, and was suitably impressed. As he looked at the Wall, he wondered whether this product of the Cold War would stand as long as Stonehenge. A politician skilled in the nuances of Cold War diplomacy and knowledgeable about the relative economic and military strength of the opposing sides might have made an informed estimate (which, judging by the track record of politicians, would have been wrong). Gott had no such special expertise, but he reasoned in the following way:166

First, he was there at a random moment of the Wall's existence. He was not there to see the construction of the Wall (which happened in 1961), nor was he there to see the demolition of the Wall (which we now know happened in 1989); he was simply there on vacation. Therefore, he continued, there was a 50:50 chance that he was looking at the Wall during the middle two quarters of its lifespan. If he was there at the beginning of this interval, then the Wall must have existed for 1/4 of its lifespan, and 3/4 of its lifespan remained. In other words, the Wall would last 3 times as long as it already had existed. If he was there at the end of this interval, then the Wall must have existed for 3/4 of its lifespan, and only 1/4 was left. In other words, the Wall would last only 1/3 as long as it already had existed. The Wall was 8 years old when Gott saw it. He therefore predicted, in the summer of 1969, that there was a 50% chance of the Wall lasting a further 2 2/3 years to 24 years (8 × 1/3 years to 8 × 3 years). As anyone who saw the dramatic television pictures will remember, the Wall came down 20 years after his visit — within the range of his prediction.

figure 42 An illustration of Gott's prediction that the Berlin Wall would last for another 2 years 8 months to 24 years after he first saw it in 1969.

Gott says the argument he used to estimate the lifetime of the Berlin Wall can be applied to almost anything. If there is nothing special about your observation of a thing, then, in the absence of relevant knowledge, that thing has a 50% chance of lasting between 1/3 and 3 times its present age.

In physics, it is standard practice to talk about predictions that have a 95% chance of being correct, rather than a 50% chance. Gott's argument remains the same, but there is a slight change in the numbers: if there is nothing special about your observation of an entity, then that entity has a 95% chance of lasting between 1/39 and 39 times its present age. (It is important when applying Gott's rule to remember that the observation must not have any particular significance. Imagine you have been invited to a wedding and, at the reception, you start chatting to a couple you have never seen before. If they tell you they have been happily married for ten months, then you can inform them their marriage has a 95% chance of lasting between just over a week and 32 1/2 years. On the other hand, you can predict nothing about how long bride and groom will be together: you are at the wedding precisely in order to observe the beginning of the marriage. The flaw in applying the rule to funerals should be obvious.)
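Gott's rule reduces to two lines of arithmetic. If you observe an entity at a random moment of its life, then with confidence c its future lifetime lies between (1 − c)/(1 + c) and (1 + c)/(1 − c) times its present age; c = 0.5 gives the factors 1/3 and 3, and c = 0.95 gives 1/39 and 39. A minimal sketch (the function name is mine):

```python
def gott_interval(present_age, confidence=0.95):
    """Future-lifetime interval from Gott's delta t argument.

    With confidence c, the future lifetime lies between
    present_age * (1 - c) / (1 + c) and present_age * (1 + c) / (1 - c).
    """
    low = present_age * (1 - confidence) / (1 + confidence)
    high = present_age * (1 + confidence) / (1 - confidence)
    return low, high

# The Berlin Wall, 8 years old in 1969, at 50% confidence:
print(gott_interval(8, confidence=0.50))   # about 2.7 to 24 years
# A ten-month marriage, at 95% confidence (in months):
print(gott_interval(10))                   # about 0.26 to 390 months
# Homo sapiens, roughly 175,000 years old, at 95% confidence:
print(gott_interval(175_000))              # about 4,500 to 6,800,000 years
```

The same two-line formula reproduces every prediction in this section, from concrete walls to the species itself.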

figure 43 A hole in the Wall. There is a remarkable argument that links the lifespan of the Berlin Wall to the lifespan of our species!

Using the delta t argument to estimate the longevity of concrete walls and human relationships is amusing, but we can use it to estimate something more serious: the future longevity of Homo sapiens. Recent research suggests our species is about 175,000 years old. Applying Gott's rule, we find there is a 95% chance that the future lifetime of our species is between about 4500 years and 6.8 million years. That would make the longevity of our species somewhere between about 0.18 and 7 million years. (Compare this with the average longevity for mammalian species, which is about 2 million years. Our closest relatives, Homo neanderthalensis, survived for maybe 200,000 years; Homo erectus, another hominid species and possibly one of our direct ancestors, lasted for 1.4 million years. So Gott's estimate is certainly in the right ballpark for species longevity.) The argument does not say how we are going to meet our end; it could be by one or more of the methods discussed above, or by something quite different. The argument simply says that it is highly likely our species will perish some time between 4500 years and 6.8 million years from now.

If this is the first time you have met Gott's argument, then you may well think (as I confess I did) that it is nonsense. However, it is difficult to pinpoint exactly where the logic is faulty. The "obvious" objections to the argument have been robustly refuted. Before examining possible objections to Gott's line of reasoning, and looking at the implications of the delta t argument for the Fermi paradox, it is worth considering a slightly different version of the same idea.

Imagine you are a contestant on a new TV game show. The rules of the game are simple. Two identical urns are put in front of you and the host tells you one urn contains 10 balls and the other contains 10 million balls. (The balls are small.) The balls in each urn are numbered sequentially (1, 2, 3, ..., 10 in one urn; 1, 2, 3, ..., 10,000,000 in the other). You take a ball at random from the right urn and find the ball is number 7, say. The point of the game is for you to bet whether the right urn contains 10 balls or 10 million. The odds are not 50:50. Clearly, it is far more likely that a single-digit ball comes from the urn with 10 balls than from the urn with 10 million. Surely, you would bet accordingly.

Now, instead of two urns consider two possible sets of the human race, and instead of numbered balls consider individual human beings numbered according to their date of birth (so Adam is 1, Eve is 2, Cain is 3, and so on). If one of these sets corresponds to the real human race, then my personal number will be about 70 billion — as will that of anyone reading this book, since of the order of 70 billion people have lived since the beginning of our species. Now use the same argument as we did with the urns: it is much more likely you will have a rank of 70 billion if the total number of humans who will ever live is, say, 100 billion than it is if the total number is 100 trillion. If you were forced to bet, you would have to say it is likely only a few more tens of billions of people will live. (A few tens of billions of people sounds a lot, but at the present rate we add a billion people to Earth's population every decade.)
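The urn intuition is just Bayes' theorem with equal priors: each hypothesis is penalized in proportion to how unlikely it makes the observed draw. A minimal sketch (the function name is mine, not Gott's):

```python
def posterior_odds(draw, n_small=10, n_large=10_000_000):
    """Odds (small urn : large urn) that ball number `draw` came from
    the 10-ball urn rather than the 10-million-ball urn, assuming the
    two urns were equally likely a priori."""
    # Likelihood of drawing this ball number under each hypothesis:
    p_small = 1.0 / n_small if draw <= n_small else 0.0
    p_large = 1.0 / n_large if draw <= n_large else 0.0
    return p_small / p_large        # equal priors cancel out

print(posterior_odds(7))            # about a million to one for the small urn
```

Drawing ball 7 makes the 10-ball urn a million times more likely; substituting birth ranks for ball numbers turns this into the doomsday version of the argument.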

The delta t argument is an extension of the Copernican principle. The traditional Copernican principle says we are not located at a special point in space; Gott argues we are not located at a special point in time. An intelligent observer, such as you, Gentle Reader, should consider yourself to be picked at random from the set of all intelligent observers (past, present and future), any one of whom you could have been. If you believe mankind will survive into the indefinite future, colonize the Galaxy, and produce 100 trillion human beings, you have to ask yourself: why is it that I am lucky enough to be among the first 0.07% of people who will ever live?

Gott uses the same type of probabilistic argument to deduce a variety of features of Galactic intelligence, some of which are directly relevant to the Fermi paradox. They all depend upon the idea that you are a random intelligent observer — with no special location in either space or time. First, the colonization of the Galaxy cannot have occurred on a large scale by ETCs (because if it had, you — yes, you — would probably be a member of one of those civilizations). Second, applying the delta t argument to the past longevity of radio technology on Earth and combining this with the Drake equation, Gott finds at the 95% confidence level that the number of radio-transmitting civilizations is less than 121 — and possibly much less than this, depending upon the parameters fed into the Drake equation. Third, if there is a large spread in the populations of ETCs, then you probably come from an ETC having a population larger than the median. Thus, ETCs with populations much larger than our own must be rare — rare enough that their individuals do not dominate the total number of beings, otherwise you would be one of them. From which we deduce there is probably not a K2 civilization to be found in the Galaxy, nor a K3 civilization anywhere in the observable Universe.

As I indicated earlier, there seems to be something not quite right with the argument; it feels wrong — but where exactly is it wrong? There are philosophical opinions both for and against Gott's doomsday argument, and perhaps the safest course of action is to let the philosophers slug it out. Personally, though, I am uneasy with the assumption that intelligent species necessarily have a finite lifespan; recent observations indicate the Universe may expand forever, and if so it is possible for mankind to survive forever, in which case a straightforward application of a doomsday argument is problematic. What is the definition of mankind in this case anyway? When, exactly, does Gott believe mankind "started"? And if our species evolves into something else, does that count as the end of mankind?

This section has discussed one of the most frequently proffered solutions to the Fermi paradox: ETCs do not long stay in the radio-transmitting phase — much less the colonization phase — because they perish. There is a variety of ways this can happen, but are any of them inevitable? For this explanation to work, catastrophe must be unavoidable.
