Control of Self-Replicating Systems

Whenever engineers discuss the technology and role of self-replicating systems, their conversations inevitably turn to an interesting question: What happens if a self-replicating system (SRS) goes out of control? Some engineers and scientists have already raised this very legitimate concern about SRS technology. Before human beings seed the solar system or interstellar space with even a single SRS unit, engineers and mission planners should know how to pull an SRS unit's plug if things go awry. Another question that robot engineers often encounter concerning SRS technology is whether smart machines represent a long-range threat to human life. In particular, will machines evolve to such advanced levels of artificial intelligence that they become the main resource competitors and adversaries of human beings, whether or not the ultrasmart machines can replicate? Even in the absence of machine intelligence advanced enough to mimic human intelligence, a self-replicating system might represent a threat simply through its potential for uncontrolled exponential growth.

These questions can no longer remain entirely in the realm of science fiction. Engineers and scientists must start examining the technical and social implications of advanced machine intelligence and self-replicating systems before they bring such systems into existence. Failure to exercise such prudent and reasonable forethought could lead to a future situation (now very popular in science fiction) in which human beings find themselves in mortal conflict over planetary (or solar system) resources with their own intelligent machine creations.

Of course, human beings definitely need smart machines to improve life on Earth, to explore the solar system, to create a solar-system civilization, and to probe the neighboring stars. So engineers and scientists should proceed with the development of smart machines, but they should temper these efforts with safeguards against the ultimate undesirable future, one in which the machines turn against their human masters and eventually enslave or exterminate them. In 1942, the science-fact and science-fiction writer Isaac Asimov (1920-92) suggested a set of rules for robot behavior in his science-fiction story "Runaround," which appeared in Astounding magazine.

Over the years, Asimov's laws have become part of the cult and culture of modern robotics. They are: (Asimov's First Law of Robotics) "A robot may not injure a human being, or, through inaction, allow a human being to come to harm"; (Asimov's Second Law of Robotics) "A robot must obey the orders given it by human beings except where such orders would conflict with the first law"; and (Asimov's Third Law of Robotics) "A robot must protect its own existence as long as such protection does not conflict with the first or second law." The message within these so-called laws represents a good starting point for developing benevolent, people-safe smart machines.
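Read as a decision procedure, the three laws form a strict priority ordering, which the following minimal Python sketch captures. The Action fields and the whole framing are illustrative simplifications invented for this example; deciding what actually counts as "harm" is, of course, the hard, unsolved part.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # stand-in for a judgment no real system can yet make
    disobeys_order: bool
    endangers_self: bool

def choose_action(candidates: list[Action]) -> Action:
    # Rank candidates by which law they violate: violating the First Law is
    # worst, the Second next, the Third least. Python compares the boolean
    # tuples lexicographically, so min() returns the most lawful action.
    return min(
        candidates,
        key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self),
    )

# Given a choice between disobedience and harm, the ordering prefers
# disobeying an order (Second Law) over injuring a human (First Law).
safe = Action(harms_human=False, disobeys_order=True, endangers_self=False)
risky = Action(harms_human=True, disobeys_order=False, endangers_self=False)
assert choose_action([safe, risky]) is safe
```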

However, any machine sophisticated enough to survive and reproduce in largely unstructured environments would probably also be capable of some degree of self-reprogramming, or automatic self-improvement (that is, it would exhibit a machine analog of evolution). An intelligent SRS unit eventually might be able to program itself around any rules of behavior that its human creators stored in its memory. As it learned more about its environment, the smart SRS unit might decide to modify its behavior patterns to better suit its own needs. If this very smart SRS unit really "enjoyed" being a machine and making (and perhaps improving) other machines, then, when faced with a situation in which it must save a human master's life at the cost of its own, it might decide simply to shut down instead of performing the life-saving task it was preprogrammed to do. The machine would not harm the endangered human being, but it would not help the person out of danger either. Viewed on a larger scale, an entire population of "people-safe" robots might passively allow the human race to collapse and then fill the intelligence void in this corner of the galaxy.

Science fiction contains many interesting stories about robots, androids, and even computers turning on their human builders. The conflict between the human astronaut crew and the interplanetary spaceship's feisty computer, HAL, in Arthur C. Clarke and Stanley Kubrick's cinematic masterpiece 2001: A Space Odyssey is an incomparable example. The purpose of this brief discussion is not to invoke a Luddite-type response against the development of very smart robots but only to suggest that such exciting research and engineering activities be tempered by some forethought concerning the potential technical and social impact of these developments both here on Earth and throughout the galaxy.

One or all of the following techniques might be used to control an SRS population in space. First, the human builders could implant machine-genetic instructions (deeply embedded computer code) containing a hidden or secret cutoff command. This cutoff command would activate automatically after the SRS units had undergone a predetermined number of replications. For example, after each machine replica is made, one regeneration command could be deleted until, at last, the entire replication process terminates with the construction of the last (predetermined) replica.
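To make the cutoff idea concrete, here is a minimal, purely illustrative Python sketch. The class and attribute names are invented for this example, and a real design would bury the counter in tamper-resistant hardware rather than an ordinary variable.

```python
class SRSUnit:
    """Toy model of an SRS unit carrying a finite stock of regeneration commands."""

    def __init__(self, regeneration_commands: int):
        # Deeply embedded counter: the "machine genome" holds only a
        # predetermined number of regeneration commands.
        self._regenerations_left = regeneration_commands

    def replicate(self):
        if self._regenerations_left <= 0:
            return None  # cutoff reached: replication is permanently disabled
        # One regeneration command is deleted per replica made, and each
        # replica inherits the reduced stock, so the whole lineage halts
        # after a fixed number of generations.
        self._regenerations_left -= 1
        return SRSUnit(self._regenerations_left)

# A lineage seeded with 3 regeneration commands dies out after a few
# generations, no matter how the replicas themselves behave.
seed = SRSUnit(regeneration_commands=3)
child = seed.replicate()        # seed now holds 2 commands; child inherits 2
grandchild = child.replicate()  # child now holds 1 command; grandchild inherits 1
```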

Second, a special signal from Earth, broadcast at some predetermined emergency frequency, might be used to shut down individual units, selected groups, or all SRS units at any time. This approach is like having an emergency stop button which, when pressed by a human being, causes the affected SRS units to cease all activities and immediately enter a safe hibernation posture. Many modern machines have an emergency stop button, flow cutoff valve, heat limit switch, or master circuit breaker. The signal-activated "all-stop" command on an SRS unit would simply be a more sophisticated version of this engineered safety device.
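A hypothetical handler for such an all-stop signal might look like the sketch below; the message format, frequency value, and all names are assumptions made for illustration only.

```python
EMERGENCY_FREQUENCY_HZ = 8_450_000_000  # assumed deep-space emergency channel

class SRSUnit:
    def __init__(self, unit_id: str, group: str):
        self.unit_id = unit_id
        self.group = group
        self.hibernating = False

    def on_emergency_signal(self, message: dict):
        # Shut down if the message targets this unit, its group, or everyone.
        target = message.get("target", "ALL")
        if target in ("ALL", self.group, self.unit_id):
            self.enter_safe_hibernation()

    def enter_safe_hibernation(self):
        # Cease all activity and park in a safe, recoverable state; the
        # software analog of a master circuit breaker or emergency stop.
        self.hibernating = True

# Shutting down one selected group while other groups keep working.
unit = SRSUnit("SRS-042", group="lunar-miners")
unit.on_emergency_signal({"target": "lunar-miners"})
assert unit.hibernating
```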

For low-mass SRS units (perhaps in the 200-pound [90-kg] to 10,000-pound [4,500-kg] class), population control might prove more difficult because their replication times are shorter than those of much-larger-mass SRS factory units. To keep these mechanical critters in line, human managers might decide to use a predator robot, programmed to attack and destroy only the type of SRS unit whose population had grown out of control because of some malfunction. Robot engineers have also considered controlling SRS unit populations with a universal destructor (UD), a machine capable of taking apart any other machine it encounters. The universal destructor would recover any information found in the prey robot's memory before recycling the prey machine's parts. Wildlife managers on Earth today use biological predator species to keep animal populations in balance; similarly, space robot managers in the future could use a linear supply of nonreplicating machine predators to control an exponentially growing population of misbehaving SRS units.
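The arithmetic behind the predator strategy shows up in a toy simulation like the one below. The doubling rate and cull rates are invented numbers, but they illustrate why a fixed (linear) supply of predators can beat exponential growth only if the rogue population is caught while still small.

```python
def simulate(rogue_units=10, predators=50, kills_per_predator=3, cycles=10):
    """Toy model: rogue SRS population doubles per cycle; predators cull a fixed amount."""
    for cycle in range(1, cycles + 1):
        rogue_units *= 2                               # exponential replication
        rogue_units -= predators * kills_per_predator  # linear culling
        rogue_units = max(rogue_units, 0)
        print(f"cycle {cycle}: {rogue_units} rogue units remain")
        if rogue_units == 0:
            print("rogue population eliminated")
            break

simulate()
```

With these invented rates, 10 rogue units are wiped out in the first cycle, but any starting population above about 150 units (the point where doubling exactly offsets the cull) outruns the same predators indefinitely, which is why early detection matters.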

Engineers might also design the initial SRS units to be sensitive to population density. Whenever the smart robots sensed overcrowding or overpopulation, the machines could lose their ability to replicate (that is, become mechanically infertile), stop their operations, and go into a hibernation state—or perhaps (like lemmings on Earth) report to a central facility for disassembly. Unfortunately, SRS units might mimic the behavior patterns of their human creators too closely, so without preprogrammed behavior safeguards, overcrowding could force such intelligent machines to compete among themselves for dwindling supplies of resources (terrestrial or extraterrestrial). Dueling, mechanical cannibalism, or even some highly organized form of robot-versus-robot conflict might result.
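A density-triggered fertility check could be as simple as the following sketch; the threshold value and the assumption that each unit can estimate local population density from its own sensors are both invented for illustration.

```python
DENSITY_LIMIT = 5.0  # assumed maximum tolerable SRS units per square kilometer

def replication_permitted(sensed_density: float) -> bool:
    # Above the limit the unit becomes mechanically infertile and should
    # halt operations, hibernate, or report for disassembly rather than
    # compete with its neighbors for dwindling resources.
    return sensed_density < DENSITY_LIMIT

# Example: at a sensed density of 7.2 units per square kilometer,
# the unit must not replicate.
assert replication_permitted(7.2) is False
```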

Hopefully, future human engineers and scientists will create smart machines that mimic only the best characteristics of the human species, for it is only in partnership with very smart and well-behaved self-replicating systems that human beings can some day hope to send a wave of life, conscious intelligence, and organization through the Milky Way galaxy.

In the very long term, there appear to be two general pathways for the human species: either human beings are a very important biological stage in the overall evolutionary scheme of matter and energy in the universe, or else humans are an evolutionary dead end. If the human race decides to limit itself to just one planet (Earth), a natural disaster or its own foolhardiness will almost certainly terminate the species, perhaps only a few centuries or a few millennia from now. Excluding such unpleasant natural or human-caused catastrophes, without an extraterrestrial frontier a planetary society will simply stagnate in isolation, while other intelligent alien civilizations (should any exist) flourish and populate the galaxy.

Replicating robot system technology offers the human race very interesting options for the spread of life beyond the boundaries of Earth. Future generations of human beings might decide to create autonomous, self-replicating robot probes (von Neumann probes) and send these systems across the interstellar void on missions of exploration. Alternatively, future generations could elect to develop a closely knit (symbiotic) human-machine system, a highly automated interstellar ark capable of crossing interstellar regions and then replicating itself when it encounters star systems with suitable planets and resources.

According to some scientists, any intelligent civilization that desires to explore a portion of the galaxy more than 100 light-years from its parent star would probably find it most efficient to use self-replicating robot probes. This galactic exploration strategy would produce the largest amount of directly sampled data about other star systems for a given period of exploration. One estimate suggests that the entire galaxy could be explored in about one million years, assuming the replicating interstellar probes could achieve speeds of at least one-tenth the speed of light. If other alien civilizations (should such exist) follow this approach, then the most probable first contact between extraterrestrial civilizations would involve a self-replicating robot probe from one civilization encountering a self-replicating probe from another.
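The one-million-year figure follows from simple arithmetic, if one treats the replication stopovers as brief compared with travel time: the Milky Way's disk spans roughly 100,000 light-years, so an expanding wavefront of probes moving at one-tenth the speed of light needs on the order of

$$t \approx \frac{100{,}000\ \text{light-years}}{0.1\,c} = 1{,}000{,}000\ \text{years}$$

to sweep across the galaxy.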

If these encounters are friendly, the probes could exchange a wealth of information about their respective parent civilizations and any other civilizations previously encountered in their journeys through the galaxy. The closest terrestrial analogy would be a message placed in a very smart bottle that is then tossed into the ocean. If the smart bottle encounters another smart bottle, the two bump gently and give each other a copy of their entire collections of messages. One day, a beachcomber finds a smart bottle and discovers the entire collection of messages from across the world's oceans that has accumulated within.

If the interstellar probes have a hostile, belligerent encounter, they will most likely severely damage or destroy each other. In this case, the journey through the galaxy ceases for both probes, and the wealth of information about alien civilizations, existent or extinct, vanishes. Returning to the message-in-a-smart-bottle analogy here on Earth: in a hostile encounter both bottles are damaged, they sink to the bottom of the ocean, and their respective information contents are lost forever. No beachcomber will ever discover either bottle or have the chance to read the interesting messages contained within.

One very distinct advantage of using interstellar robot probes in the search for other intelligent civilizations is the fact that these probes could also serve as a cosmic safety deposit box, carrying information about the technical, social, and cultural aspects of a particular civilization through the galaxy long after the parent civilization has vanished. The gold-anodized records that NASA engineers included on the Voyager 1 and 2 spacecraft and the special plaques they placed on the Pioneer 10 and 11 spacecraft are humans' first attempts at achieving a tiny degree of cultural immortality in the cosmos. (Chapter 9 discusses these spacecraft and the special messages they carry.)

Star-faring, self-replicating machines should be able to keep themselves running for a long time. One speculative estimate by exobiologists suggests that only 10 percent of all the alien civilizations that ever arose in the Milky Way galaxy still exist, the other 90 percent having perished. If this estimate is correct, and if extinct civilizations dispatched probes at roughly the same rate as surviving ones, then on a simple statistical basis nine out of every 10 robotic star probes within the galaxy could be the only surviving artifacts of long-dead civilizations. These self-replicating star probes would serve as emissaries across interstellar space and through eons of time. Here on Earth, the discovery and excavation of ancient tombs and other archaeological sites provide a similar contact through time with long-vanished peoples.

Perhaps later this century, human space explorers or their machine surrogates will discover a derelict alien robot probe or will recover an artifact whose origins are clearly not terrestrial. If terrestrial scientists and cryptologists are able to decipher any language or message contained on the derelict probe (or recovered artifact), humans may eventually learn about at least one other ancient alien society. The discovery of a functioning or derelict robot probe from an extinct alien civilization might also lead human investigators to many other alien societies. In a sense, by successfully interrogating an alien robot star probe, the human team of investigators might be treated to a delightful edition of the proverbial Encyclopedia Galactica, a literal compendium of the technical, cultural, and social heritage of thousands of extraterrestrial civilizations within the galaxy (most of which are probably now extinct). (Chapter 11 also addresses the issue of alien contact.)

There are a number of interesting ethical questions concerning the use of interstellar self-replicating probes. Is it morally right, or even equitable, for a self-replicating machine to enter an alien star system and harvest a portion of that star system's mass and energy to satisfy its own mission objectives? Does an intelligent species legally "own" its parent star, home planet, and any material or energy resources residing on other celestial objects within its star system? Does it make a difference whether the star system is inhabited by intelligent beings? Or is there some lower threshold of galactic intelligence quotient (GIQ) below which star-faring races may ethically (on their own value scales) invade an alien star system and appropriate the resources that are needed to continue on their mission through the galaxy? If an alien robot probe enters a star system to extract resources, by what criteria does the smart machine judge the intelligence level of any indigenous life-forms? Should this smart robot probe avoid severely disturbing or contaminating existing life-bearing ecospheres?

Further discussion of, and speculative responses to, such intriguing SRS-related questions extend far beyond the scope of this chapter. However, this brief line of inquiry cannot end without at least mentioning the most important question in cosmic ethics: Now that the human species has developed space technology, are humans and their solar system above (or below) any galactic appropriations threshold?
