Solution: They Hit the Singularity

"Things do not change; we change."
Henry David Thoreau, Walden

Back in 1965, Gordon Moore, co-founder of the Intel Corporation, observed that the number of transistors one could fit on a square inch of integrated circuit seemed to double every 18 months.167 This observation became known as Moore's law, though of course it is more an observation than a law of Nature. In its current incarnation, Moore's law states that data density doubles every 18 months. The law has held true in the 36 years since its formulation, and certain other measures of computing hardware performance have kept pace. The result: cheap, fast computing power is readily available, and it has changed our world. If the law continues to hold over the next decade, and there seems to be no reason why it should not, then we will continue to see ever faster and more powerful machines.168
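To get a feel for what an 18-month doubling time implies, the short sketch below (my illustration, not part of the original argument) compounds the growth factor; the only assumed parameter is the 1.5-year doubling period.

# A minimal sketch of Moore's-law compounding, assuming a clean doubling
# of data density every 1.5 years (the 18-month figure quoted above).

def moores_law_factor(years, doubling_time=1.5):
    """Growth factor after `years`, with one doubling per `doubling_time` years."""
    return 2 ** (years / doubling_time)

for years in (1.5, 10, 36):
    print(f"after {years} years: ~{moores_law_factor(years):,.0f}x the density")

Run as written, it reports a roughly hundredfold gain per decade and a factor of about 17 million (2 to the power 24) over the 36 years mentioned above.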

Vernor Vinge, extrapolating the improvements in computer hardware and other technologies over the next few decades, argues that mankind will likely produce super-human intelligence some time before 2030.169 He considers four slightly different ways in which science might achieve this breakthrough: we might develop powerful computers that "wake up"; computer networks, like the Internet, might "wake up"; human-computer interfaces might develop so that their users become super-humanly intelligent; or biologists might develop ways of improving the human intellect. Such a super-intelligent entity might be mankind's last invention, because the entity itself could design even better and more intelligent offspring. The 18-month doubling time in Moore's law would steadily decrease, causing an "intelligence explosion." A quicker-than-exponential runaway event might end the human era in a matter of a few hours. Vinge calls such an event the Singularity.170
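The arithmetic behind a quicker-than-exponential runaway is worth making explicit. The sketch below is my own illustration of the idea, not a model Vinge gives: if each successive doubling takes a fixed fraction r < 1 of the time the previous one took, the total time for unboundedly many doublings is the geometric sum 18(1 + r + r^2 + ...) = 18/(1 - r) months, a finite horizon.

# A hedged illustration (not Vinge's own model): shrinking doubling times
# produce a finite-time blow-up. Each doubling takes r times as long as
# the one before it, starting from an 18-month first doubling.

def months_to_n_doublings(n, first=18.0, r=0.5):
    """Elapsed months after n doublings (a partial geometric sum)."""
    return first * (1 - r ** n) / (1 - r)

for n in (1, 5, 10, 50):
    print(f"{n:>3} doublings complete by month {months_to_n_doublings(n):.3f}")

# With r = 0.5 the limit is 18 / (1 - 0.5) = 36 months: however many
# doublings one asks for, all of them finish before that horizon.

With r = 0.5, fifty doublings (a growth factor of about 10^15) are complete within 36 months, and nothing in the arithmetic stops the sequence there; that is the sense in which the human era could end "in a matter of a few hours."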

The term Singularity is unfortunate, in that mathematicians and physicists already use it in a specific sense: a singularity occurs when some quantity becomes infinite. At Vinge's Singularity, however, no quantity need become infinite. The name nevertheless captures the essence of what would be a critical point in history: things would change very rapidly at the Singularity, and, like the singularity in a black hole, it would be hard to predict what happens after passing it. The super-intelligent computers (or the super-intelligent humans or human-computer beings) turn into... what? It is difficult, perhaps impossible, to imagine the capabilities and motives and desires of entities that are the product of this transcendental event.171

Vinge argues that if the Singularity is possible, then it will happen. It has something of the character of a universal law: it will occur whenever intelligent computers learn how to produce even more intelligent computers. If ETCs develop computers — since we routinely assume they will develop radio telescopes, we should assume they will develop computers — then the Singularity will happen to them, too. This, then, is Vinge's explanation of the Fermi paradox: alien civilizations hit the Singularity and become super-intelligent, transcendent, unknowable beings.

Vinge's speculations about the Singularity are fascinating. And as an explanation of the Fermi paradox, the suggestion improves on explanations requiring a uniformity of motive or circumstance. Not every ETC will blow itself up, or choose not to engage in spaceflight, or whatever. But we can argue reasonably that every technological civilization will develop computing; and if computing inevitably leads to a Singularity, then presumably all ETCs will vanish in one. The ETCs are there, but in a form fundamentally incomprehensible to non-super-intelligent mortals like us. Nevertheless, as an explanation of the paradox, I think it has problems.

First, even if high intelligence can exist on a non-biological substrate, the Singularity might never happen.172 There are several reasons, economic, political and social, why a Singularity might be averted. There are also technological reasons why the Singularity might not occur. For example, advances in software will be at least as important to the attainment of the Singularity as advances in hardware. Without much more sophisticated software than we currently possess, the Singularity will simply not happen. Now, while it is true that various hardware measures seem to obey Moore's law, improvements in software have been much less spectacular. (The word processor I use is the latest version of the program. It certainly has more features than the version I was using ten years ago, but I never use those features. Indeed, the program is probably slightly less useful to me than it was ten years ago; I persevere with it because everyone else uses it and I need to exchange documents with people. The program I am using to typeset this book, which is called TeX, is a wonderful piece of software whose creator froze development on the program several years ago.173 While there is some progress in the worldwide TeX community toward an even better typesetting program, progress is much slower than would be the case if Moore's law were in operation. Of course, the kind of software required to create the "intelligence explosion" has nothing to do with word processors or typesetting programs. But the point is the same: advances in software and in software methodologies come at a much slower rate than advances in hardware. We simply may not be smart enough to write the software that would lead to a Singularity.) Perhaps we will see a future in which incredibly powerful machines do amazing things, but without self-awareness; surely this is at least as plausible as a future that contains a Singularity.

Even if a Singularity is inevitable, I fail to see how it explains the Fermi paradox. We can ask, as Fermi might have: where are the super-intelligences? The motives and goals of a super-intelligent post-Singularity creature may be unknowable to us, but so, presumably, would be the motives and goals of any "traditional" K3 civilizations that may exist. Yet we are happy to think about how to detect such K3 civilizations. (In fact, we may have more chance of understanding the post-Singularity beings on Earth than we would of understanding extraterrestrials, because in some sense those entities would be us. We would, in some sense, have created them and possibly imprinted upon them certain values.) Even if we are unable to understand or communicate with super-intelligent entities, it does not follow that those entities must disengage from the rest of the physical Universe. A super-intelligence must, like us, obey the laws of physics; and presumably it would make rational economic decisions. So the same logic that suggests an advanced technological civilization would quickly colonize the Galaxy leads us to conclude that a super-intelligence would also colonize the Galaxy, except that it would do so more quickly and efficiently than "normal" biological life-forms.

Even if they choose not to colonize, and even if post-Singularity entities transcend our understanding of reality, going off into other dimensions (page 122), spending their time creating the child universes that Harrison proposed (page 59), or engaging in any activity bar exploration of our Universe, there would be non-augmented, normal-intelligence beings left behind. In our case, I feel most of mankind would choose not to take part in the Singularity. But it does not follow that we would become extinct. Unless the super-intelligences felt they had to destroy us (why would they bother?), we could go on living as we have always done. We might bear the same relation to the super-intelligent beings as bacteria do to us; but so what? Two billion years ago, bacteria were the dominant form of life on Earth, and by many measures (species longevity, total biomass, ability to withstand global catastrophe, and so on) they still are. The existence of humans simply does not affect bacteria. In the same way, the existence of super-intelligent beings need not affect humanity; they could do their weird stuff, and we could continue doing ours. And the existence of super-intelligent beings would not affect our ability to communicate with like-minded ETCs.

To my mind, the existence of a Singularity does not explain the Fermi paradox. It exacerbates it!
