We seek him here, we seek him there.
Baroness Orczy, The Scarlet Pimpernel
Even if ETCs are broadcasting radio signals, and we are tuned to the correct channels, where should we point our telescopes? The sky is large, and our resources are few. It would be tragic to train our telescopes on Canopus, say, if the civilization on Capella were trying to catch our attention.
We can employ two search strategies. A targeted search focuses on individual nearby stars, using instruments of great sensitivity in the hope of detecting signals deliberately beamed toward us, or leakage radiation that happens to pass our way. A wide-sky survey scans large areas of the celestial sphere and thus encompasses a myriad of stars; its sensitivity, however, is vastly inferior to that of a targeted search.
The earliest search for an ETC — Drake's Project Ozma — targeted just two stars: Tau Ceti and Epsilon Eridani. Of modern targeted searches, the best known is Project Phoenix: it targets a list of about a thousand old, Sunlike stars within a distance of 200 light years, and listens for signals within the range 1.2 to 3.0 GHz in channels just 0.7 Hz wide — so for each star more than 2.5 billion channels are checked. However, most of the large SETI projects currently in operation — such as SERENDIP, Southern SERENDIP and BETA — are wide-sky surveys. Future projects — such as the SETI League's plan to link the observations of 5000 small radio telescopes — will be wide-sky surveys.123 Targeted searches are a rarity; of today's major radio searches, only Project Phoenix employs a targeted strategy. Maybe we are employing our precious SETI resources in the wrong manner? Maybe we do not see ETCs because we are not searching with sufficient sensitivity? Should we not look hard and long and deep at planetary systems that might harbor life, instead of skimming across the sky?
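The channel count quoted above follows directly from the figures given for Project Phoenix; a quick back-of-the-envelope calculation (my own, not an official project specification) confirms it:

```python
# Rough per-star channel count for a Project Phoenix-style targeted search,
# using the figures quoted in the text (an illustrative calculation only).
band_low_hz = 1.2e9      # lower edge of the search band, 1.2 GHz
band_high_hz = 3.0e9     # upper edge of the search band, 3.0 GHz
channel_width_hz = 0.7   # width of each channel, 0.7 Hz

channels_per_star = (band_high_hz - band_low_hz) / channel_width_hz
print(f"{channels_per_star:.2e} channels per star")  # about 2.6e9
```

The 1.8 GHz band divided into 0.7 Hz slices gives roughly 2.57 billion channels, matching the "more than 2.5 billion" figure in the text.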
Well, no. It turns out that the modern wide-sky surveys are doing the right thing. An analysis by Nathan Cohen and Robert Hohlfeld shows that we should play the numbers and look at as many stars as possible.124
In Nature, we often find that objects with a large value of some property are rare, while objects with a smaller value of that property are common. Thus, bright stars of spectral class O are few in number, while dim M-class stars are widespread. Strong radio sources like quasars are rare, while weak radio sources like stellar coronas are common. Which are we more likely to detect: the rare "bright" objects or the common "dim" objects? It depends on the strength of the rare sources compared to the common sources. For example, quasars are incredibly strong radio emitters; it does not matter that they are at extreme distances — they far outshine the closer but weaker stellar sources. Thus, radio telescopes in the early 1960s could detect rare, distant quasars more readily than common, nearby sources. In the same way, even if advanced ETCs are incredibly rare, Cohen and Hohlfeld showed that we are more likely to detect their beacons than weak signals from a host of ETCs not much more advanced than ourselves. (The only way to avoid this conclusion is if the stars are teeming with intelligent life. If ETCs are common, then a targeted search like Project Phoenix is likely to find one in its list of target stars.) Wide-sky surveys are therefore more likely to produce positive results; at the very least, when we pick targets for in-depth study, we should try to ensure that the receiving beam contains galaxies or large clusters of stars behind the target.
A "natural" frequency for intergalactic communication is represented by f = h To « 56.8 GHz, where To is the observed temperature of the cosmic background radiation, k is the Boltzmann constant, and h is the Planck constant (it thus links the regimes of cosmology and quantum physics). This frequency was originally proposed in 1973 by Drake and Sagan, and independently by Gott in 1982.
I have a slight feeling of unease with the wide-sky surveys, and it harks back to the problem of the frequency at which we should listen. The surveys take in distant galaxies, yet most of them listen at or around the waterhole. For intergalactic (as opposed to interstellar) communication, however, there is a better frequency than the waterhole: 56.8 GHz. This frequency is tied to the observed cosmic microwave background, so it is a universal frequency. If an ETC in a distant high-redshift galaxy emitted a signal at a frequency related to it, then it could be sure the signal would be recognizable at any future time. The signal could potentially reach large numbers of galaxies.125 (There is another factor to consider here. On Earth it took about 4.5 billion years for a technological civilization to arise. If this is typical of the time other civilizations require, then, depending upon the exact details of the cosmological model one prefers, it is pointless to look at galaxies with redshifts much larger than 1. The light we now see from these distant galaxies set off when the Universe was only about 4.5 billion years old; there would not have been time for a K3 civilization to arise.) Unfortunately, Earth's atmosphere has a wide oxygen absorption band at 60 GHz, which means our ground-based radio telescopes cannot carry out a search at 56.8 GHz; observations at this frequency will have to be performed from space. In the meantime, perhaps a K3 civilization in a faraway galaxy is signaling us right now.
I cannot leave this discussion without mentioning one of the most innovative of recent scientific projects. Since Drake first pointed his radio telescope at Tau Ceti in the hope of finding a signal, engineers have improved the sensitivity of radio receivers by a factor of about 20, and astronomers have amassed much more knowledge about the birth and evolution of stellar systems. But the biggest development since the Project Ozma days has been the remarkable increase in available computing power. The SETI@home project, founded by David Gedye, has harnessed this power in a way that has captured the enthusiasm of the general public as perhaps no other scientific project has done.126 Participants download a small client program to their home or work computer. The program usually runs as a screensaver; in essence, when the user's computer is not engaged in "proper" work, the client program comes to life and begins calculations on a packet of data, known as a work unit, recorded by the Arecibo radio telescope. Once the calculations are complete, the program sends the work unit back to SETI@home, where it is merged with all the other results from around the world, and a new work unit is downloaded. More than a million CPUs have crunched data from Arecibo, and they have combined to make SETI@home the world's largest and most powerful virtual computer.127 This immense computing power has enabled astronomers to make one of the most finely tuned searches for ETCs ever attempted: the program looks at data from a band with a width of 2.5 MHz centered on the 1420 MHz hydrogen line, and examines channels as narrow as 0.07 Hz.
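To give a sense of how fine that search is, the same back-of-the-envelope arithmetic as before gives the number of ultra-narrow channels spanning the 2.5 MHz band (my own calculation from the figures in the text, not a SETI@home specification):

```python
# Number of 0.07 Hz channels spanning the SETI@home search band,
# from the figures in the text (an illustrative calculation only).
band_width_hz = 2.5e6    # 2.5 MHz band around the 1420 MHz hydrogen line
channel_width_hz = 0.07  # narrowest channels examined, 0.07 Hz

channels = band_width_hz / channel_width_hz
print(f"{channels:.2e} channels")  # about 3.6e7
```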
New projects like SETI@home, and traditional projects like SERENDIP and BETA, seem to have got the search strategy right: look at wide areas of the sky, across billions of stars, and hope that somewhere in that vast collection we find a very rare yet very powerful transmission.
So far, we have heard nothing.