[351] If intelligent life is common in the Universe, many species may have attempted to discover their neighbors. We are beginning to make our first attempts; our search may have to grow by orders of magnitude before we succeed.
If intelligent life is common in the Universe, it seems unlikely that the millions (or billions) of advanced cultures in our Galaxy will all go through their entire histories totally isolated from one another. Surely, as their convictions grow that theirs is not the only advanced society, many will attempt to discover their neighbors. In fact, many may have done so already. We are beginning to make our first attempts; our search may have to grow by orders of magnitude before we succeed.
How shall we best proceed? How can we have the greatest chance of success at each stage for the time, energy, and effort expended? We present here some of the constraints imposed by the magnitude of the problem and by physical law. It turns out that enormous times or energies are required for interstellar travel; unless we do it properly, large energies are needed for contact by radiated signals. The principle of least energy expenditure, and therefore of least cost, leads us to a preferred part of the radio spectrum, but several alternative modes of examining this region remain. Many of these modes cannot be ignored, but because we cannot do everything, we will be forced, as in any search, to give our highest priorities to those modes deemed to have the highest a priori probability of success based on all we know. We can never design a search strategy based on what we do not know.
Decades of science fiction have lulled many of
us into accepting interstellar spaceflight as a reality for "them"
and a near reality for us. In fact, interstellar travel requires such
enormous expenditures of time or energy that it may not exist, or it
may be attempted only in
extremis. Since the nearest star is
over 4 light years from us, any round-trip interstellar flight
completed in a human working lifetime (T) of, say, 40 years requires
a ship speed (v) at least 1/5 that of light (c). Such a ship is far
beyond our present technology. But let us assume it is not beyond
theirs and ask how much energy an ideal rocket would require. We will express speeds as fractions of the speed of light (β = v/c) and energies as fractions of the energy equivalent of the rocket payload mass (ε = E/m_p c²).
The most efficient rocket is one whose motor uses all the available fuel energy to accelerate a propellant to an exhaust speed equal to the sum of all speed increments experienced by the ship since launch. (An auxiliary initial launch vehicle or a period during which the exhaust speed is greater than the ship speed is needed to avoid an infinite mass ratio.) If all speed increments are in the same direction, such a rocket leaves its exhaust stationary in the original rest frame; all the fuel energy is converted to kinetic energy of the payload. So far as the rocket is concerned, a retrofiring is the same as a forward-firing, and we may calculate the energy expenditure as the kinetic energy the ship would have in the original rest frame if all firings had been additive.
Adding β to β relativistically, we find that, for a one-way trip,

β₁ = 2β/(1 + β²)    (1)

and

ε₁ = 2β²/(1 − β²)    (2)

while for a round trip

β₂ = 4β(1 + β²)/(1 + 6β² + β⁴)    (3)

and

ε₂ = 8β²/(1 − β²)²    (4)

[353] Finally, we define k as the ratio of elapsed ship time to the light time:

k = (1 − β²)^½/β    (5)

The elapsed ship time is k years per light year traveled. Figure 1 shows the behavior of β₁, ε₁, β₂, ε₂, and k as functions of β.
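The rocket relations can be checked numerically. In the ideal rocket described above, n additive speed increments of β give a total energy fraction ε = cosh(n·tanh⁻¹β) − 1, which reduces to the closed forms for one-way (n = 2) and round (n = 4) trips. A sketch (the function names are ours):

```python
import math

def eps(beta, n):
    """Energy fraction E/(m c^2) after n additive speed increments of beta,
    for the ideal rocket that leaves its exhaust at rest: all fuel energy
    becomes payload kinetic energy, so eps = gamma_total - 1."""
    return math.cosh(n * math.atanh(beta)) - 1.0

def k_factor(beta):
    """Elapsed ship time per light year of travel (eq. 5)."""
    return math.sqrt(1.0 - beta**2) / beta

beta = 0.2                # ship speed 1/5 the speed of light
eps1 = eps(beta, 2)       # one-way trip: accelerate, then decelerate
eps2 = eps(beta, 4)       # round trip: four speed increments

# The closed forms (2) and (4) agree with the rapidity calculation:
assert abs(eps1 - 2*beta**2/(1 - beta**2)) < 1e-12
assert abs(eps2 - 8*beta**2/(1 - beta**2)**2) < 1e-12

print(eps1, eps2, k_factor(beta))   # ~0.083, ~0.347, ~4.9
```

At β = 0.2, the round trip already costs about 35% of the payload rest energy, and the crew ages about 4.9 years per light year traveled.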
In a search for extraterrestrial intelligence by starship, the strategy would presumably be to search all likely stars out to some radius R before....

[354] ....searching any at a greater distance. Of course, if a humanly habitable planet were found that was devoid of intelligent life, a colony might be established and (much later) a new search started from there. We will ignore this serendipitous complication.
Solving equation (5) for β and substituting in (4), we get

ε₂ = 8(1 + k²)/k⁴    (6)

If a round trip to a range r must be completed in a ship's time τ, then 2kr ≤ τ. But for minimum energy we choose k as large as we can: k = τ/2r. Then equation (6) becomes

ε₂ = 8[(2r/τ)⁴ + (2r/τ)²]    (7)
The number of likely stars out to several hundred light years is given very nearly by

N = (R/r₀)³    (8)

where r₀ ≈ 8 light years. The total energy needed to search all likely stars out to radius R is

ε_T = ∫₀^R ε₂(r) dN = (24/r₀³)[16R⁷/7τ⁴ + 4R⁵/5τ²]    (9)

Figure 2 shows ε_T as a function of R, taking r₀ = 8 light years and τ = 40 years.
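The growth of the total search energy with radius is dramatic, as a direct evaluation shows. A sketch (r and R in light years, τ in years, ε in units of the payload rest energy; the integral of eq. (7) over the star count of eq. (8) gives eq. (9)):

```python
def eps_round_trip(r, tau):
    """Eq. (7): round-trip energy fraction to range r in ship time tau."""
    x = 2.0 * r / tau
    return 8.0 * (x**4 + x**2)

def eps_total(R, tau, r0=8.0):
    """Eq. (9): eps_round_trip integrated over dN = 3 r^2 dr / r0^3."""
    return (24.0 / r0**3) * (16*R**7/(7*tau**4) + 4*R**5/(5*tau**2))

e = eps_total(100.0, 40.0)   # all likely stars within 100 light years
print(e)                     # ~4.4e6 payload rest energies
```

With ε = 1 equivalent to roughly 1000 years of U.S. energy consumption for a 1000-ton payload, ε_T ≈ 4.4 × 10⁶ corresponds to billions of years of U.S. energy, the figure cited below for the spaceship search.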
In assessing the significance of figure 2, one
should realize that with all the necessary engines, auxiliary power
systems, control, communication, and guidance equipment, repair
shops, crew quarters, and life-support systems, it is inconceivable
that the payload would weigh less than 1000 tons. Thus ε = 1 corresponds to at least 10²³ J, or 1000 years of total energy consumption by the United States. The ordinate can thus be labeled "millennia of U.S. energy."
Doubling τ implies two generations, which doubles the living space needed and adds nursery and educational facilities, so that the payload is correspondingly increased. The longer time also increases the risk of disaffection or actual mutiny by the crew: the parents were presumably screened for psychological stability; the children are not. For R > 40 light years, the results of the search are denied to those who launched it. SETI by spaceship is a prohibitively expensive and risky undertaking. (We have said nothing about impacting interstellar debris at near-light velocity.)
The costs drop drastically if we replace the spaceships with automatic probes. Payloads on the order of 10 tons seem reasonable, one-way journeys suffice, and the mission time can easily exceed a working lifetime. In fact, all the probes can be designed for the same β, regardless of their range. From equations (2) and (8) the energy needed would be

ε_T = (R/r₀)³ · 2β²/(1 − β²)    (10)
We have considerable latitude in our choice of β, so the energy required can vary over a wide range. It might, however, be politically impossible to get funding for results that will not appear for centuries. If R = 100 light years and β = 1/2, the most distant returns will take 3 centuries to come in. With these values, ε_T ≈ 1300. With a 10-ton probe this represents 13,000 years of U.S. energy rather than 4.5 billion years. We might (optimistically) estimate the cost of each launch at $1 billion, or $2 trillion for the entire fleet.
How do we retrieve the data from these probes? By radio, of course!
If each probe carries a 10,000 W X-band transmitter and beams the
signal at us with a 10-m antenna, its effective isotropic radiated
power will be 10¹⁰ W, and at 100 light years this will produce an intensity of 10⁻²⁷ W/m². About 20 dishes, each 100 m in diameter, would be needed to receive the signal at a low error rate (10⁻³) at 1 bit/sec from each probe at or near maximum range. Since there would be 1000 probes at 80 light years or more, we would need more than a thousand 100-m dishes if all are to be monitored continuously.
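The probe link budget above can be sketched in a few lines. The carrier frequency (X-band ≈ 8.4 GHz), system temperature (10 K), and bin width (1 Hz) are our illustrative assumptions, not values given in the text:

```python
import math

c  = 2.998e8      # speed of light, m/s
kB = 1.381e-23    # Boltzmann constant, J/K
ly = 9.461e15     # meters per light year

f    = 8.4e9                          # assumed X-band carrier
lam  = c / f
Pt   = 1.0e4                          # 10 kW transmitter
gain = (math.pi * 10.0 / lam)**2      # 10-m probe dish, eq. (14)
eirp = Pt * gain                      # effective isotropic radiated power

R    = 100 * ly
flux = eirp / (4 * math.pi * R**2)    # intensity at 100 light years

# Power collected by one 100-m dish, and the number of such dishes
# needed for the signal to reach the noise power kTb:
A_dish = math.pi * 100.0**2 / 4
p_one  = flux * A_dish
dishes = (kB * 10.0 * 1.0) / p_one
print(eirp, flux, dishes)   # of order 1e10 W, 1e-27 W/m^2, a few tens
```

The result, a few tens of 100-m dishes per maximum-range probe, matches the scale of the argument in the text.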
Thus we would require an antenna system for the probe search that is at least as large as any yet proposed for a radio search. Yet the cost of a radio search is only a small fraction of the cost of the probes. Why, then, bother with the probes? Why not first listen for more powerful transmitters that may have been transmitting for centuries? This brings us to a radio search as the logical way to begin a SETI program. Because we are very uncertain of the strength of the signals we might receive, the radio search should start with modest, even with existing, antennas before a large dedicated facility is built.
But before we jump to radio, are there other signaling means we should consider? Signals across empty space must be carried by some kind of physical particle. To qualify, the particle should:
- 1. require a minimum amount of energy to exceed the natural background,
- 2. travel at, or close to, the speed of light,
- 3. not be deflected by galactic or stellar fields,
- 4. be easy to generate, detect, and beam, and
- 5. not be absorbed by the interstellar medium or by planetary atmospheres and ionospheres.
Requirement 3 excludes charged particles. Requirements 1 and 2 exclude all particles except those with zero rest mass; an electron traveling at half the speed of light has a kinetic energy a hundred billion times that of a photon at the best part of the radio spectrum. Of the zero-rest-mass particles, gravitons and neutrinos fail requirement 4. There is hardly anything [357] harder to generate than gravity waves, hardly anything harder to detect than a neutrino (except perhaps a quark!).
Only photons survive all the requirements, and so electromagnetic waves of some frequency are the only known suitable signal. Later we will use requirements 1 and 5 to find the best frequencies.
The energy required to contact other intelligent life forms by radio depends greatly on the assumptions one makes. To correspond more or less with the spaceship and probe cases, let us assume that we have built a 1000-element array of 100-m dishes and have failed to detect any signals from stars within 100 light years, and that we then elect to beam signals at the 2000 likely stars within that range. Assume that we beam the same power at each star, and that this power is sufficient to be detected at 100 light years with another 100-m dish. Then the average power required is about 100 MW. If we operate the beacons for 30 years (with occasional interruptions to listen to stars within 15 light years), the total energy consumed will be about 10¹⁷ J. We might then listen for responses over the next 200 years, so that the total time is much like the probe case (see table 1).
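The 10¹⁷ J figure is simple arithmetic on the stated assumptions; a sketch:

```python
avg_power = 100e6            # W, averaged over all 2000 beams
seconds_per_year = 3.156e7
energy = avg_power * 30 * seconds_per_year   # 30 years of operation
print(energy)                # ~1e17 J
```

This is about a million times less energy than the probe fleet requires, which is the point of the comparison in table 1.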
[Table 1. Comparison of the three search modes: spaceships, probes, and beacons.]
A rational radio search does not begin by establishing beacons. It begins with the assumption that others have already been radiating for a long time, for if that is true, we can discover the signals as soon as our receiver sensitivity is high enough. We need not wait out the round-trip light time. Further, the listening program begins with existing antennas and grows to larger antennas or arrays only if earlier searches fail.
The energy cost of this listening phase is minuscule; even the hardware costs are reasonable. A significant listening program can be conducted at a cost that is small compared to other space missions.
The energy argument, which brings us to electromagnetic waves, can be continued to find the part of the spectrum where the least energy is needed for detection. Since we must receive at least one photon, and actually some [358] number n, to be sure we have a signal, and since the energy per photon is proportional to frequency, we should use the lowest frequency allowed by other considerations. This would also reduce the interstellar absorption. Below about 60 GHz, the cosmic background radiation enters the picture, and we then require not nhν but nkT joules to be sure of our signal, where k is Boltzmann's constant and T = 2.76 K. Finally, below about 1 GHz, T rises rapidly because of the synchrotron radiation of electrons in the galactic magnetic field. Figure 3 shows the sky "noise temperature" (the blackbody temperature needed to produce the observed noise) as seen from space, as a function of frequency and galactic latitude. Quantum effects are included, so that the necessary received energy per bit is everywhere proportional to the ordinate. The low-temperature (and hence low-noise, low-received-energy) valley from about 1 to 60 GHz is the so-called "free-space microwave window."
Notice that there is nothing geocentric about this window. Radio observers anywhere in the Galaxy would see substantially the same window and would conclude that this was the best part of the spectrum for interstellar communication.

[359] There are several reasons to prefer the low end of the window and few, other than existing radio-frequency interference, to make us prefer the high end for search purposes. At the low end of the window:
- 1. Antenna surface tolerances are greater, so collecting area is cheaper;
- 2. For a given collecting area, antenna beams are broader (more sky is searched per pointing direction);
- 3. Frequency drift rate from Doppler effects and other causes is less, making the detection of monochromatic signals easier;
- 4. Power densities in transmitters and waveguides are less, allowing higher power transmitters; and
- 5. Atmospheric attenuation and noise are less, as is receiver noise.
Figure 4 shows the microwave window as deteriorated by atmospheric oxygen and water vapor. The upper end of the window is ruined on any Earth-like planet, but because of points 1 through 4 above, we do not mind going to the lower end to achieve point 5. There the atmospheric penalty is about 3-4 K. We conclude that for search purposes the optimum part of the spectrum is in the range 1-2 GHz.
By an eerie coincidence, right in the middle of this optimum region we find the spectral lines of hydrogen (1420 MHz) and hydroxyl (1612, 1665, 1667, and 1720 MHz). [360] Cocconi and Morrison (1959) were the first to point out the suitability of the microwave window for interstellar contact and the significance of the hydrogen line as a signpost. In 1971, the Cyclops team pointed out the significance of OH as the other dissociation product of life-giving water, and suggested that the "waterhole" between these signposts be considered the prime spectral region where intelligent species might meet. The waterhole is probably a preferred region deserving intensive, but not exclusive, attention.
To appreciate some of the constraints and tradeoffs in a radio search, we need to know the basic factors determining gain, directivity, and signal range. (Those familiar with the subject may want to skip this section.)
If a transmitter radiates a power P_t isotropically, then at a range R this power will be spread uniformly over a sphere of area 4πR². An antenna of area A will collect a fraction A/4πR² and thus receive a power

P_r = P_t A/4πR²    (11)

If the transmitting antenna radiates only into a solid angle Ω, the effective isotropic radiated power (EIRP) for receivers in the beam will be greater than the actual transmitted power, P_t, by a factor 4π/Ω; this is the power gain g of the antenna:

g = 4π/Ω    (12)

The effective area of an isotropic antenna can be shown to be

A₀ = λ²/4π    (13)

where λ is the wavelength of the transmitted radiation. This is the area of a circle one wavelength in circumference. The gain of a receiving antenna is proportional to its area and, since the gain of an isotropic antenna is unity,

g = 4πA/λ² = (πd/λ)²    (14)

where the latter equality holds if A is a circle of diameter d.

When these relations are combined with equation (11), the ratio of received-to-transmitted power can be expressed in several ways:

P_r/P_t = A_t A_r/(λR)² = g_t A_r/4πR² = g_t g_r (λ/4πR)²    (15)
For single-unit antennas, the maximum practical area tends to be proportional to λ², so λ disappears from the denominator of the first expression; for arrays, the gain tends to be independent of λ. Equation (15) recommends low frequencies for long-distance transmission provided we have no fixed-size constraint on our antennas.
By reciprocity, equations (12), (13), and (14) apply to both transmitting and receiving antennas. If n is the number of directions in which an antenna must be pointed to cover the sky, we see from equation (12) that (ideally)

n = 4π/Ω = g    (16)
Directivity and gain are equal; both are proportional to the antenna area measured in square wavelengths. Unless we use multiple-feed horns on a single antenna or multiple beams in a phased array and associate a receiving system with each horn or beam, we can only get broader sky coverage per pointing direction at the expense of sensitivity.
The nominal range limit is the distance R at which the received power equals the receiver noise power, N = kTb, where T is the system noise temperature and b is the resolved bandwidth. From equation (11),

R = [EIRP · A/4πkTb]^½ = (d/4)(EIRP/kTb)^½    (17)

where, again, the latter relation holds for circular antennas.
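The range-limit relation is easy to evaluate. A sketch (the EIRP, noise temperature, and bandwidth are illustrative choices):

```python
import math

kB = 1.381e-23    # Boltzmann constant, J/K
ly = 9.461e15     # meters per light year

def range_limit(eirp, d, T, b):
    """Eq. (17): range at which the received power equals kTb,
    for a circular receiving dish of diameter d (meters)."""
    return (d / 4.0) * math.sqrt(eirp / (kB * T * b))

# A 1e10-W EIRP source received by a 100-m dish,
# with T = 10 K and a 1-Hz resolved bandwidth:
R = range_limit(1.0e10, 100.0, 10.0, 1.0)
print(R / ly)     # a few tens of light years
```

Note that the range grows only as the square root of the EIRP: a hundredfold increase in transmitter power buys a tenfold increase in range.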
From a SETI standpoint, the only variables under our control are d, the antenna diameter, and N, the receiver noise in the signal channel. The effective isotropic radiated power is up to "them." To increase d is expensive; and while this may ultimately be necessary, we should first minimize N by using the lowest noise receivers, feeds, and antennas and by using optimum detection methods.
If s(t) is a signal of arbitrary shape but finite duration, one way to detect the signal in the presence of noise is to multiply the received waveform by a gating waveform g(t) that is zero (gate closed) if and only if s(t) is [362] zero, and to integrate the resulting waveform. It can be shown that the best possible signal-to-noise ratio in the output is obtained if the amplitude spectrum, G(ν), of the gate is given by

G(ν) = c F*(ν) S*(ν)/P_n(ν)    (18)

where F*(ν) is the conjugate of the transmission of any preselection filter, c is a constant, S(ν) is the signal amplitude spectrum, and P_n(ν) is the noise power spectrum. For our purposes, we set P_n(ν) = kT = constant. Two limiting cases are of interest:

(a) No preselection, F(ν) = constant. Then G(ν) = constant × S*(ν), and therefore

g(t) = constant × s(t)    (19)
The ideal gate is called a matched gate; in the white-noise case it has the shape of the signal itself and thus weights each instant of time in proportion to the expected signal amplitude.
(b) The gate is a δ-function. Then G(ν) = constant, and

F(ν) = constant × S*(ν)/P_n(ν)    (20)

The ideal filter is called a "matched" filter; in the white-noise case it has a transmission proportional to the amplitude of the signal spectrum and thus weights each frequency in proportion to the expected signal amplitude there. The conjugacy aligns all frequency components to peak at t = 0 and thus produces the highest peak at that time.
For a matched gate or matched filter, or any optimum combination as specified by equation (18), the signal-to-noise power ratio in the output is

S/N = 2W/kT    (21)

where W is the signal energy.
If the signal is oscillatory, the matched gate will be oscillatory also and will serve as a homodyne detector. But if we do not know the phase of the oscillation in advance, we must use both quadrature phases and take the quadratic sum. This is the same as using a square-law detector following a matched filter and degrades the signal-to-noise ratio to at best

S/N = W/kT    (22)
[363] The signal-to-noise ratio out of a square-law detector is

(S/N)_out = (P/N)²/(1 + 2P/N)    (23)

where P is the signal power and N the noise power. Thus the degradation is 3 dB (2 to 1 in power), as seen by comparing equation (22) to (21) when P/N is large. For lower input signal-to-noise ratios, the degradation is more.
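The two regimes of square-law detection can be seen numerically. A sketch, using the form (S/N)out = (P/N)²/(1 + 2P/N) for the detector output:

```python
def snr_out(p_over_n):
    """Output SNR of a square-law detector for input SNR P/N."""
    return p_over_n**2 / (1.0 + 2.0 * p_over_n)

# For large input SNR the output approaches half the input (3 dB loss):
for x in (1.0, 10.0, 1000.0):
    print(x, snr_out(x) / x)      # ratio climbs toward 1/2

# For small input SNR the loss is far worse than 3 dB:
print(snr_out(0.1) / 0.1)         # well under 1/2
```

This "small-signal suppression" is why weak signals must be integrated before, not after, detection whenever possible.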
Note in equations (21) and (22) that the signal-to-noise ratio depends only on the total signal energy received and not at all on how that energy is distributed in time or frequency. To design a matched filter in the first place, we need to know that distribution, and this is not possible in SETI. We can only approximate a matched filter during the search phase by making a variety of plausible assumptions as to the signal distribution in the time-frequency plane and testing each assumption.
We can expect to find two kinds of ETI signals: those broadcast or beamed for the use of the senders, which we merely intercept, and those intended for our receivers, which we will call beacons.
It seems clear that we can only hope to detect leakage signals if, like ours, they contain strong monochromatic components or carrier waves. Some argue that as civilizations advance they make their transmissions more efficient by eliminating carriers. In their next breath, these same people expect advanced races to become Kardashev type II cultures with a substantial fraction of the energy of their star at their disposal. It seems likely that many ETI signals will contain carriers for the same reason ours do: to save complexity and cost in both the transmitter and receiver. Even if these carriers are present, however, we will not detect them unless we build a large receiving array or unless their signals are much more powerful than ours. Early searches using existing radio telescopes will depend heavily for their success on the presence of beacons or very powerful sources such as orbiting solar power stations with microwave downlinks.
What characteristics might we expect of a beacon signal? We may assume that it will be as cost-effective as possible; that is, it should, with the least transmitted energy, announce its nature unmistakably to the least expensive receiver in the shortest time. Thus the following characteristics are likely:
- [364] 1. It will be located at the best part of the spectrum, in or near the waterhole.
- 2. It will not resemble natural signals such as spectral lines or pulsars.
- 3. The effective duty cycle will be high.
- 4. It will be designed to minimize the difficulty of detection. Thus, for example, it will be circularly polarized since this reduces the ambiguity: only two receivers are needed to cover the possibilities.
- 5. Any information-bearing modulation will not destroy the detectability or its distinctiveness as an artifact.
A strong case can be made for a simple monochromatic (or nearly monochromatic) signal. It is nonnatural and always there. Binary-coded modulation can be transmitted by occasional polarization reversals at regularly spaced times. Once found, it can be homodyne-detected for improved SNR. If generated monochromatically, the diurnal and annual Doppler shift will identify it as originating on a planet and give the local day and year lengths. If Doppler-corrected in our direction, the receiver bandwidth (or binwidth in a multichannel spectrum analyzer) can be made very narrow indeed, and the coherent observing time can be made correspondingly long. An upper limit on coherence time appears to be set by multipath phenomena in the interstellar medium. This effect decreases with increasing frequency and is one of the few reasons, perhaps the only reason, favoring the high end of the microwave window. At the waterhole, coherence bandwidths on the order of 1 mHz are possible out to 250 light years, which is beyond our present range limit.
Because of the finite observing time, even a monochromatic signal is received as a pulse. The matched filter is the transform of this pulse, and this is, in fact, the filter shape provided by each bin of a multichannel spectrum analyzer (MCSA). Thus the MCSA provides a matched filter for the (arbitrary) observing time used, provided the signal is at the center of a bin.
To receive a coherent beacon efficiently, then, we are forced to divide the spectrum into bins whose width is determined by the reciprocal of the observing time or by the Doppler drift rate, whichever is limiting. A signal drifting at a rate ḟ will cross a band b in time t = b/ḟ. The response time of the filter is τ ≈ 1/b. If t > τ, the bandwidth is larger than needed; if t < τ, only a partial response will occur. In either case, the SNR suffers, so for best results we choose t = τ and find that

b = ḟ^½    (24)
The peak diurnal Doppler drift rate is

ḟ = (rω²/c) ν cos φ    (25)

[365] where r is the planet radius, ω its angular velocity, φ the latitude, and ν the operating frequency. For Earth, near the equator, ḟ/ν ≈ 10⁻¹⁰/sec, or ḟ ≈ 0.15 Hz/sec if ν = 1.5 GHz. Thus for uncorrected Earth Doppler, we would choose b ≈ 0.4 Hz. However, we have no way of knowing whether
Earth is typical. We might not expect to find r more than twice or
less than half that of Earth, but ω is very uncertain. Of the planets
in our Solar System, Mercury and Venus barely rotate while Earth and
Mars both have 24-hr periods. But the Moon, an unusually large
satellite for a planet of our size, may have slowed Earth
considerably. It would not be surprising to find Earth-like planets
with periods of a few hours.
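The Earth numbers quoted above follow directly from equations (24) and (25). A sketch:

```python
import math

c     = 2.998e8      # speed of light, m/s
r     = 6.371e6      # Earth radius, m
omega = 7.292e-5     # Earth rotation rate, rad/s

def drift_rate(nu, lat_deg=0.0):
    """Eq. (25): peak diurnal Doppler drift rate in Hz/s."""
    return (r * omega**2 / c) * nu * math.cos(math.radians(lat_deg))

def best_bin(fdot):
    """Eq. (24): optimum bin width (Hz) for a drifting signal."""
    return math.sqrt(fdot)

fdot = drift_rate(1.5e9)   # near the ~0.15 Hz/s quoted in the text
b    = best_bin(fdot)      # roughly 0.4 Hz
print(fdot, b)
```

A planet with a few-hour day would have ω several times Earth's, and since ḟ scales as ω², the optimum bin width would be several times wider.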
There does not appear to be any natural value for b; in fact, we may wish to provide a wide range of resolution bandwidths, perhaps from 0.001 to 1000 Hz, the wider bandwidths being reserved for pulse detection. For early searches of nearby stars, we would assume that beacons are apt to be aimed at us, and if monochromatic, that they are corrected for their Doppler drift. We can correct for ours.
To observe 100 MHz of spectrum at once with millihertz resolution requires an MCSA with 10¹¹ channels, a formidable data processor even with very large-scale integration. One reason to consider pulses as likely beacon signals is that the detection job is made easier.
Now suppose we replace the isotropic antenna on a monochromatic omnidirectional beacon (which cannot be built anyway) with a rotating antenna having a fan beam. The beacon is still omnidirectional, but now pulses are received in each direction each time the beam sweeps by. Each pulse contains as much energy as would have been received from the original beacon in one rotation period. The pulse duration can be arbitrarily short, but the period between pulses should be short compared with a typical observing time. If this is true, the pulses will not be missed, and the effective duty cycle is not reduced.
Such a beacon would be just as detectable as a CW beacon, and the MCSA would be far less complex, because to receive millisecond pulses, for example, 10⁵ channels would suffice. A series of regularly spaced flashes would be every bit as conspicuous as a monochromatic signal, perhaps more so. That is the way we build lighthouses. Polarization reversals between successive flashes could carry the information.
In a beamed beacon, pulses could still be used to advantage. So long as the average power is held constant, the detectability is not reduced; in fact, because of square-law detector performance, it is improved. Ultimately, peak power sets a limit to reducing the duty cycle, but by then the peak power may be 100 times the CW value and the SNR is correspondingly improved.
Although pulses introduce the new dimension of modulation type (i.e., pulse length, period, etc.), it appears possible to cover a wide range of alternatives with a properly designed MCSA. Pulses will not be so short as to be [366] smeared by dispersion in the interstellar medium. Because of their advantages, we should be prepared to detect pulses as well as steady beacons.
We cannot hope, during the search phase, to match precisely the receiver filter to the incoming signal. The best we can do is to provide a variety of filters that approximately match a wide range of putative signals. The mismatch loss will then depend on the degree of mismatch, the nature of the signal, and how we combine the outputs of various filters.
Assume first that the signal pulse has the shape

s(t) = sin 2πbt / 2πbt    (26)

Its spectrum is then

S(ν) = 1/2b for |ν| ≤ b, and 0 for |ν| > b    (27)

and the pulse energy is

W = ∫ s²(t) dt = 1/2b    (28)

The matched filter has unity transmission out to |ν| = b and is zero thereafter. If instead the filter cuts off at a bandwidth b₁, the noise power will be

N = kTb₁    (29)

If b₁ ≥ b, the pulse is unaffected, so the peak height h = 1. Thus

S/N = h²/N = ρ(2W/kT)    (30)

where ρ = b/b₁ is the mismatch factor. If b₁ < b, the signal becomes

s₁(t) = (b₁/b) sin 2πb₁t / 2πb₁t    (31)

[367] which has the peak height h = b₁/b. Thus

S/N = h²/N = ρ(2W/kT)    (32)

where now ρ = b₁/b is the mismatch factor. Exactly the same result holds for the case of a rectangular pulse and a (sin x)/x spectrum. The mismatch loss (in dB) is therefore

L = −10 log₁₀ ρ    (33)

as shown by the lowest curve in figure 5.
If the signal is a Gaussian pulse

s(t) = e^(−t²/2τ_p²)    (34)

and the filter has the Gaussian response

F(ν) = e^(−ν²/2b₁²)    (35)

the same sort of analysis shows the mismatch loss to be

L = 10 log₁₀ [(1 + ρ²)/2ρ]    (36)

[368] where ρ = b/b₁ or b₁/b is the mismatch factor, b being the bandwidth matched to the pulse. This gives the next higher curve in figure 5.
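The two mismatch-loss laws, the flat-spectrum case of eq. (33) and the Gaussian case of eq. (36), can be compared directly (a sketch; ρ ≤ 1 is the mismatch factor):

```python
import math

def loss_rect_db(rho):
    """Mismatch loss for the flat-spectrum pulse, eq. (33)."""
    return -10.0 * math.log10(rho)

def loss_gauss_db(rho):
    """Mismatch loss for the Gaussian pulse/filter pair, eq. (36)."""
    return 10.0 * math.log10((1.0 + rho**2) / (2.0 * rho))

for rho in (1.0, 0.5, 0.25):
    print(rho, loss_rect_db(rho), loss_gauss_db(rho))
# A matched filter (rho = 1) has zero loss in both cases; for the same
# mismatch the Gaussian pair always degrades more gently.
```

A factor-of-two bandwidth error costs about 3 dB in the rectangular case but under 1 dB in the Gaussian case, which is why modest filter banks can cover a wide range of pulse widths.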
These curves show the degradation in output SNR with filter mismatch. The effect on the input SNR required for a given quality of threshold detection is much less severe. Project Cyclops (Oliver and Billingham, 1973) proposed adding many spectra out of an optical MCSA with various offsets sample-to-sample to align drifting CW signals into the same final cells. An exact analysis of the detection statistics led to figure 11-14 in the Cyclops report, reproduced here as figure 6. It shows the input ratio of signal power to noise power per bin needed to give probabilities of signal detection of 0.5 and 0.99, as a function of the number of samples averaged, when the false-alarm probability per cell is 10⁻¹². Taking the solid curve for a detection probability of 0.5, we see that for a single sample (n = 1) the input signal-to-noise ratio must be.....

[369] .....14.34 dB, while for 100 samples we need −0.57 dB, a sensitivity improvement of 14.91 dB. But bins 1/100 the width would have given a 20-dB improvement. Hence the "mismatch" loss of averaging 100 successive samples from bins 100 times too wide is 20 − 14.91 = 5.09 dB, as shown by the lower dashed curve in figure 5. So long as the signal energy is the same and the noises are independent, the same curves should apply whether we add successive time samples or adjacent bins. If we average 2 and 4 successive samples as well as 2 and 4 adjacent bins, the mismatch loss will be at most 0.86 dB and each MCSA output will cover a 16:1 range. Two MCSA outputs having bin widths in the ratio 32:1 will cover a 512:1 range, while three will cover a frequency range of 16,384:1. The effective bin widths of a typical system of this sort are shown in table 2. In order to center on the times of occurrence and on the pulse spectra, the time and frequency averages should be running averages; that is, each new average should be formed by dropping the oldest sample and including the latest, or by dropping the lowest bin in the sum and including the next highest.
[Table 2. Effective bin widths of a typical system.]
If, instead of adding the powers of successive or adjacent samples, one adds the complex amplitudes, a variety of new filters can be synthesized. Suppose the complex amplitudes of two successive time samples are added. Then, for a signal centered in the bin, the amplitudes will be in phase and the total amplitude will double. But for a signal at the bin edge, the amplitudes will cancel. We have, in effect, an MCSA with twice the resolution but with every other bin missing. To recover the missing bins, we must also subtract the successive samples. This is similar to doing a two-point transform; in fact, if the samples are added with phase shifts of ±i, this is a two-point transform.
The effective filter response for an MCSA when the time window is rectangular (input applied at constant level for a time T) is

F(x) = sin x / x,  x = πT(ν − ν₀)    (37)

[370] where ν₀ is the center frequency of the bin. If two adjacent bins are added with the weighting 1, 1, the equivalent filter response is

F(x) = sin x / x + sin(x − π)/(x − π)    (38)

which matches a pulse having the envelope cos(πt/T), with t measured from the center of the observing interval. Or if three adjacent bins are added with weightings 1/2, 1, and 1/2, we obtain

F(x) = ½ sin(x + π)/(x + π) + sin x / x + ½ sin(x − π)/(x − π)    (39)

which matches a pulse of the form cos²(πt/T). As the number of bins added is increased, the matching pulse gets narrower and narrower, and the MCSA becomes more and more insensitive to the pulse if it occurs near the ends of the observing period. To provide outputs sensitive to pulses at n adjacent times in the interval T, the n bins must be added with n different relative phase shifts. This amounts to doing an inverse n-point transform.
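The equivalence between bin weightings and time envelopes can be verified numerically: adding adjacent bins with weightings 1/2, 1, 1/2 is identical to integrating the signal against the raised-cosine window 2 cos²(πt/T). A sketch, with the time origin at the window center:

```python
import cmath, math

T = 1.0
N = 4000
ts = [-T/2 + (j + 0.5) * T / N for j in range(N)]   # quadrature points

def bin_output(s, k):
    """Bin k of a rectangular-window transform: the integral of
    s(t) exp(-2 pi i k t / T) over the observing window."""
    return sum(s(t) * cmath.exp(-2j * math.pi * k * t / T) for t in ts) * (T / N)

def three_bin_sum(s):
    """Adjacent bins -1, 0, +1 added with weightings 1/2, 1, 1/2."""
    return 0.5 * bin_output(s, -1) + bin_output(s, 0) + 0.5 * bin_output(s, 1)

def cos2_weighted(s):
    """Direct integration against the 2 cos^2(pi t / T) envelope."""
    return sum(s(t) * 2 * math.cos(math.pi * t / T)**2 for t in ts) * (T / N)

# Any test signal gives the same answer both ways:
s = lambda t: math.exp(-(8 * t)**2) * math.cos(20 * t)
assert abs(three_bin_sum(s) - cos2_weighted(s)) < 1e-9
```

The identity holds sample by sample, since ½e^(2πit/T) + 1 + ½e^(−2πit/T) = 1 + cos(2πt/T) = 2cos²(πt/T); the numerical check merely confirms it.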
All these variations appear to require more additions than merely adding the bin powers as suggested earlier. However, further study might reveal tricks similar to the fast Fourier transform that would simplify the whole process.
Many surveys of the radio sky have been carried out by astronomers, but their low sensitivity, restricted frequency ranges, and limited data processing have excluded the possibility of detecting ETI signals. Let us see what kind of sensitivity and frequency coverage is achievable in a SETI sky survey.
If t_s is the time allotted for a complete scan of the sky for one frequency band, then from equation (16) the observing time per direction is

t = t_s/n = t_s/g    (40)
The energy received during one observation from a CW beacon delivering a flux Φ is

W = ΦAt = ΦAt_s/g = Φ(λ²/4π)t_s    (41)

which is the energy that would be received by an isotropic antenna in the search time t_s. To avoid too many false alarms, we need to set a threshold at W = mkT, where m ≈ 30. Thus the sensitivity of an all-sky survey is

Φ = 4πmkT/λ²t_s    (42)
We note that, for a given search time and threshold, the sensitivity is proportional to λ², because the longer the wavelength, the broader the beam in both dimensions. Antenna area does not affect the sensitivity. However, from equation (40) we see that the MCSA matched bandwidth is

b = 1/t = g/t_s = 4πA/λ²t_s    (43)
The total bandwidth per scan is B = Nb, where N is the number of MCSA channels. A good figure of merit for a survey system is the product of sensitivity and bandwidth:

    B/F = N A/m k T ,    (44)

which depends only on the antenna size, the data-processor power, and the noise temperature. Search time is a variable that can be used to trade bandwidth against sensitivity for a fixed antenna and MCSA.
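The claimed invariance can be checked numerically: evaluating equations (42) and (43) for two different scan times gives the same sensitivity-bandwidth product NA/mkT. The parameter values below are illustrative, not taken from the text:

```python
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
m, T = 30, 10                   # threshold factor and system temp (K)
N = 3e8                         # MCSA channels
A = 2500 * math.pi              # 100-m dish area, m^2
lam = 0.26                      # wavelength, m (arbitrary choice)
for t_s in (1e6, 4e6):          # two different scan times
    F = 4 * math.pi * m * k_B * T / (lam**2 * t_s)   # eq. (42)
    b = 4 * math.pi * A / (lam**2 * t_s)             # eq. (43)
    B = N * b
    # Prints the same value both times: N*A/(m*k*T), independent of t_s.
    print(f"{B / F:.3e}")
```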
With our present technology, only a small fraction of the microwave window can be received at one time. Several scans of width B are needed to survey all frequencies within the window. The time required to do this depends on our choice of sensitivity as a function of frequency. Let t_0 be the time devoted to the lowest frequency scan, centered at f_0.
If we keep the time per scan constant, then from equation (42) the sensitivity will fall off as f^2. The antenna diameter must be proportional to λ (area proportional to λ^2) to keep the beamwidth constant. The time required for s scans covering the total bandwidth sB is simply

    T_0 = s t_0 .    (45)
If we make the time per scan proportional to the band-center frequency and make the antenna diameter proportional to λ^(1/2) (area proportional to λ), the sensitivity will be proportional to f, and the time t_k for the scan with center frequency f_0 + kB is

    t_k = t_0 (1 + kB/f_0) .    (46)

The total time is

    T_1 = t_0 Σ_{k=0}^{s−1} (1 + kB/f_0) = t_0 [s + (B/f_0) s(s−1)/2] .    (47)
If we make the time per scan proportional to the square of the band-center frequency and keep the antenna diameter and sensitivity constant, then

    t_k = t_0 (1 + kB/f_0)^2 ,    (48)

and

    T_2 = t_0 Σ_{k=0}^{s−1} (1 + kB/f_0)^2 .    (49)
Let us assume a 100-m-diameter antenna. At f_0 = 1.15 GHz its gain is 1.45 x 10^6, so we have this many directions to examine. If we allow 1 sec per direction, then t_0 = 17 days and b = 1 Hz. If B = 300 MHz, then N = 3 x 10^8, which is a formidable MCSA, but a feasible one. Now B/f_0 = 0.2609. The time to execute s scans for the three cases is shown in figure 7. To cover the microwave window from 1 to 10 GHz requires 30 scans and times T_0 = 30 t_0 = 1.4 yr, T_1 = 143.5 t_0 = 6.7 yr, and T_2 = 839 t_0 = 39 yr.
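These totals follow directly from the scan-time formulas (equations (47) and (49)). The sketch below reproduces them, assuming (as the quoted ratio implies) that the first scan is centered at f_0 = 1.15 GHz, so that B/f_0 = 0.3/1.15 = 0.2609:

```python
# 30 scans of width B = 300 MHz cover the 1-10 GHz window; the first
# scan is centered at f_0 = 1.15 GHz (an inference from B/f_0 = 0.2609).
s, r = 30, 0.3 / 1.15                               # r = B/f_0
t_const = sum(1 for k in range(s))                  # time per scan constant
t_lin = sum(1 + r * k for k in range(s))            # time per scan ~ f
t_quad = sum((1 + r * k) ** 2 for k in range(s))    # time per scan ~ f^2
print(f"{t_const:.1f} {t_lin:.1f} {t_quad:.1f} (units of t0)")
# -> 30.0 143.5 839.1 (units of t0), matching the quoted totals
```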
Taking m = 30, T = 10 K, and λ = 0.2609 m, we find from equation (42) that F = 5 x 10^-25 W/m^2. Figure 8 shows sensitivity vs frequency and the frequency coverage of surveys made with the above system with constant sensitivity and with sensitivity proportional to f and to f^2. Only the last allows full coverage of the terrestrial microwave window.
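As a check on equation (42), the quoted sensitivity can be recomputed from the survey parameters; the per-scan time below is the 1.45 x 10^6 beam directions times 1 sec each:

```python
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
m, T = 30, 10                   # threshold factor and system temp (K)
lam = 0.3 / 1.15                # wavelength at 1.15 GHz, ~0.2609 m
t_s = 1.45e6                    # one scan: 1.45e6 directions x 1 s
F = 4 * math.pi * m * k_B * T / (lam**2 * t_s)   # eq. (42)
print(round(F * 1e25, 1))       # -> 5.3, i.e., ~5e-25 W/m^2 as quoted
```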
From figure 9 we see that, at its most sensitive, the above sky survey would detect a 1-MW beacon at 1.5 GHz beamed at us with a 64-m antenna out to a range of about 40 light years. Such beacons, if located near likely stars, would be detected by a targeted search at much greater range. Beyond 100 light years we appear to them as only one of thousands of likely stars, so it is difficult to see why any beacon farther away than this would be beamed at us. At this range an omnidirectional beacon would have to radiate 5 x 10^12 W (i.e., the output of 5000 nuclear plants) to be detectable by this sky survey. We can improve the sensitivity by observing for a longer time in each direction, but this reduces the spectrum coverage proportionally.
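The 5 x 10^12 W figure is just the survey sensitivity scaled by the surface area of a 100-light-year sphere, as a quick calculation confirms:

```python
import math

ly = 9.4607e15                      # meters per light year
F = 5e-25                           # sky-survey sensitivity, W/m^2
P = 4 * math.pi * (100 * ly) ** 2 * F   # omnidirectional power needed
print(f"{P:.1e} W")                 # -> 5.6e+12 W, i.e., ~5e12 W
```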
The orthodox approach to SETI is to concentrate the search on "good" stars, that is, main-sequence F, G, and K stars, beginning with the closest and progressing outward to greater and greater ranges. Most of the effort to date has been aimed at detecting monochromatic or slowly drifting CW signals. In view of the foregoing discussion, pulse-signal detection probably also belongs in a targeted search.
The search system for the targeted mode is identical with that for the sky survey in most respects. Both involve a large antenna, a low-noise receiver, and an MCSA. The difference is that for the targeted search the MCSA should contain a variety of bin widths, from wide bins (>100 Hz) to look for pulses, to very narrow bins (<0.1 Hz) to look for monochromatic signals for times much longer than a sky survey allows.
A targeted search does not posit any bizarre life form or any technological prowess that far transcends our own. Essentially, it assumes that intelligent life is most likely to be found on terrestrial planets circling Sun-type stars and that the powers they will devote to beacons are consistent with our technology and economic values.
The minimum detectable flux for a targeted search is simply

    F = m k T b/A ,

where m is to be found from figure 6. If we choose b = 1/32 Hz and integrate 32 samples, the observing time per star is 1024 sec and, from figure 6, we will have a 50% chance of detecting a signal if m ≈ 1.8, that is, if the threshold corresponds to an input signal-to-noise ratio of 2.62 dB. For a 100-m antenna, A = 2500π m^2. Taking T = 10 K, we find F ≈ 10^-27 W/m^2.
With this sensitivity we see from figure 9 that we can just detect an EIRP of 1010 W (10 kW at 1.5 GHz beamed by a 64-m antenna) at a range of 100 light years. Thus we can detect beacons of modest power around any of the 2000 good stars within that range.
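Assuming the minimum detectable flux is F = mkTb/A (the threshold power mkTb divided by the collecting area), both the sensitivity and this limiting range can be verified:

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
m, T = 1.8, 10                   # threshold (fig. 6) and system temp (K)
b = 1 / 32                       # bin width, Hz
A = 2500 * math.pi               # 100-m dish area, m^2
F = m * k_B * T * b / A          # minimum detectable flux, W/m^2
ly = 9.4607e15                   # meters per light year
R = math.sqrt(1e10 / (4 * math.pi * F)) / ly   # range for EIRP = 1e10 W
print(f"{F:.1e} W/m^2 at {R:.0f} ly")   # -> 9.9e-28 W/m^2 at 95 ly
```

The ~95 light years is consistent with the "about 100 light years" read from figure 9.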
The time to scan all these stars is 2 x 10^6 sec, or 24 days. In 3 years, 45 different frequency bands could be searched. Assuming the MCSA again has 3 x 10^8 bins, 10 MHz would be searched per scan, for a total of 450 MHz. This is enough to cover the water hole and other cardinal frequency bands.
The Cyclops concept was to start with a single 100-m antenna and add others in a phased array, thus increasing the sensitivity for each successive search. We see from figure 8 that several hundred elements would be needed before leakage signals like our own UHF-TV signals became detectable. Early SETI searches using existing antennas will have a sensitivity range of about 10^-26 to 10^-27 W/m^2 (as shown in fig. 9). These searches are heavily dependent for their success on the existence of beamed or powerful beacons.
Given present technology, or any we can foresee, the only practical way to search for intelligence outside our Solar System is to listen for (and possibly later to radiate) coherent signals in the decimeter range of the radio spectrum. Current low-noise receivers have only about 2 K noise temperature, which raises the total system temperature to perhaps 10 K in the best part of the spectrum. Present receivers are therefore only about 1 dB short of perfect. Present data-processing technology makes multichannel spectrum analyzers with perhaps 10^8 bins affordable. Computer hardware costs (memory and microprocessors) are dropping rapidly. Very powerful MCSAs (10^9 bins) will soon cost under $1 million. It is timely to begin tests using state-of-the-art hardware and existing antennas so that the proper data processors for SETI can be developed as the art matures.
- Cocconi, Giuseppe; and Morrison, Philip: Searching for Interstellar Communications. Nature, vol. 184, 1959, pp. 844-846.
- Oliver, Bernard M.; and Billingham, John, eds.: Project Cyclops, a Design Study of a System for Detecting Extraterrestrial Intelligent Life. NASA CR-114445, revised edition, 1973.