Since the key detailed technical findings of Project Orion are summarized at the end of each chapter, the conclusions delineated below are of a more general nature. To be sure, many of these conclusions are not new, and many must await the results of further studies before they can be generally accepted. However, because this study constitutes the most in-depth analysis to date of the problem of detecting other planetary systems, there is reason for some confidence in the results.
A search for other planetary systems involves many diverse scientific fields. This involvement ranges from the incidental (i.e., contributions to astronomy unrelated to a search per se, but derived from observations made possible with instrumentation developed for a search) to the intentional. Knowledge of the frequency of occurrence and distribution (as a function of, say, spectral type of star) of planetary systems would provide a valuable test of our concepts of the process of star formation. Such knowledge, especially knowledge concerning which specific stars have planetary companions, would be very useful (but not essential) to any attempt to search for extraterrestrial intelligence. Finally, a search will provide us with perhaps the only means whereby we can test various hypotheses concerning the origin of the solar system.
There exists a rather broad range of observational techniques that might be used to detect other planetary systems. Each of the three techniques considered during Project Orion appears to be feasible in terms of conducting a significant search. There are other potentially useful techniques (e.g., radial velocity observations) that have not been considered here. The nature and scale of technical problems vary markedly among techniques. However, it is important to note that the attainable precision of ground-based astrometry can be improved by more than an order of magnitude.
 Even if it is decided that the more standard methods (e.g., astrometry and radial velocity) are to be emphasized as search techniques, new instrumentation must be developed and built. Such activity could range from new detectors to new telescopes, with the latter being desirable and likely.
If a search program yields positive detection, and if the proper instrumentation is used to conduct the search, then the information content of the observations could reveal useful data concerning discovered planets, such as planetary mass, temperature, orbit, and, with less certainty, planetary size and rudimentary inferences concerning planetary atmospheres.
A comprehensive search for other planetary systems is timely, in both a technological and a philosophical sense. State-of-the-art technology, or technical advances soon to be achieved, is all that is required to undertake a search. There is a need for mankind to open new scientific vistas that offer challenge and excitement not only to those involved in exploring these vistas, but to all mankind. Certainly SETI is one such exploration, and a search for other worlds is another. Astrophysicist Martin Rees has been quoted as saying that in a SETI endeavor "absence of evidence is not evidence of absence." However, one of the most exciting aspects of a search for other planetary systems is that once the search is done, we will have definite answers.
The choice of technique, instrumentation, and magnitude of a search effort is dictated by which of two questions one seeks to answer. If one only wishes to know whether there exist planetary systems around stars close to the Sun, and if he is not concerned with obtaining statistical information on planetary systems as a general phenomenon and all that that statistical knowledge portends for our understanding of the origin of the solar system, a rather modest effort may provide the answers. If one wishes to address the more fundamental, quantitative aspects of the problem, ultra-precise instrumentation is required, leading to a larger scale effort. Even a large-scale effort, involving telescopes in space, is likely to cost no more than a typical planetary spacecraft mission.
Our final conclusion is self-evident: more studies are needed. There are many facets to a search program: scientific, technological, and programmatic. Studies should be carried out to determine whether the atmospheric limitations to astrometric precision are those stated here. Further studies of the type carried out by the Greenstein and Black workshops are essential. Finally, because the primary scientific question to be addressed by a search for other planetary systems is the origin of the solar system, it seems appropriate for NASA to take a leading role in the search.
Some recommendations for future activities related to the search for other planetary systems are:
1. A search program for other planetary systems, with its own budget and funding, should be included in NASA activities.
2. Comparative studies of techniques, both ground-based and space-based, should be funded and undertaken as soon as possible.
3. University and government scientists, both in the United States and in foreign countries, should be made aware of the potential of a search program and be encouraged to participate.
4. If the studies recommended in item (2) indicate that a ground-based program can provide a statistically significant search, steps should be taken to identify a suitable observing site (if no existing facility is suitable).
5. Systems design studies should be funded to identify key technology needs or problems.
6. Any new telescope facilities constructed for a search, whether on the ground or in space, should have ample design input from astronomers who might wish to utilize them for other types of observational studies.
7. Both a direct and an indirect search technique should be employed in a comprehensive program. Although either could be used alone to carry out a search, the use of both would provide maximum search sensitivity in addition to maximum information return concerning any discovered planets.
The conclusions and recommendations listed above represent a consensus of the Directors and Advisors of Project Orion at the project's conclusion. Although most of the "Orion people" would agree with many of the conclusions and recommendations, it would be misleading to represent the statements in this chapter as a consensus of the entire Design Study Group.
The turbulence near the surface of Earth is very complex, due in part to wind shear in a boundary layer a few kilometers deep. Thus energy input to the turbulent velocity field involves a variety of distance scales. This kinetic energy cascades to smaller sizes, by a little-understood mechanism of mutual breakup of air parcels not in equilibrium with their surroundings, until viscous dissipation slows the mechanical motions enough for heat conduction to smooth the temperature inhomogeneities. Probably because of the variety of the magnitudes with which energy enters the turbulence, the dissipation rate is inhomogeneous in space. Thus, for example, Merceret (ref. 44) reported that this intermittency in the energy dissipation amounted to two orders of magnitude or more over scales ranging from 100 m to several kilometers in airplane flights at altitudes of 150 and 900 m. The intermittency is manifest in both velocity and temperature fluctuations. At 1-m altitude, it is evident at centimeter to meter dimensions in the temperature data of Lawrence et al. (ref. 45). But in balloon flights of thermometers to 15 km, Bufton et al. (ref. 46) and Bufton (refs. 47 and 48) found that vertical intermittency did not involve layers more than about 100 m deep.
An important simplification in the problem is that the decay of turbulence involving air parcels small in comparison to the altitude of the parcels closely resembles that in isotropic homogeneous turbulence.
If the steady, smooth flow of air in a wind tunnel is interrupted by a grid, the velocity field is strongly perturbed near the grid but relaxes to smooth flow far behind it. A statistical treatment of the decay of the turbulence was begun by Taylor (ref. 49) and taken up by many others later. As noted in the review of Corrsin (ref. 50), a major effort has been a search for the shape of the correlation function describing how fluctuations at two nearby points are related (or, equivalently, its Fourier transform, the "spectrum"). In the example of the grid, two points separated by one or more grid spacings tend to be unrelated in all fluctuating quantities such as velocity, density, and temperature. But two points much nearer than a grid spacing will see very similar fluctuations. More generally, the complete statistical problem would require knowledge of the mean strength of turbulence and, besides the two-point correlations, all higher N-point correlations as a function of space and time.
The difficulty of the mathematical problem is akin to that in quantum mechanics, as pointed out by Molyneux (ref. 51). Approximation methods are so far essential. Therefore, the approximations for fluctuations of various sizes must be spliced together, using experiment as a guide to the selection of better procedures. Most notably, a useful approximation for the correlation function is hard to define near the size scale at which energy is being injected into the turbulence, the so-called "outer scale" L0. Generally, the turbulence departs markedly from both isotropy and homogeneity for point separations larger than about L0. Von Karman (ref. 52) showed that, at small wave numbers (large sizes), the spectrum must begin like (kL0)^4. At wave numbers larger than about 1/L0, the spectrum is proportional to (kL0)^-5/3, as found by Kolmogorov (refs. 53 and 54), Onsager (ref. 55), and von Weizsacker (ref. 56). Von Karman proposed the simplest possible interpolation formula for the spectrum,

(kL0)^4/[1 + (kL0)^2]^(17/6)
and reported experimental wind-tunnel data that confirmed the usefulness of this formula.
In the same spirit, Reiger (ref. 6) noted from the result of Corrsin (ref. 50) that the power spectrum of temperature fluctuations should begin like (kL0)^2 and follow the -5/3 law in the inertial subrange between the outer-scale and inner-scale wave numbers. He proposed that the one-dimensional (radial) power spectrum might usefully be taken to be

G1(k) ∝ (kL0)^2/[1 + (kL0)^2]^(11/6)    (A2)
Experiments with four thermal probes 2 m above the ground reported by Pasqualetti et al. (ref. 57) generally support the accuracy of the Reiger power spectrum. They find that L0 = 1 to 3 m in various averages, where 2 m might be expected. Convergence of the averages was so slow that averaging times of 10 to 30 min were necessary. This slow convergence should not be considered surprising in view of the known inhomogeneity of turbulence in the atmosphere. A mathematical model that assumes homogeneity may predict certain parameters of the atmosphere rather well in long-term averages, but the intermittency suggests the caution that events seemingly rare according to the formalism may be not so rare in experiment.
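The two interpolation spectra above are fixed entirely by their limiting slopes, so a short numerical check is easy to write. The Python sketch below encodes both forms; the exponents 17/6 and 11/6 are the values forced by the stated (kL0)^4, (kL0)^2, and (kL0)^-5/3 limits (an inference from those limits, not formulas quoted from the report):

```python
import math

def vonkarman_energy(k, L0):
    """Von Karman-type interpolation: ~(kL0)^4 at small k, ~(kL0)^(-5/3) at large k."""
    x = k * L0
    return x ** 4 / (1.0 + x ** 2) ** (17.0 / 6.0)

def reiger_temperature(k, L0):
    """Reiger-type interpolation: ~(kL0)^2 at small k, ~(kL0)^(-5/3) at large k."""
    x = k * L0
    return x ** 2 / (1.0 + x ** 2) ** (11.0 / 6.0)

def loglog_slope(spectrum, k1, k2, L0=1.0):
    """Logarithmic slope of a spectrum between two wave numbers."""
    return (math.log(spectrum(k2, L0)) - math.log(spectrum(k1, L0))) / math.log(k2 / k1)
```

A slope check at extreme wave numbers confirms that each form rises with the stated low-wave-number power and rolls over to -5/3 in the inertial range.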
We shall next estimate the "standard error" in optical path lengths through the atmosphere, which will depend on the mean strength of turbulence distributed through the atmosphere. Because the strength of turbulence is itself a random variable of large range, we must guard against any tendency to discuss "two standard error confidence levels" and so on.
When light of known coherence properties enters a turbulent medium, its coherence properties are modified. Like the mathematical problem of characterizing the turbulence, the statistical problem of describing the wave field has no known exact solution. Approximation methods are again resorted to, with much debate among workers about the domain of validity of the approximations. In a review of approaches to a solution of the relevant nonlinear stochastic differential equation, Fried (ref. 58) has remarked, "If it were not so well known that light does somehow manage to propagate in the atmosphere, we might very well have heard questions raised concerning the existence of a solution!"
The present study is intended primarily as a feasibility study for interferometric measurements through the atmosphere. In the Michelson stellar interferometer, for example, light from two finite areas is combined so as to study the correlation of the electric fields arriving at the two apertures. The apertures are finite, so each aperture averages over the small details of the electric field distribution that might be present. If we could follow the path of the incident light beams through the atmosphere to the final defining apertures, we could, in principle, calculate what details of the index of refraction transverse to the beams are thereby averaged out. This problem has been treated by Cook (ref. 59) by a method that gives a correct wave-optical treatment for the centroid of a finite-width beam; the treatment is valid over long propagation paths. He uses Ehrenfest's theorem to transfer a result established in quantum theory to the macroscopic wave-optical domain: the average position of a particle is calculable from Newton's second law of motion when the force is taken to be the average value of the negative gradient of the potential. The analogous centroid of the beam is the first moment of the intensity. He shows that this beam centroid is governed by the differential equation for a paraxial ray in a weakly inhomogeneous medium, provided that the true refractive index over the beam is replaced by a smoothed version, where the smoothing function is the intensity profile of the beam.
At a site of low natural turbulence and at times of especially low turbulence in the atmosphere, which would be selected for experimental purposes, the motion of the beam centroid can be estimated from the formalism of Cook. The same formalism gives an estimate of the variance of the beam angle of arrival. Without going into details about assumptions concerning the turbulence distribution until later, the turbulence that gives angles of arrival mostly from a 1.0-arcsec range of directions also causes the beam centroid to wander by ±0.5 cm. Since the effective center of the beam then moves an order of magnitude less than the size of each aperture of interest in a Michelson stellar interferometer, we shall be well justified in using a ray-optics formalism. Furthermore, because the inner scale in the first 10 km of the atmosphere is small relative to the interferometer apertures, we need not concern ourselves with modifications of the power spectrum due to viscous effects.
The above assumptions are just those necessary for the validity of the calculations of Hufnagel and Stanley (ref. 60, here called HS), whose notation we shall follow. Introduce the vector notation:
The wave equation for the propagation of a scalar V through a lossless nonhomogeneous medium is

∇²V − (n²/c²)(∂²V/∂t²) = 0    (A3)
where ∇² is the three-dimensional Laplacian operator, c is the speed of light in a vacuum, and n is the local index of refraction. The normalized fluctuating part of n is defined by

N = (n − ⟨n⟩)/⟨n⟩
where ⟨n⟩ is the local time average of n. Without noticeable loss of accuracy, we take ⟨n⟩ = 1. If N is everywhere zero, a solution of equation (A3) corresponding to a plane wave propagating in the
positive z direction is

V = A exp[i(kz − ωt)]
where λ is the wavelength of light, k = 2π/λ, ω = kc, and A would be a constant. But in the presence of nonzero N, A is not constant but responds to the index-of-refraction variations according to
We shall be interested, however, in the average coherence function
according to HS. Here h is the altitude of the tropopause for a vertical integration path. Indeed, our central interest is in the difference in two optical paths separated by B:

L2 = ∫ N(x2, z) dz − ∫ N(x1, z) dz, with both integrals taken from 0 to h and |x2 − x1| = B
 The mean value for L2 vanishes as an ensemble average, which we shall take to be a long-term average. However, the variance in L2 is calculated from
At this point, we introduce the covariance function for the random field N:
and the distances z = z2 - z1, , whence
where u2 = z2 + B2. We have indicated that the covariance parameter a will generally be a slow function of altitude. On the other hand, the covariance function C vanishes rapidly when the distance between two points is larger than the outer scale L0, which is always much smaller than the tropopause height h. We are then led to introduce normalized functions F that are line integrals over the covariance function at a "miss distance" x:
where b is chosen so that F(0) = 1. With no substantial error, we may approximate equation (A11) as follows:
We know from the flights of Bufton that the z (altitude) dependence of a2(z) is usually approximately exponential, with a scale height of about 4 km. The length b is roughly the outer scale L0, which is known from many experiments, including that of Pasqualetti et al. (ref. 57), to vanish linearly at ground level. There seems to be no other experimental evidence on the altitude dependence of the outer scale, except that the outer scale is not likely to exceed the layer thicknesses found by Bufton. A suitable model for the present purpose is a linear increase of L0(z) up to a saturation value that holds for the remainder of the atmosphere above altitude h1. The function F(x) can be evaluated and stored. If the integral in equation (A13) is done in units of L'0, the saturation value of L0, then F(B) can be evaluated by the scaling relation x = BL'0/L0(z). On the other hand, we shall argue below that L'0 is on the order of 100 m, less than 1 percent of h, so it is not likely that any great error can be made by taking the outer scale constant all the way to the ground. In that case, b[1 − F(B)] is not a function of altitude, and the remaining integral over altitude gives
where H is the exponential scale height for a2(z)
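The altitude models used here are simple enough to state in a few lines. The Python sketch below encodes the exponential turbulence-strength profile and the linearly saturating outer scale; H = 3.75 km and a2(0) = 4×10^-16 m^-2/3 are the values quoted later with figure 7, while the saturation altitude h1 and the saturation value L'0 = 100 m are illustrative placeholders:

```python
import math

H = 3.75e3      # exponential scale height of a2(z), m (value used with figure 7)
A0 = 4e-16      # a2(0) in m^(-2/3), quiet dawn conditions (Bufton et al., ref. 46)
L0_SAT = 100.0  # assumed saturation value L'0 of the outer scale, m
H1 = 100.0      # hypothetical altitude at which L0(z) reaches saturation, m

def a2(z):
    """Exponential model for the turbulence strength at altitude z (m)."""
    return A0 * math.exp(-z / H)

def outer_scale(z):
    """Outer scale: linear growth from zero at the ground, saturating at L'0."""
    return L0_SAT * min(z / H1, 1.0)

def integral_a2(zmax=16e3, n=100000):
    """Trapezoidal altitude integral of a2(z); analytically a2(0)*H*(1 - exp(-zmax/H))."""
    dz = zmax / n
    total = 0.5 * (a2(0.0) + a2(zmax))
    for i in range(1, n):
        total += a2(i * dz)
    return total * dz
```

A check of the altitude integral reproduces the analytic result a2(0)·H·(1 − exp(−zmax/H)), the "effective path" that appears in the variance estimate.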
The fact that the two-aperture optical path difference L2 has a variance that is roughly 2L'0 · 2H · a2(0) at large B, where F must vanish, suggests a simple interpretation. The effective path length for two paths is 2H at a turbulence strength of a2(0). If we had to estimate the variance in L2 from 2H/2L'0 regions that are independent of each other, each of size 2L'0, we would calculate
which is just the present result.
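The independent-regions argument is a random walk in optical path, and it can be verified with a toy Monte Carlo. In the Python sketch below, the cell count and per-cell variance are arbitrary illustrative numbers; the point is only that the variance of the summed path difference grows linearly with the number of independent cells, as the estimate above assumes:

```python
import random

def path_difference_variance(n_cells, cell_var, trials=20000, seed=1):
    """Monte Carlo model of L2 as a sum over independent turbulent cells,
    each contributing a zero-mean path increment of variance cell_var."""
    rng = random.Random(seed)
    sigma = cell_var ** 0.5
    total = 0.0
    for _ in range(trials):
        L2 = sum(rng.gauss(0.0, sigma) for _ in range(n_cells))
        total += L2 * L2
    return total / trials
```

Doubling the number of cells doubles the variance, which is the content of the simple estimate above.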
Suppose we add the idea that not all of the independent regions are at the same stage of turbulent decay; then we might even expect that the turbulence strength a2 in different regions will appear to be variable. Lin (ref. 61) pointed out evidence from wind-tunnel data which called for a dissipation rate that was an explicit function of time after the energy injection. The unpleasant aspect of his theory for the two- and three-point correlation functions is that, at the beginning of the decay process, the functions are self-preserving in shape only at small r/L0, whereas late in the decay process the whole curve is fixed in shape. This means that the thermal spectrum is apt to deviate from equation (A2) in a manner that depends explicitly on the current value of a2 observed in the same region. A tendency of this sort is clearly visible in the airplane flight data of Merceret (ref. 44). In Merceret's figure 6 (ref. 44), for example, there is a clear tendency for high-turbulence power spectra to follow the k^-5/3 law. But the low-turbulence spectra follow it only at high frequencies. At low frequencies, the deviations from the k^-5/3 law are in the sense of the interpolation formula (A2). In terms of the concepts suggested by Lin (ref. 61) and von Karman (ref. 52), these data suggest that the higher-turbulence regions be thought of as longer-developed regions, so that both the accuracy of the -5/3 law has improved and the total energy transferred into high wave numbers by the action of Reynolds stresses has had time to increase.
For present purposes, we do not need a highly accurate shape for the two-point correlation function C(r) in equation (A12) because of the line integral with respect to altitude. We shall use equation (A2) for the power spectrum of the temperature fluctuations and try to choose L0 from experiment so as to average the corresponding correlation functions C(r/L0) along the line of sight over all the states of turbulence development.
The three-dimensional power spectrum corresponding to G1(k) is (ref. 6):
from which (ref. 62)
Using C(-r) = C(r) to drop the imaginary part of the exponential and introducing spherical coordinates in k-space so that
we can perform the integral immediately. The polar angle integration requires use of the identity:
where the spherical Bessel functions jm(z) are regular at z = 0 and the Pm are Legendre polynomials. Fortunately, the integral over the polar angle vanishes for all but m = 0:
Thus we find
Because k^-1 G1(k) vanishes at k = 0 and at infinity, an integration by parts and a substitution gives
where ξ = r/L0. The constant C1 is fixed by our desire to have C(0) = 1, so that
 From tables of cosine transforms (ref. 63)
is defined in terms of the modified Bessel function:
A useful series for small ξ is
At large ξ, an asymptotic expansion is
where, in the sum to M terms, it is convenient to take terms by pairs and stop when the contribution is either smaller than some desired precision or large enough to make the sum increase. In a computer carrying 16 bits, it was found that a crossover point between large and small ξ at ξ = 2.883400 gave results with minimum discontinuity at the crossover. The function is 0.34 at the crossover point and decreases exponentially with ξ.
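Splicing a small-argument series onto a large-argument asymptotic expansion, with a crossover chosen where the two branches agree best, is a standard numerical device and easy to reproduce. The Python sketch below does this for a modified Bessel function K_ν(z); the order ν = 1/3 and the reuse of the 2.8834 crossover are illustrative assumptions, not the report's exact function or normalization:

```python
import math

def K_series(nu, z, terms=30):
    """Small-argument evaluation of K_nu(z) for non-integer 0 < nu < 1, via
    K_nu = (pi/2) * (I_{-nu} - I_nu) / sin(nu*pi)."""
    def I(order, z):
        return sum((z / 2.0) ** (2 * m + order) / (math.factorial(m) * math.gamma(m + order + 1.0))
                   for m in range(terms))
    return 0.5 * math.pi * (I(-nu, z) - I(nu, z)) / math.sin(nu * math.pi)

def K_asymptotic(nu, z, max_terms=12):
    """Large-argument asymptotic expansion of K_nu(z), summed until the terms
    stop shrinking (optimal truncation of the divergent asymptotic series)."""
    total, term = 1.0, 1.0
    for m in range(1, max_terms):
        new = term * (4.0 * nu ** 2 - (2 * m - 1) ** 2) / (8.0 * z * m)
        if abs(new) >= abs(term):
            break  # tail has started to grow; stop here
        total += new
        term = new
    return math.sqrt(0.5 * math.pi / z) * math.exp(-z) * total

def K(nu, z, crossover=2.8834):
    """Spliced evaluation with a crossover between the two branches."""
    return K_series(nu, z) if z < crossover else K_asymptotic(nu, z)
```

For ν = 1/2 the series can be checked against the closed form K_1/2(z) = (π/2z)^(1/2) e^(-z), and at the crossover the two branches agree to a few parts in a thousand.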
When C(ξ) is given by equation (A22), the line integral giving b in equation (A12) can be obtained analytically as a K transform (ref. 63):
The function F(x) was obtained numerically, using equation (A26) to verify the accuracy of the procedure. A file of values ranging over six orders of magnitude was stored for regularly spaced arguments. The function F(x) at arbitrary argument could then be obtained rapidly by interpolation.
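A stored table with fast interpolation of this kind can be sketched as follows. The Python code below builds a geometrically spaced grid covering six orders of magnitude and interpolates linearly in the logarithm of the argument; the function F_demo is a smooth stand-in, since the real F(x) comes from the stored line integrals over the covariance function:

```python
import math

def build_table(f, x_min=1e-3, x_max=1e3, points_per_decade=40):
    """Tabulate f on a geometrically spaced grid covering x_min..x_max."""
    n = int(round(points_per_decade * math.log10(x_max / x_min))) + 1
    step = math.log(x_max / x_min) / (n - 1)
    xs = [x_min * math.exp(i * step) for i in range(n)]
    return xs, [f(x) for x in xs], step

def interp(x, xs, fs, step):
    """Linear interpolation in the logarithm of the argument, with clamping."""
    t = math.log(x / xs[0]) / step
    i = min(max(int(t), 0), len(xs) - 2)
    frac = t - i
    return fs[i] * (1.0 - frac) + fs[i + 1] * frac

def F_demo(x):
    """Smooth stand-in for F(x): 1 at small x, falling off at large x."""
    return 1.0 / (1.0 + x ** (5.0 / 3.0))
```

With 40 points per decade, the interpolation error for a smooth function of this kind is well below a part in a thousand.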
When the baseline B is very much smaller than the outer scale L0, we can use equation (A24) to find the limiting behavior of F(x) as x goes to zero. If u^2 = z^2 + x^2,
Even at B = L0/50, the slope of log[1 - F(x)] versus log x was not 5/3, but 1.40. A detailed study at small x was not carried out, but the interpolation formula for F(x) was taken to be of the form indicated by equation (A27), with the coefficient 1.392 replaced by 1.513 to match the function at L0/50.
The result of the procedure gives the rms values for L2 calculated from equation (A13) shown in figure 7. We note that the average L2 saturates at a baseline equal to the outer scale. At small baseline, the variance of L2 exhibits a power-law dependence on baseline that gradually steepens to the 5/3 slope arranged by the interpolation procedures discussed above. The inclusion of a region of decreased outer scale next to the ground smooths out the L2 curve so that no sharp break in slope is evident at B = L0/50. The L2 values assume that the turbulence strength has an exponential scale of 3.75 km until an altitude of 16 km, where turbulence is taken to vanish. At the ground, we take a2(0) = 4×10^-16 m^-2/3 from the data of Bufton et al. (ref. 46), which correspond to quiet conditions at dawn. The wavelength λ = 0.5 µm is chosen for example only.
The experimental results of Bouricius and Clifford (ref. 64) confirm departures from the 5/3 law of equation (A27). They studied the variance in phase measures in a horizontal light-beam experiment about 2 m above the ground. They found a slope near 1.5 and looked carefully for possible reasons for a discrepancy with the expected 5/3 slope. It seems not to have been considered that the effects of proximity to the saturation at B = L0 are evident down to point separations of 10^-2 L0. The result is confirmed in the calculations of Cook (ref. 59), who studied the variation in angle of arrival of a finite beam. His result is important for a conclusion from the vertical placement of the curves in figure 7.
According to Cook, the variance of angle of arrival for a collimated beam of size T (which he takes to have Gaussian intensity distribution) is of the form
where the dimensionless variable x^2 = (T^2 + 4Li^2)/L0^2 can be taken to be x = T/L0 in the present context, since the inner scale Li is less than centimeter size near the ground. The constant C2 is 1.303. The combination za2 is to be interpreted as the integral over turbulence strength as in equation (A13). The function f(x) is very closely related to [1 - F(B)]/B^2, which, according to equation (A27), is proportional to B^-1/3 at small baseline B. Clearly, L2/B has the significance of a wave-front tilt. Cook gives
from which the first term and the assumptions of figure 7 give an rms angle of arrival of 0.536 arcsec at T = 1 m. But Fried (ref. 65) has pointed out that when imaging through turbulence described by the 5/3 law of equation (A27), an overwhelming part of the deformation of the wave front is simple tilting, especially for long-term exposures. Thus we may say that the assumptions underlying figure 7 imply that a star image at the zenith would be about 2.2 arcsec in angular diameter. The dependence of this result on telescope size T is only an inverse 1/6 power, which is too slow to notice as an explicit aperture dependence on the seeing disk size. The dependence on outer scale drops out completely, unless the second and later terms in equation (A29) are substantial. For example, [x^(1/3) f(x)/3.51]^(1/2) = 0.88 for T = 0.01 L0, which lowers the image diameter to 1.0 arcsec.
Direct imaging to study the product za2 under known micrometeorological conditions has been used, for example, by Wesely and Derzko (ref. 66). They find good agreement between the limiting resolution obtained visually through 8- to 15-cm telescopes and measured za2 products inserted into a formula comparable to equation (A28). The theory is due to Fried (ref. 67), who introduced the length D through the relation
and showed that the limiting resolution of a system not able to respond rapidly to the wave-front tilts could be taken to be equal to the diffraction-limited resolution of a telescope having diameter D. This was confirmed by Wesely and Derzko, who measured the Rayleigh criterion angle
Actually, various coefficients close to 3.34 could be adopted because of the visual determination of the limiting resolution. It is possible, however, that finite outer-scale effects were detected because, as we saw above, the resolution angle tends to decrease if T/L0 is significant. A major part of the data was taken with T = 15 cm and L0 ≈ 1 m, for which the resolution angle might be decreased by a factor of 0.66. The optically inferred za2 product was either 0.77 or 0.82 times the value calculated from temperature and water vapor fluctuations, depending on one's assumption about the correlation between temperature and water vapor. It is also interesting to note from equation (A31) that the wavelength dependence of the seeing spot size is almost absent, in agreement with the absence of color effects in the speckle pattern of a large telescope.
In a paper of crucial interest to the interpretation of figure 7, Breckenridge (ref. 68) reported the fringe motion in a wave-front folding interferometer at conditions of known seeing. He reported the total range of fringe motion that was exceeded no more than 10 percent of the time when the seeing disk was 3 arcsec across.
Assuming Gaussian statistics are accurate enough for the present purpose, we converted this total range to a standard error by dividing by 3 and plotted his results in figure 7. Breckenridge made the point that the placement of his points was consistent with equation (A30), that is, that the variance of L2 was proportional to the 5/3 power of B. The points are rather better fitted by a curve in which 5/3 is replaced by 1.5, as in the results of Bouricius and Clifford (ref. 64) discussed above. At B = D, equation (A30) applies for whatever slope is appropriate. From figure 7 we read that D = 11.5 cm, in perfect agreement with the maximum distance in the pupil for which Breckenridge could discern fringes in a long-term photograph of the fringes. But for this value of D, the Rayleigh criterion angle is 1.32 arcsec, in sharp disagreement with the 3-arcsec seeing disk.
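The conversion from a "total range exceeded 10 percent of the time" to a standard error can be made precise under the Gaussian assumption. The Python sketch below inverts the two-sided normal exceedance probability; it yields a divisor of about 3.29 rather than the rounder 3 used above, a difference of about 10 percent:

```python
import math

def two_sided_exceedance(r):
    """P(|X| > r) for X ~ N(0, 1)."""
    return 1.0 - math.erf(r / math.sqrt(2.0))

def range_to_sigma(total_range, exceed_prob=0.10, tol=1e-10):
    """Standard error implied by a total range exceeded a given fraction of the
    time, assuming Gaussian fluctuations. Bisection solves for the half-range."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if two_sided_exceedance(mid) > exceed_prob:
            lo = mid  # exceeded too often; half-range must be larger
        else:
            hi = mid
    return total_range / (2.0 * lo)  # total range = twice the half-range
```

For a 10-percent exceedance the half-range is 1.645 standard deviations, so the divisor is 3.29.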
The issue posed by this disagreement is that, if seeing were at the 3-arcsec level as he asserted, then the outer scale in most of the troposphere is between 50 and 100 m; that is, the Breckenridge points should be shifted down by a factor of 3 to be on the same basis as the theoretical curves. If seeing somehow were 1 arcsec after all, his data require an outer scale nearer to 10 m.
It is easy to argue that the discrepancy in "resolution angle" is to be expected. According to relation (A2) and for either choice of the outer scale, optical effects grow stronger for heat waves of increasing dimension for baselines in the l-m range. Larger waves are carried past the telescope in longer times. According to the results of Ken Knight (ref. 69), it is unusual that the fringes at more than 0.5-m separation could even be followed visually. Of course, it is to be  expected that the fringes pause briefly at the extremes of their motion. Thus the visual interferometer data heavily weight the extreme fluctuations due to low wave numbers.
On the other hand, the outer extremes of the seeing disk arise from diffraction from the highest wave-number heat waves in front of the telescope. But from figure 7 we saw that L2/B is proportional to B^-1/6, which diverges at small B because we have ignored the effects of the inner scale.
More fundamentally, the presence of this divergence is due to the fact that, when imaging with a continuous ("filled") aperture, ray separations of arbitrarily small amount are of interest. In the passage from discrete-ray separations to continuous-ray separations, we are led to consider not only the correlation of index-of-refraction fluctuations, as in equation (A10), but the correlation of gradients in the index of refraction (ref. 70):
Taking advantage of the average isotropy of the medium,
we see that the correlation of wedge effects depends on the distance along the beam z through the transverse correlation function Q(r), where
If we try to obtain Q from the spectral representation (A2) as in (A15) to (A20), we fail because the integral in equation (A20) does not converge at large wave numbers when, in obtaining C'(r), we replace cos(ru) by -u sin(ru) in the integrand. Various mathematical devices have been used to effect convergence. For example, Cook (ref. 59) used the modification of equation (A2) associated with the name of von Karman, a simple multiplication of equation (A2) by a Gaussian cutoff factor at large wave numbers. This ruse has no physical justification. Corrsin (ref. 50) pointed out that the thermal fluctuation spectrum must steepen from the u^-5/3 relation in the inertial subrange to u^-7 in the viscous region. In the spirit of Reiger (ref. 6), an interpolation formula of simplest form is
The point for the present is that direct imaging tends to weight heavily the smaller heat waves in front of the telescope. Because very many of them project their phase perturbations on the entrance pupil, we realize that we deal with the statistics of very large numbers when we measure the extent of the seeing disk. Therefore, the size of the seeing disk is a relatively robust measure of the turbulence along the line of sight. In contrast, the centroid of a star image, like the phase difference in an interferometer at a separation B approaching the telescope size T, is mainly perturbed by a few large heat waves in the line of sight. The heat-wave size may far exceed T, so it will convect past for a long time.
We therefore accept the conclusion that figure 7 reveals an outer scale in most of the troposphere of 50 to 100 m. The important corollary conclusion is that the rms difference in two vertical optical paths will not exceed about 2 µm during quiet atmospheric conditions, no matter how far the vertical paths are separated. At zenith angle z, the path difference will be less than 2(sec z)^(1/2) µm for modest zenith angles. Note, of course, that the prediction ignores density fluctuations associated with mechanical motions such as mountain waves, weather fronts, and convection cells larger than 100 m.
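As a quick worked example of this scaling (the 2-µm figure is the zenith value just stated, and the plane-parallel sec z dependence is valid only at modest zenith angles):

```python
import math

def path_difference_limit(zenith_angle_deg, rms_zenith_um=2.0):
    """Scale the ~2-micron zenith rms path difference by (sec z)^(1/2),
    assuming a plane-parallel atmosphere at modest zenith angles."""
    z = math.radians(zenith_angle_deg)
    return rms_zenith_um / math.cos(z) ** 0.5
```

At a zenith angle of 60 degrees (two airmasses), the limit grows only to about 2.8 µm.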
In concept, the imaging stellar interferometer (ISI) blends the advantages of focal plane astrometry with the strengths of long-baseline interferometry. In this section we will develop the rigorous transformations and show their application in a now standard statistical reduction technique. We will present here only the rudiments of the proposed algorithm.
As a region transits one of the two instrumental vertical circles, the focal plane detectors of the respective ISI determine a series of relative azimuths for the target star and approximately 20 selected reference stars. To lessen the effects of atmospheric turbulence and scaling, each detector integrates its positional and intensity measurements over exactly the same increment of time. Thus, a list of relative positions is produced for each object over a series of points in time. The rotation of Earth, its orbital motion, and the space motions of the Sun and stars will make each set of observations unique. However, the equations of motion are known, and rigorous transformations are available to bring all observations into a common reference frame.
Two sets of constraints must be satisfied simultaneously by the statistical reduction of the measured reference star positions. The first fits the projected positions of these stars to the best existing set of predicted coordinates. The second constrains the apparent motion of these objects to either a linear or a Keplerian trajectory. With rare exception, known binaries will be avoided as reference stars but, as pointed out elsewhere, at high astrometric precision new discoveries will be common. The reduction process is iterative, with each set of measurements being reduced to the positions predicted by the models and parameters that best satisfied all other observations on the previous pass. The algorithm is essentially that of the central overlap technique (ref. 3). However, the geometry is different.
Figure 62 illustrates the celestial sphere of the observer. Both Earth and the observer are assumed to be of insignificant size and to be placed at the exact center of the sphere. As Earth rotates around the line SCP-NCP, a star at a given declination will approach the first vertical circle (FV) at an angle given by:
where the angle is in radians, φ is the latitude of the site, and FA is the azimuth (east of south) of the first vertical circle. A similar equation exists for the second vertical circle. The dependence of the crossing angle on declination causes the field of an ISI to appear to rotate slightly during a transit. But this effect is quite small for low latitudes and a reasonable range of zenith distances and, if we use short periods of simultaneous integration, could be ignored even near the pole. Table 11 lists the crossing angle over a 70° arc. Near maximum sky coverage for a site within the United States is achieved by the choice of latitude and FA utilized there.
The central overlap technique determines the initial scale and orientation of a field through stellar positions and proper motions obtained from catalogs and wide-angle astrographic plates. These positions, corrected for epoch and equinox, may be placed into the altitude (a)-azimuth (A) system of the ISI via the rigorous transformations:
where a and A are to be converted from radians, α and δ are the right ascension and declination of the object, and T is sidereal time. If the denominator of equation (B2b) is positive, we add 180° to A; if it is negative and the numerator is positive, we add 360°. Equations (B2) are valid for either ISI.
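Equations (B2) themselves are not reproduced in this copy, but the conversion they describe is the standard equatorial-to-horizontal transformation. The sketch below assumes that standard form; `atan2` absorbs the quadrant corrections described above.

```python
import math

def equatorial_to_horizontal(ra, dec, lat, lst):
    """Convert right ascension/declination (radians) to altitude/azimuth
    (radians) for a site at latitude `lat`, at local sidereal time `lst`.
    Azimuth is measured from south (westward positive in this sketch);
    a standard-formula sketch, since equations (B2) are not reproduced
    in the source."""
    ha = lst - ra                                   # hour angle
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(max(-1.0, min(1.0, sin_alt)))   # clamp rounding error
    # atan2 resolves the quadrant automatically, replacing the manual
    # 180- and 360-degree corrections described for equation (B2b).
    az = math.atan2(math.sin(ha),
                    math.cos(ha) * math.sin(lat)
                    - math.tan(dec) * math.cos(lat))
    return alt, az % (2.0 * math.pi)
```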
Each ISI images a region onto its focal plane where the relative positions are measured. This plane is similar to that which contains the photographic plate or multichannel astrometric photometer (MAP) of a more standard astrometric instrument; the coordinates of the positions of objects on the celestial sphere imaged onto the focal plane are similar to the standard coordinates used in that connection.
Where (A0, a0) are the azimuth and altitude of the point at which the optical axis of the respective ISI intersects the celestial sphere, we have the rigorous transformations:
where the first axis is parallel to the horizon, increasing with decreasing north azimuth, and the second axis contains the projection of either FV or SV. The tangential coordinates are given here in radians. Because the Orion system does not track the field under observation, the equatorial coordinates of the tangent point change continuously. Thus the tangential coordinates of each star must be recomputed for each set of observations, approximately 300 times during one observing session. However, the number of computations necessary for 20 or so stars in these and all other involved transformations will hardly challenge the speed of a modern minicomputer.
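The transformations of equations (B3) are not reproduced in this copy; the projection they describe is the standard gnomonic (tangent-plane) projection. A sketch, with the axis conventions above assumed rather than enforced:

```python
import math

def gnomonic(alt, az, alt0, az0):
    """Project a star at (alt, az) onto the plane tangent to the celestial
    sphere at the optical-axis point (alt0, az0); returns tangential
    coordinates in radians. Standard gnomonic projection, given here as a
    sketch since equations (B3) are not reproduced in the source."""
    cos_c = (math.sin(alt0) * math.sin(alt)
             + math.cos(alt0) * math.cos(alt) * math.cos(az - az0))
    xi = math.cos(alt) * math.sin(az - az0) / cos_c
    eta = (math.cos(alt0) * math.sin(alt)
           - math.sin(alt0) * math.cos(alt) * math.cos(az - az0)) / cos_c
    return xi, eta
```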
The predicted instantaneous tangential coordinates now form a lattice for the adjustment of those obtained with the respective ISI during a given observational increment. The effects to be modeled and removed via this adjustment are more numerous than is immediately obvious. By this reduction we will place a set of measured coordinates into a reference frame defined by a subset of the stars that comprise a standard catalog. The zero point and orientation of our reduced positions will be essentially that of the catalog system at the equator and equinox to which we have precessed these positions. The epoch will be that of the observation. The effects of nutation will be removed by a field translation, as will those of general refraction. The first-order effects of stellar aberration will be removed by scaling the field to match the predicted positions, as will those of differential refraction. Errors of mechanical alignment on the instrumental vertical circle and pivot errors, as well as errors in assumed longitude and variations in geographic latitude, will be removed by an affine transformation. Nongnomonic projection characteristics of the system will require higher order terms in each coordinate, as will variations in differential refraction across the field. Probably the major source of noise in the measured positions will be the rapidly changing refractive characteristics of Earth's atmosphere (ref. 28). Unmodeled, these variations will cause apparent displacements, scale changes, and distortions which combine to introduce positional errors thousands of times larger than the theoretical precision of a stellar interferometer.
If we express the effects of each of the above phenomena on the stars within the region defined by the field of an ISI in a set of Taylor series expansions, truncate those expansions, and then add them together, we may write:
where x is the measurement provided by the detectors of the relevant ISI, with x = 0 as the detectors cross the projected vertical circle near the optical axis of the instrument. The tangential coordinates are predicted by equation (B3), and the parameters b through g are to be determined via least squares.
There is some uncertainty about the point of truncation in the relevant Taylor series. It is quite possible that the misbehavior of the atmosphere will require a general third-order expression and hence 10 terms in equation (B4). Special terms may be necessary to model the nongnomonic imaging caused by those optical surfaces that the design has not been able to place in the optical pupil of the system. Magnitude or color-related displacements, magnifications, or distortions may be present. To ensure reasonable stability in the statistical reductions, each new term added to equation (B4) should be accompanied by the measurement of approximately three new reference stars. As with any precision instrument, a rash of such unexpected terms could pose serious questions about the system as a whole.
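As a sketch of the least-squares adjustment of equation (B4), assume a full second-order expansion in the predicted tangential coordinates with the six coefficients b through g (the exact truncation, as the paragraph above notes, is uncertain, and the form here is an assumption):

```python
import numpy as np

def fit_plate_model(x_meas, xi, eta):
    """Least-squares solution for the coefficients b..g of an assumed
    second-order plate model,
        x = b + c*xi + d*eta + e*xi**2 + f*xi*eta + g*eta**2,
    where (xi, eta) are the tangential coordinates predicted by equation
    (B3) and x_meas are the detector measurements. The exact form of
    equation (B4) is not reproduced in the source."""
    A = np.column_stack([np.ones_like(xi), xi, eta,
                         xi**2, xi * eta, eta**2])
    coeffs, *_ = np.linalg.lstsq(A, x_meas, rcond=None)
    return coeffs
```

Note that each added column of the design matrix should, per the text, be accompanied by roughly three additional reference stars to keep the adjustment stable.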
As a region crosses the field of an ISI, the imaged stars appear to move along a set of gentle arcs which are the projections of small circles onto a tangent plane. Thus, the x positions recorded for a star during its transit move rapidly across the field, and that motion suffers a spurious acceleration. However, we may write equation (B4) in the form:
where b0, c0, and d0 are initial estimates of the rotation and translation parameters and b', c', and d' are corrections to these estimates to be calculated by statistical adjustment. The constants e, f, and g will be of the order of b', c', and d' and can be estimated directly with no loss of generality or precision. The right-hand side of equation (B5), unlike that of (B4), shows essentially only the trends we seek to explain. Furthermore, these trends can now be directly compared with the parameters of time and of symmetry about the optical axis.
The first of these, time, will allow the investigator to model the passage and variation of seeing phenomena from one second to the next. Experience will probably suggest certain constraints that can be applied to these wavelike phenomena. The motion of the region across the field should allow the observer to detect symmetrical imaging errors of the system by employing the concept of overlap condition (ref. 71). This concept will also allow stars spread over an area somewhat larger than the field of the ISI to be utilized as reference points.
The advantages of this type of relative adjustment have long been recognized in astrometry. Conditional equations similar to equation (B5) with less than a dozen parameters adequately model the various effects that all phenomena discussed previously can exert on the target star's position in a reference frame defined by those stars that lie nearly in the same direction. Similar reductions into an absolute system of coordinates would require scores of parameters with their associated parameter variance. To gain some of the precision of focal plane astrometry, instruments that measure the spherical coordinates of one star at a time sometimes follow a procedure of alternately observing the target star and a few reference stars. However, as pointed out by Schlesinger (ref. 72) and Hudson (ref. 73), and as evidenced by the work of KenKnight (ref. 28), simultaneous observations are absolutely necessary. Furthermore, at the precisions sought, one must remove at least the effects of first- and second-order differential refraction. Such a reduction includes the determination of at least 6 coefficients and the simultaneous observation of perhaps 18 or 20 reference stars.
 There is one additional form of astrometric data provided by the detectors. Each unit determines the position of a star in four different bandpasses. The refractive indices of these four colors have a range of approximately 1/50 their mean magnitude. Thus a trend in position with wavelength should be measurable. This trend allows an estimate of the unrefracted position of the star. However, the trend, including both the accidental and the systematic errors of the observation, must be multiplied by 50 to estimate the correction necessary to remove the effects of instantaneous refraction. Fortunately, this information is already available at relatively high weight in the simultaneously observed positions of numerous reference stars.
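The extrapolation described here can be sketched as a linear fit of position against refractivity, evaluated at zero refraction. The bandpass refractivities in the test values are illustrative, not figures from the text:

```python
import numpy as np

def unrefracted_position(positions, n_minus_1):
    """Estimate the unrefracted position of a star from positions measured
    in four bandpasses. `n_minus_1` holds the air refractivity (n - 1) at
    each effective wavelength; refraction displacement is taken to be
    proportional to (n - 1). Fits position = p0 + k*(n - 1) and returns p0.
    An illustrative sketch: as the text notes, the lever arm (the spread in
    n - 1 across the bandpasses) is only ~1/50 of its mean value, so
    measurement errors are amplified ~50 times in the extrapolation."""
    k, p0 = np.polyfit(n_minus_1, positions, 1)
    return p0
```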
Once a satisfactory reduction of the measured coordinates has been effected, we turn to the second major phase of an iteration, an analysis of the parameters of motion of each star. As currently applied at this point in the algorithm, the central overlap technique transforms the observations of each star into a different planar coordinate frame. Tangent to the celestial sphere near the center of the star's great circle trajectory with projected displacements recorded in standard coordinates, this reference frame is unusually free of spurious accelerations and is adequate for several decades of analysis without correction. We will incorporate one additional rotation in this plane into our transformation so that each star's motion is modeled first in the coordinates obtained with one ISI and then in terms of those obtained with the other ISI.
The four sets of rigorous transformations used to place the data in the desired coordinate frames begin by converting the observed tangential azimuths to observed altitudes and azimuths:
and then converting these to observed right ascensions and declinations via the rigorous formulas:
where α and δ are in radians. If the denominator of equation (B7a) is negative, we add 180° to α. If the denominator is positive and the numerator is negative, we add 360°. The observed right ascensions and declinations derived from equations (B7) are in the epoch of the observations and the equator of the reference catalog. To avoid unnecessarily large allowances for the rotation and translation that precession will produce in each region, the reference catalog's equinox should be within the domain of the observational epochs.
The third step is the rigorous transformation of the reduced observations into the standard coordinate frame:
where the standard coordinates should be multiplied by 206,264.806247 for conversion to arcseconds, and the tangent point is adjusted on each pass to lie as near the center of the star's apparent trajectory as possible. One axis contains the projection of the great circle passing through the tangent point and the celestial poles; the other axis is perpendicular to it at the tangent point, increasing with increasing right ascension.
The fourth and final transformation brings the axes of analysis in line with the original axes of observation. For the first ISI, we have:
 and, for the second ISI, we have:
where the subscripts F and S denote the sets of coordinates derived for the respective ISI's, and in each case the rotation angle is derived from equation (B1) with the relevant angle replaced by its complement (90° minus its value).
We now have the reduced x measurements of the two ISI's within the same coordinate system. However, we will not make use of this fact until the final stages of the current iteration. Instead, we analyze the apparent motion of the star along the F axis, which can be modeled:
with a similar expression for the S axis, where t is the time since the star passed the tangent point, µF is the proper motion in the F coordinate, π is the trigonometric parallax, and µ'F is the annual rate of change of µF (ref. 74). The factor fF is derived from the usual parallax factors:
with a similar expression for fs.
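A sketch of fitting equation (B10) by least squares, assuming the terms named above (position, proper motion µF, parallax π with factor fF, and the rate of change µ'F); the exact form of (B10) is not reproduced in this copy:

```python
import numpy as np

def fit_motion(x, t, f):
    """Fit an assumed form of the linear motion model of equation (B10):
        x(t) = x0 + mu*t + plx*f(t) + 0.5*mu_dot*t**2,
    where f(t) is the parallax factor, mu the proper motion, plx the
    trigonometric parallax, and mu_dot the annual rate of change of mu.
    Returns the four fitted parameters."""
    A = np.column_stack([np.ones_like(t), t, f, 0.5 * t**2])
    (x0, mu, plx, mu_dot), *_ = np.linalg.lstsq(A, x, rcond=None)
    return x0, mu, plx, mu_dot
```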
For purposes of this discussion, we will ignore a number of possible statistical refinements that present themselves at this point. The axes of observation are perpendicular at only one point observable by the Orion system, and there are terms in the above and following equations of condition that would benefit from a unified reduction of all parameters of motion. But we will leave such things to those who may follow.
 When the residuals of equation (B10) show an apparently nonrandom variation, two new terms may be added to the equation, namely:
for each coordinate. The terms xF, yF, xS, and yS are elliptic rectangular coordinates rotated into the respective coordinate systems, while I, J, K, and L are similar to the usual Thiele-Innes constants (see ref. 5).
The elliptic rectangular coordinates are computed from the dynamical elements of the orbit: the period, time of periastron passage, and eccentricity. These are first estimated from the general form of the residuals to equation (B10) and then improved by a series of iterative adjustments to (B12) until the lowest sum of the squares of the residuals is attained (ref. 75). Equations (B10) and (B12) can take on more complicated forms for multiple-body systems, but these will be relatively infrequent.
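The elliptic rectangular coordinates can be computed from the three dynamical elements by solving Kepler's equation; the Thiele-Innes constants then carry the scale and orientation. This is the standard formulation, offered as a sketch rather than the text's own expressions:

```python
import math

def elliptic_rectangular(t, period, t_peri, ecc):
    """Elliptic rectangular coordinates (x, y) of a Keplerian orbit from
    the dynamical elements named in the text: period, time of periastron
    passage t_peri, and eccentricity ecc. Standard formulation; the
    Thiele-Innes constants supply scale and orientation separately."""
    M = 2.0 * math.pi * (t - t_peri) / period       # mean anomaly
    E = M                                           # Newton's method for
    for _ in range(50):                             # Kepler's equation
        dE = (E - ecc * math.sin(E) - M) / (1.0 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < 1e-12:
            break
    x = math.cos(E) - ecc
    y = math.sqrt(1.0 - ecc**2) * math.sin(E)
    return x, y
```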
An additional form of these equations is useful in attaching a statistical significance to any proposed perturbation:
with a similar expression in S. The term QF is computed using all that is known about the proposed orbit, while its coefficient, scaled in this way, should be identically 1 in each coordinate. The coefficient's variation from this value on the two axes gives a check on the agreement between the two ISI's, while the ratio of the coefficient to its formal standard error gives an immediate indication of the statistical significance of the observed variations from the linear motion defined by equation (B10).
Presumably, such a test would be automatically run for each target star, allowing the investigator to attach a confidence level to all nondetections as well as to suspected detections. The formal error computed in this manner will prove invaluable as a statistical tool for estimating the frequency of planetary systems. As stated elsewhere, the near-circular orbit of a star around the center of gravity it shares with its major planet will present a deflection of the star's trajectory that is evident in any orientation of the orbit. Detection depends only on the angular size of the orbit and the precision of the astrometric observations.
The user of the Orion system suffers some disadvantage because his astrometric precision is a function of declination and position angle. In more than two-thirds of the regions accessible to the system, the F and S axes are not even nearly perpendicular, but intersect at an angle of approximately 60°. As a result, the instrument has less than 60 percent of the sensitivity to orbits oriented in the north-south direction that it has to orbits oriented east-west. This, in turn, substantially affects the certainty with which the investigator will be able to say that a certain star does not have planets within given ranges of period and above a particular mass. In other words, the minimum detectable mass is not as small as the measuring precision suggests.
Once the form of equations (B10) and (B12) is determined and coefficients calculated in each coordinate, the output of the two ISI's may finally be combined to form improved predicted positions for the stars observed. With the angles defined for the two ISI's as before, we have:
with F and S as defined previously. Hence, equations (B14) contain time-dependent functions.
Precession does present some difficulties. The slow change in the apparent intersection of the projected axis of Earth's rotation and the celestial sphere causes a variation in the angle at which a star approaches the two vertical circles. This, in turn, changes the orientation of the two axes of measurement with respect to the trajectory of the star. Thus the one-dimensional observations are obtained in a rotating reference frame. However, the period of precession is long (about 26,000 years) in comparison with that of a proposed observational series. Thus, to sufficient precision, the rotation terms of equations (B4) and (B5) allow the necessary compensation. The angles here become the initially estimated angles derived from equation (B1), with the coefficients of equations (B4) and (B5) providing the alignment necessary to counter the effects of small amounts of variability. The iterative nature of the central overlap technique ensures that the estimated y positions used for these small compensations in the latter equations are of acceptable precision.
The standard coordinates derived from equations (B14) are expressed in the equatorial coordinate system using the rigorous transformations:
where α and δ are in radians.
We have now completed the first full iteration. The values predicted by equations (B15) will have a much higher internal precision than those from the star catalog or astrographic photographs. But the new positions will retain the scale, orientation, mean position, and mean motion of the catalog positions. They will also retain the errors in these initial quantities, but these will have little effect on the information sought. Many potential second- or higher-order spurious effects in the catalog positions will be smoothed out or completely eliminated by the overlap technique.
With careful attention to the detail of equations (B4), (B10), and (B12), we will note a drop in the formal error of their parameters during each of the first few iterations through equations (B2) to (B15). After three or four iterations, no parameter will change its value from one iteration to the next by more than 10 percent of its formal error. At this point, the iterative version of the central overlap technique is considered to have converged.
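The stopping rule stated above can be written directly. A minimal sketch, with parameter lists and formal errors assumed to be available from the least-squares adjustments:

```python
def converged(params, prev_params, errors, tol=0.1):
    """Convergence test for the iterative central overlap reduction: no
    parameter may change between successive iterations by more than `tol`
    (10 percent) of its formal error. A minimal sketch of the criterion
    stated in the text."""
    return all(abs(p - q) <= tol * e
               for p, q, e in zip(params, prev_params, errors))
```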
Suppose we wish to simulate the Sonine distribution whose intensity transmission is
The amplitude distribution for this case is
and the equivalent one-dimensional distribution is
Ignoring constant multipliers, we have for the mask shape:
Figure 63 shows the masks for µ = 0, 1, 2, 3, 4, and 5.
These masks will, of course, have only "good" directions parallel to the x axis. However, in these directions the performance should be as good as obtainable with a variable density mask in all directions. This should allow an assessment of the light scattered by figuring errors and other imperfections.
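The transmission expressions above are not reproduced in this copy. Assuming the common Sonine form, intensity transmission (1 - r^2)^µ over a unit pupil (an assumption, not a formula taken from the text), the half-width of the equivalent binary mask can be sketched as follows:

```python
def sonine_mask_halfwidth(x, mu):
    """Half-width y(x) of a binary mask equivalent, in its "good"
    directions, to a Sonine apodizer. Assumes intensity transmission
    (1 - r^2)**mu over a unit pupil, hence amplitude (1 - r^2)**(mu/2);
    integrating the amplitude across the pupil gives y(x) proportional to
    (1 - x**2)**((mu + 1)/2), ignoring constant multipliers as the text
    does. The source's own expressions are not reproduced, so treat this
    as a sketch under the stated assumption."""
    return (1.0 - x * x) ** ((mu + 1.0) / 2.0) if abs(x) < 1.0 else 0.0
```

As a sanity check, µ = 0 reduces to the unapodized circular aperture, y = sqrt(1 - x^2).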
If planetary systems are common in our galaxy, direct photography of these extrasolar planets would provide convincing proof of their existence. However, detecting the images of extrasolar planets presents a major technical problem: the extremely faint planetary image must be detected in the presence of background light, diffracted and scattered from the much brighter image of the planet's star. For example, if we were trying to detect Jupiter from a distance of 10 pc, Jupiter would be about 3 x 10^8 times fainter than the Sun and their separation would be at most 0.5 arcsec. Considerable reduction of the stellar background light is achieved with an occulting edge, used in conjunction with a telescope in space, as described by Spitzer (ref. 76). A practical realization of the occulting edge required by Spitzer's scheme would be a "black limb" of the Moon (i.e., a limb not illuminated by the Sun or Earth). Furthermore, the use of the lunar limb alleviates the critical alignment problems that would be encountered for a man-made occulting edge, necessarily of smaller size. The proposed 2.4-m space-located telescope (ref. 77) and its area photometer should be capable of detecting the Jupiter-Sun system at a distance of 10 pc. For this hypothetical case, the detection time would be short, requiring less than 20 min to detect Jupiter.
The relative intensity of the planetary image and background starlight in the focal plane of the telescope will be determined by the starlight attenuation that can be produced by the occulting edge, in addition to the attenuation produced by the telescope itself. The situation is illustrated in figure 64, with the planet and its star at the left. The space telescope views the planet at a small angle above the horizon provided by the black lunar limb, while the star is at a small angle below this horizon. The diameter of the telescope aperture is d, and the telescope lies at a distance from the Moon. Since the telescope is within the geometrical shadow of the lunar limb, the intensity of starlight entering the telescope is greatly reduced from its unocculted value.
Here we shall consider the telescope to be an ideal optical system and neglect the effects of scattered light. The attenuation factor is defined as the amount of background starlight within the Airy disk containing the planet image divided by the peak intensity of the stellar image that would be obtained without an occulting disk. Modifying the equations given by Spitzer (ref. 76) to include the two angles as independent variables, we can write the attenuation factor as:
where 0.23 is the average intensity within the first dark ring of a diffraction pattern whose peak intensity is 1.0. Although the planet is at the sum of the two angles from the star, the separation will appear smaller at the telescope because the light received from the star has been diffracted around the limb. In equation (D1), the wavelength, d, and the distance to the Moon should be expressed in the same units, and the angles are in radians. We see that the attenuation factor has a strong dependence on the wavelength used, the telescope aperture, and the angular separation of the planet and its star.
For the Jupiter detection example, we let the wavelength = 5000 Å, the telescope-Moon distance = 4 x 10^5 km, d = 2.4 m, and both angles = 0.25 arcsec = 1.25 x 10^-6 rad. Then the attenuation factor is 2.5 x 10^-9, consisting of a 2.2 x 10^-5 attenuation from the occulting edge and a 1.13 x 10^-4 attenuation by the telescope itself. Since the Moon is dark and rough, specular reflections should not be a problem and the attenuation factor for the occulting edge should be a realistic estimate. However, real telescope optics would not be strictly diffraction-limited (as was assumed in writing eq. (D1)), and scattered light within the telescope would raise the background light level. Apodization could be used to increase the attenuation, and here we shall assume that an attenuation of 10^-4 could be achieved by the telescope.
Knowing the attenuation factor , we can now calculate the signal-to-noise ratio for detecting Jupiter at a distance of 10 pc. The photon-counting area photometer described by Laurance (ref. 78) would be used at the focus of the telescope. We let fp, f*, and fs equal the fluxes from the planet, star, and the sky, respectively, in photons within the Airy disk containing the planet's image. The quantity fn is the detector noise, expressed as an equivalent photon flux. We assume that the photon counts recorded by the detector obey Poisson statistics. The signal-to-noise ratio (S/N) is given by
The quantity q is the photon detection efficiency of the telescope optics, filter, and detector system; A is the area of the telescope; Δλ is the wavelength passband of the filter; and T is the integration time.
From equation (D2), we can compute S/N for detecting Jupiter at a distance of 10 pc. For these conditions, the visual magnitude (mv) of the Sun would be +4.8. Fully illuminated by the Sun, Jupiter would be 21 magnitudes fainter but, at their maximum apparent separation, Jupiter would appear only half-illuminated. Allowing one magnitude for this effect, we obtain, for Jupiter, mv = 26.8. Hence fp = 1.9 x 10^-8 photons, with f* the correspondingly attenuated stellar flux. The sky flux fs should be less than 10^-8 within the Airy disk of the planet image, and we assume the detector noise fn is negligible as well.
For our example we let T = 1000 sec, Δλ = 1000 Å, A = 4.5 x 10^4 cm^2, and q = 0.25; this yields S/N ~ 8 from equation (D2). Hence, Jupiter could be detected in less than 20 min with a good S/N. For a telescope with an aperture d smaller than 2.4 m, the integration time required to achieve the same S/N increases considerably. For smaller values of d, the main source of noise will most likely be scattered light from the star, so in equation (D2) we can neglect all noise terms except f*. If all other factors are constant, the integration time then scales approximately as d^-5. Hence about 80,000 sec (about 1 day) of integration time would be required if a 1-m telescope instead of a 2.4-m telescope were used for the above example.
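Equation (D2) is not reproduced in this copy; assuming the usual Poisson-limited form, the S/N quoted above can be checked numerically. The mv = 0 photon flux of roughly 1000 photons cm^-2 s^-1 Å^-1 used below to estimate f* is an assumption, not a figure from the text:

```python
import math

def snr(fp, fstar, fsky, fnoise, q, area, dlam, t):
    """Photon-limited signal-to-noise ratio for planet detection, assuming
    the usual Poisson form S/N = fp*sqrt(q*A*dlam*T) / sqrt(fp+f*+fs+fn),
    with all fluxes in photons cm^-2 s^-1 A^-1 within the Airy disk of the
    planet image. Equation (D2) itself is not reproduced in the source."""
    signal = fp * math.sqrt(q * area * dlam * t)
    noise = math.sqrt(fp + fstar + fsky + fnoise)
    return signal / noise

# Jupiter at 10 pc, using the values quoted in the text. f* is the flux of
# an mv = +4.8 star reduced by the total attenuation factor 2.5e-9, taking
# ~1000 photons cm^-2 s^-1 A^-1 at mv = 0 (an assumed zero point).
fstar = 1000.0 * 10 ** (-0.4 * 4.8) * 2.5e-9
ratio = snr(1.9e-8, fstar, 1.0e-8, 0.0,
            q=0.25, area=4.5e4, dlam=1000.0, t=1000.0)
print(ratio)   # a value near 8, consistent with the S/N ~ 8 quoted above
```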
Another consideration for the required integration time is the distance of the planet from its star. Again, we neglect all noise terms except f*, and we let the sum of the two angles equal a/D, where a is the semimajor axis of the planet's orbit and D is the distance to the star. The required integration time will then be less for planets farther away from their star, but the dependence is not nearly so strong as for the other variables.
After a faint image is detected near a star, several criteria can be applied to confirm that it is a planet. First, its intensity and distance from the star must lie within acceptable limits. Then, from several months of repeated measurements, the proper motion of the suspected planet could be checked against that of the star. If they agree, this would be strong evidence that the body is indeed a planet. Depending on S/N, it might be possible to measure a crude spectrum of the planet and search for the deep methane absorption bands in the far red, characteristic of Jupiter and the other large planets of our solar system.
 If the planet's orbit has a favorable orientation and S/N is sufficient, we can determine the inclination, eccentricity, and semimajor axis of the orbit by measuring the position of the planet relative to the star through one orbital period (12 years for Jupiter, 30 years for Saturn). The solution for these orbital parameters (and the mass of the star) would use the same technique as for visual binary stars of known parallax (ref. 79).
From the calculations above, we can be optimistic about detecting extrasolar planets with a 2.4-m telescope in space and its proposed state-of-the-art instrumentation. We note that 37 nearby stars are brighter than mv = +4.8 and are closer than 10 pc. Hence, the detection of a Jupiter-type planet orbiting any of these stars would be no more difficult than our Jupiter-Sun example. For the brightest and nearest stars, S/N should be substantially greater than in our example.
Several technical problems must be investigated before we can be confident that a space telescope used with an occulting edge would achieve the desired sensitivity for the detection of extrasolar planets. An adequately sensitive photon-counting area photometer must be constructed, and the calculated starlight attenuation factors (eq. (D1)) must be demonstrated to be achievable in practice.
Another important consideration is the orbit of the telescope. The near-Earth orbit planned for the space telescope will not be suitable because (1) the lunar aspect seen by the telescope will not be sufficiently free from earthshine, (2) the relative velocity of the telescope and the lunar limb will be too great to allow the appropriate alignment to be maintained for the necessary integration times, and (3) most of the 37 stars that are within 10 pc and are brighter than mv = +4.8 would not be occulted by the Moon.
To overcome these difficulties, an orbit for the telescope comparable in size to that of the Moon is needed, and the telescope orbit must be changed from time to time, perhaps by solar sail, to achieve the proper alignments with the lunar limb. Choosing an optimum strategy for the telescope orbit and its required changes presents an interesting, difficult, and important problem.
 If the S/N estimates presented here can be substantiated by laboratory tests, then we should determine what would be required to raise the orbit of the space telescope, at some point during its lifetime, in order to search for extrasolar planets. Meanwhile, the necessary experience for operating a telescope in space with the black limb of the Moon as an occulting disk could be gained by operating a smaller telescope in the required orbit configuration. In addition to preparing for planet detection, such a mission would seem capable of many important astronomical observations. For example, a 1-m telescope equipped with a simple optical photometer in high orbit could be used to record lunar occultations of virtually any object in the sky. The additional advantages of the black lunar limb and much slower occultation rates would permit the measurement of many more stellar diameters (ref. 80), binary star separations, and the angular diameters of quasars.
In addition, the large parallax allowed by a high orbit would permit many more stellar occultations by solar system objects to be observed. Some of the exciting work that could be accomplished is: (1) further investigation of the newly discovered rings of Uranus (refs. 81 and 82); (2) search for rings around Neptune and Jupiter; (3) investigation of the optical depth of Saturn's rings with resolution of a few kilometers; (4) determination of diameters of Pluto, satellites, and asteroids; (5) further investigation of the tidal waves in the Martian atmosphere (ref. 83); and (6) acquisition of many temperature, pressure, and number density profiles of the upper atmospheres of all planets, including Earth (ref. 84).
Such a dedicated occultation telescope could have an extremely productive lifetime in the course of perfecting techniques that would ultimately be used for detecting extrasolar planets.