SP-480 Far Travelers: The Exploring Machines

 

Tracking, Communications, and Data Acquisition - A Revolution

 

[157] At an Apollo 11 victory banquet, master of ceremonies Joe Garagiola recited a "Yogi-ism" he attributed to his friend Yogi Berra: "If you don't know where you're going, you'll end up somewhere else." The truth in that bit of humor certainly applies to missions using unmanned spacecraft. Tracking and position determination are absolutely vital to the process of exploring distant targets, for it is essential to know where a spacecraft is and where it is heading in order to direct it to its destination.

Before the days of advanced radio communications technology, we would have been forced to use onboard celestial navigation principles (star trackers, sextants, and traditional navigation techniques) in conjunction with accelerometers to compute position, speed, and direction. The weight, power, and accuracy of such systems would have depended on a number of tradeoffs, and the complexities of developing and testing long-lived systems were many. Fortunately, advances in radio tracking technologies during and after World War II enabled us to use relatively simple, lightweight spacecraft systems in conjunction with large ground installations to obviate the onboard complexities of self-contained navigation systems.

A second essential in the unmanned spacecraft equation involves communications, both to deliver commands to a spacecraft in flight and to receive information about its findings in space. In the 1960s it was not possible to plan a journey of several months to a distant planet using only preprogrammed intelligence for the spacecraft; not all commands could be planned from the outset, and many had to be sent from Earth during the mission. Since these spacecraft always went on one-way trips, they would have been of little use as emissaries for man if they had not been able to communicate their findings with accurate and interpretable information.

Thus, the pacing technologies for lunar and planetary missions included tracking, communications, and data acquisition capabilities. Position determination and communications functions were always combined, because the common elements of their radio disciplines bound their designers together.

[158] For early space missions, specialized tracking and data acquisition systems were developed in parallel with each spacecraft and its instrumentation; however, it was soon recognized that it would be best if ground facilities for performing these functions could be designed and built to serve a number of projects. The concept of a Deep Space Instrumentation Facility (DSIF), combined with a Deep Space Network (DSN), emerged as a standard for meeting the needs of the many lunar and planetary missions and evolved over the years to support a spectrum of flight projects.

In summary, deep space missions require several basic types of support from the tracking, communications, and data acquisition facilities. For commands necessary to ensure control of spacecraft, there is a normal or routine capability, an emergency capability that usually is engineered to work under off-design conditions such as low power or unusual attitudes, and an emergency weak signal mode that allows searches for lost signals or for recovery from out-of-sync conditions. Radio navigation instrumentation is essential for determination of trajectories or orbits. This usually involves Doppler transponders and accurate pointing capabilities. For data acquisition, there are usually high and low bit rate modes to accommodate the differing requirements of continuously monitoring engineering data or interplanetary phenomena that do not vary rapidly, as well as the high bit rate requirements for imaging systems and encounter instrumentation. There are also special requirements for so-called radio science experiments that use radio signals and analyze changes in them caused by atmospheres and the interplanetary medium.

The amazing quality and performance of the NASA tracking and data acquisition systems cannot be recalled without giving credit to Edmond C. Buckley and Gerald M. Truszynski, who came to NASA Headquarters from Langley to lead the development of this enormous system. Ed Buckley was for a time Assistant Director for Spaceflight Operations under Abe Silverstein and was later the head of the Office of Tracking and Data Acquisition until his retirement in the 1970s. He came to Washington as a very experienced NACA engineer with an extensive background in telemetry and tracking system development and operations. One of his major efforts involved development of the Wallops Island range, a rocket-launching facility built for free-flight transonic aerodynamics tests after World War II. Gerald Truszynski, who later replaced Buckley, had similar experience and continued the advance of capabilities, including tracking and data relay satellites and other innovations.

[159] Although completely different by nature, Ed Buckley at Headquarters and Eb Rechtin at JPL respected each other and got along well. They provided an excellent example of headquarters and field center counterparts leading developments for a required technological base while managing programs and dealing with administrative chores. The telecommunications functions they provided were prime ingredients in the successful exploration of space. Just as impressive was their remarkable job of satisfying users' needs. From a program office point of view, working with these people, who never got their share of credit because of the supportive nature of their task, was indeed a pleasure.

Earth-orbiting satellites had been successfully tracked and interrogated by stations located within the United States, although one or two stations in the southern hemisphere helped. The fact that a low-altitude satellite came into view every 90 minutes made it relatively easy to track its location. Of course a satellite in Earth orbit was almost like a train on a railroad track; it tended to retrace the same general path in inertial space, orbit after orbit. For tracking lunar and planetary spacecraft, however, the process would be more like tracking celestial bodies, because a single station on the rotating Earth could see a distant spacecraft only during one-third of a day. This would not allow sufficient coverage to monitor critical functions and to transmit commands. Had ground stations been located only in the United States, very complex tradeoffs would have been necessary for timing events when the stations were in view of the spacecraft.

It did not take long for Rechtin, his principal system designer, Walter Victor, and the engineers at JPL to develop a plan for a network of three stations located approximately 120° longitude apart, so that one of the three would always be in view of any spacecraft. Obviously it was desirable for the principal station to be near JPL, if possible, and in the spring of 1958, a remote site suitable for a sensitive receiver (free from manmade radio interference) was located in a bowl-shaped area at Camp Irwin, an Army post some 50 miles north of Barstow, California, in the Mojave Desert. As this was a government reservation, it was not difficult to obtain approval to use this site. The problems associated with selecting sites and implementing plans for the other two network stations were more difficult, as one of the sites selected was in a dry lake bed near the Woomera test range in East Central Australia and the other in a shallow valley near Johannesburg, South Africa. Approval for these sites, of course, required the Department of State to work out arrangements with the respective governments, including construction [160] and staffing with nationals. During the 1960s, concern occasionally arose over the permanence of the South African site because of political unrest and the relationship of the South African and United States governments. In 1965, an additional site was established at Madrid, Spain, to backstop this uncertain condition and to ensure coverage near the Greenwich longitude.
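
Keeping a distant spacecraft continuously in view is largely a matter of geometry once three stations are spread roughly 120 degrees apart in longitude. The short Python sketch below illustrates the idea; the station coordinates are approximate, the spacecraft is treated as a fixed direction on the celestial sphere, and horizon masks and scheduling details are ignored.

    # Rough visibility check for three Deep Space Network-style stations.
    # Illustrative only: a very distant spacecraft is treated as a fixed
    # direction in inertial space, and a station "sees" it whenever that
    # direction is above the local horizon (elevation > 0 degrees).
    import math

    stations = {                      # name: (latitude, longitude), approximate
        "Goldstone":    ( 35.4, -116.9),
        "Woomera":      (-31.1,  136.8),
        "Johannesburg": (-25.9,   27.7),
    }

    def elevation_deg(lat, lon, hour, sc_ra_deg=0.0, sc_dec_deg=10.0):
        """Elevation of a distant spacecraft (fixed RA/Dec) at a given hour."""
        local_angle = (hour * 15.0 + lon) % 360.0     # crude sidereal angle
        ha = math.radians(local_angle - sc_ra_deg)    # hour angle
        lat_r, dec_r = math.radians(lat), math.radians(sc_dec_deg)
        sin_el = (math.sin(lat_r) * math.sin(dec_r) +
                  math.cos(lat_r) * math.cos(dec_r) * math.cos(ha))
        return math.degrees(math.asin(sin_el))

    for hour in range(0, 24, 3):
        in_view = [name for name, (lat, lon) in stations.items()
                   if elevation_deg(lat, lon, hour) > 0.0]
        print(f"{hour:02d}:00  in view: {', '.join(in_view)}")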

The signals received by small radios with built-in antennas often come from commercial stations with as much as 50 000 watts of broadcast power. In the case of the early Ranger and Mariner spacecraft, only 3 to 4 watts of transmitter power were available. This placed a significant burden on the ground receivers to make sense out of the very weak signals. To acquire and sort out weak signals from random galactic background noise and manmade radio signals bouncing around Earth, tracking antennas had to be very large and highly directional. This meant that they had to be accurately steerable, for gathering the weak signal depended on their ability to focus on that single source. Most of the Earth satellite tracking antennas were driven by what were called "Az-El" (azimuth-elevation) drive systems; the pointing coordinates were simply referenced normal and parallel to the surface of Earth at the station, and the antennas were driven about two axes.
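
For a sense of just how weak these signals were, a back-of-the-envelope link calculation is instructive. The sketch below uses the standard Friis free-space relation with numbers drawn from the text (3 watts of transmitter power, an 85-foot ground dish, L-band frequencies); the spacecraft antenna gain and the dish efficiency are assumed round values, not mission specifications.

    # Rough free-space link budget for an early Ranger/Mariner-class signal.
    # Illustrative only; real link budgets carry many more terms (modulation,
    # pointing, polarization, and system losses).
    import math

    c = 3.0e8                          # speed of light, m/s
    freq = 960e6                       # L-band carrier, Hz (text: 890-960 MHz)
    lam = c / freq

    p_tx = 3.0                         # spacecraft transmitter power, W
    g_tx = 10.0                        # modest spacecraft antenna gain, assumed
    d_rx = 26.0                        # ground dish diameter, m (85 feet)
    eff = 0.55                         # aperture efficiency, assumed
    g_rx = eff * (math.pi * d_rx / lam) ** 2

    for name, r in (("Moon", 3.84e8), ("Mars at about 1 AU", 1.5e11)):
        p_rx = p_tx * g_tx * g_rx * (lam / (4.0 * math.pi * r)) ** 2   # Friis
        print(f"{name:20s} received power ~ {p_rx:.1e} W "
              f"({10 * math.log10(p_rx / 1e-3):.0f} dBm)")

Even with a large dish, the power collected at planetary ranges works out to the order of 10^-20 watt, which is why the receiver noise performance discussed later in this chapter mattered so much.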

 


Location of Deep Space Network stations.

 

[161] Two principal members of the JPL staff, Robertson Stevens and William Merrick, borrowed an antenna design from radio astronomers to deal with this matter. They chose a parabolic dish 85 feet in diameter, equipped with an equatorial polar mount based on astronomical requirements for tracking celestial bodies. The gear system that moved the antenna was polar mounted; that is, the axis of the polar or hour-angle gear was parallel to the polar axis of Earth and thus pointed toward the North Star. This gear swept the antenna in an hour-angle path from one horizon to another. The declination gear wheel, the smaller of the two gears, was mounted on an axis parallel to Earth's equator, thus allowing the dish to pivot up and down. The gears could be moved either separately or simultaneously to provide precise tracking. The equatorial mounts used for the deep space dishes were better suited for tracking interplanetary spacecraft; they allowed principal movement around only one axis, since the rotation of Earth provided the other.
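
The practical advantage of the polar mount is that following a celestial target, or a distant spacecraft, becomes to first order a steady motion about the hour-angle axis alone, while an Az-El mount must continuously vary both of its drive rates. The sketch below applies the standard spherical-astronomy conversion from hour angle and declination to azimuth and elevation; the station latitude and target declination are illustrative values.

    # Equatorial (hour angle, declination) pointing versus the azimuth and
    # elevation an Az-El mount would need.  Standard conversion formulas;
    # the station latitude and target declination are illustrative.
    import math

    def equatorial_to_azel(ha_deg, dec_deg, lat_deg):
        ha, dec, lat = map(math.radians, (ha_deg, dec_deg, lat_deg))
        sin_el = (math.sin(lat) * math.sin(dec) +
                  math.cos(lat) * math.cos(dec) * math.cos(ha))
        el = math.asin(sin_el)
        az = math.atan2(-math.cos(dec) * math.sin(ha),
                        math.sin(dec) * math.cos(lat) -
                        math.cos(dec) * math.sin(lat) * math.cos(ha))
        return math.degrees(az) % 360.0, math.degrees(el)

    # Track a target at declination +15 deg from 35 deg N latitude: the polar
    # axis turns at a constant 15 deg per hour, while azimuth and elevation
    # both change at continuously varying rates.
    for hour in range(-4, 5, 2):
        az, el = equatorial_to_azel(hour * 15.0, 15.0, 35.0)
        print(f"hour angle {hour * 15:+4d} deg -> az {az:6.1f} deg, el {el:5.1f} deg")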

The standard ground station antenna was a large parabolic reflector-a perforated metal mirror that looked like an inverted umbrella and was usually called a "dish." The antenna and its supporting structure stood 10 to 20 stories high and weighed hundreds of thousands of pounds. Since the antenna had to point directly at the object being tracked to receive the strongest signal, a servo system normally operating in a feedback or slave mode was used. Pointing angle information was based on trajectory data predicted by computer in advance and then updated by actual trajectory data obtained during a mission. All parts of these antennas were so precisely balanced and aligned that, in spite of their weight, they could be rotated very sensitively, with only small deflections or vibrations that might cause the signal to be fuzzy.

Astronomical antennas that were the starting point for deep space tracking and data acquisition antennas did not have two-way communication capability, for there was little reason to broadcast commands to the stars. Thus it was necessary to provide the communication transmitters and the feeds that would allow the dishes to serve as transmitting antennas as well as receivers. This called for diplexers to permit simultaneous transmission and reception using a single antenna. Added capabilities were referred to as the uplink and downlink functions. Devices were also added to the antennas for tracking the spacecraft of interest and for "closing the loop" in the sense of driving the antenna-pointing mechanisms.

Tracking requires two parameters: (1) a measure of angular displacement for the spacecraft with respect to a reference system on Earth and (2) [162] measurement of the distance from the tracking antenna to the spacecraft. The angular measurements can be obtained by accurately calibrating the directional pointing system for the antenna as a function of its drive actuators. The distance measurement is based on the Doppler principle, well known for its use in determining the relative speed of a celestial body or a star with respect to Earth. The so-called Doppler shift is really the apparent change in frequency of a signal reflected from or emitted by a moving object as the object moves toward or away from the observer. Everyone has experienced the Doppler effect: the whistle of an approaching train sounds high pitched, and the pitch drops as the train passes. The same thing happens to radio signals, and it is possible to accurately determine rate of change in distance by measuring the frequency shift.

Early spacecraft used one-way Doppler; that is, signals from the spacecraft were transmitted to the ground and changes in frequency were measured in the same way the sound from a train whistle might be measured for its change in frequency. This technique depended on knowledge of the precise transmitting frequency of the spacecraft; its accuracy was limited because frequencies were always subject to change. Two-way Doppler was developed to increase accuracy from about 90 feet per second to as little as 1 inch per second. The concept of two-way Doppler is simple: a precise signal transmitted from the ground is received by the spacecraft transponder and retransmitted at a new frequency in a precisely known ratio to the one received. This allows measurements of frequency change in the signal on the way up and on the way down, tremendously increasing the precision of the Doppler information and the velocity calculations. Using two-way Doppler, the distance to a spacecraft several million miles away could be determined within 20 to 50 statute miles. Later, an automatic coded signal in conjunction with the Doppler information provided measurements with an accuracy better than 45 feet at planetary ranges.
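
In simplified form, the two-way measurement works like this: the station transmits a precise uplink frequency, the transponder multiplies it by an exact turnaround ratio, and the station compares what comes back with what it should have been for a stationary spacecraft. The Python sketch below uses the first-order approximation (velocity much smaller than the speed of light); the 240/221 turnaround ratio is the commonly quoted S-band value and appears here only for illustration.

    # Two-way (coherent) Doppler: recover range rate from the returned frequency.
    # First-order sketch; real processing accounts for relativity, the troposphere,
    # charged-particle effects, and much more.
    C = 299_792_458.0              # speed of light, m/s
    TURNAROUND = 240.0 / 221.0     # transponder transmit/receive ratio (S-band value)

    def range_rate(f_up_hz, f_down_hz):
        """Range rate in m/s (positive = receding) from two-way Doppler."""
        expected = TURNAROUND * f_up_hz        # return frequency if velocity were zero
        # Both the uplink and the downlink legs are shifted, hence the factor of 2.
        return C * (expected - f_down_hz) / (2.0 * expected)

    f_up = 2.110e9                             # uplink near 2110 MHz (text)
    v_true = 1500.0                            # m/s, spacecraft receding (assumed)
    f_down = TURNAROUND * f_up * (1.0 - 2.0 * v_true / C)
    print(f"recovered range rate: {range_rate(f_up, f_down):.3f} m/s")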

Because the Doppler shifts due to changes in the velocities of spacecraft varied widely, receivers had to be continually tuned to a narrow range of frequencies. This was a troublesome problem until a technique was found that provided a phase-lock method of signal detection, maintaining automatic frequency control and keeping the receiver locked to the received frequency. Thus, even though the frequencies were changing with the speed of the spacecraft and the relative speed due to the rotation of Earth, it was possible to maintain coherent tracking of the spacecraft under widely varying conditions.

[163] Automatic phase control origins date back to the 1920s and 1930s, but the first known application was in a horizontal line synchronization device for television in the 1940s. Rechtin and R. M. Jaffe showed in 1955 how a second-order phase-locked loop could be used as a tracking filter for a missile beacon, and specified how to cope with Doppler shifts of weak signals in noise. W. K. Victor further developed the theory and practice of automatic gain control for such closed loops in conjunction with his many other contributions to spacecraft tracking.
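
The flavor of the technique can be conveyed with a toy digital loop: a numerically controlled oscillator is steered by the filtered product of the incoming signal and the oscillator's own output, so the loop stays locked even as the carrier drifts. This is only a sketch of the principle, not the Jaffe-Rechtin tracking-filter design; the gains, noise level, and drift rate are arbitrary illustrative values.

    # Toy second-order phase-locked loop tracking a noisy, drifting carrier.
    # A sketch of the principle only; gains and signal parameters are arbitrary.
    import math, random

    fs = 10_000.0                  # sample rate, Hz
    f0 = 1_000.0                   # nominal carrier frequency, Hz
    drift = 2.0                    # Doppler-like drift, Hz per second
    kp, ki = 0.05, 0.0005          # proportional and integral loop gains

    random.seed(1)
    phase_in = phase_nco = 0.0
    lpf = 0.0                      # one-pole filter to suppress the double-frequency term
    integ = 0.0                    # integrator: holds the estimated frequency offset

    for n in range(30_000):
        t = n / fs
        phase_in += 2 * math.pi * (f0 + drift * t) / fs
        sample = math.sin(phase_in) + random.gauss(0.0, 0.3)  # noisy input

        err = sample * math.cos(phase_nco)        # mixer acts as the phase detector
        lpf += 0.05 * (err - lpf)                 # simple loop filter
        integ += ki * lpf
        phase_nco += 2 * math.pi * f0 / fs + integ + kp * lpf  # steer the NCO

        if n % 10_000 == 0:
            est = f0 + integ * fs / (2 * math.pi)
            print(f"t = {t:3.1f} s   loop estimate ~ {est:7.2f} Hz   "
                  f"true ~ {f0 + drift * t:7.2f} Hz")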

According to senior JPL engineers, the transition from vacuum tubes to solid-state technology was not without trauma. It is understandable that a change from such a highly developed and known technology to the mysterious new promise of transistors and diodes caused project engineers many headaches. Most engineers involved with spacecraft hardware were familiar with the shortcomings of vacuum tube technology; vacuum tubes were particularly subject to problems caused by the severe acceleration and vibration environment during rocket launch. However, tradeoffs involved in dealing with known qualities versus the uncertain effects of a new technology were difficult to assess. It took many years to develop confidence in the application of solid-state electronics, even though the principles were proven and understood. Ranger and Mariner spacecraft were among the first to be fully committed to the use of such devices, with the major exception that their power amplifiers were vacuum tube triodes.

Robertson Stevens cited three major factors responsible for the low bit rate that was achieved with Rangers and early Mariners: limitations in power, limitations in antenna size, and low transmitting frequencies. One of the reasons for power limitations in early missions was the fact that transmitters were powered by vacuum tube triode amplifiers which were heavy and inefficient power consumers. It was not until traveling wave tube amplifiers came into use (the first lunar and planetary spacecraft application was Surveyor) that a significant increase to 20 watts was made in transmitter power.

The antenna size was of course limited by the difficulty in packaging antennas to fit within the shrouds on top of boosters, as well as the weight available for such structures.

The frequency limitation was related to several factors, not the least of which was the greater accuracy of antenna geometry required for operation at high frequencies. In addition, there were problems in discriminating and dealing with high-frequency signals. Political factors also influenced the use [164] of new radio frequencies; there was considerable international concern over the allocation of the usable radio frequency spectrum.

The early Ranger and Mariner missions operated at L-band frequencies of about 890 to 960 megahertz. During the middle of the Ranger activity, there was such demand for aircraft communications at these frequencies that a changeover was planned to S-band (2110 to 2300 megahertz), making the L-band region of the spectrum available for Earth communication links with aircraft and other users. This change was to have occurred with Ranger 10 and subsequent Rangers, which were canceled; the upgrade in frequency was made in 1964 for Mariners 3 and 4.

The conversion to higher frequencies required a major modification of equipment and procedures at all DSN stations; however, this change had merit for space application once the engineering had been done. The simple matter is that, for an antenna of a given size, higher frequencies allow narrower beams, higher gains, and improved performance. The early Explorers used signals in the 100-megahertz region, and the antennas spread data in all directions; the current capability of Voyager at frequencies of 8500 megahertz (almost a hundredfold increase) provides energy 10^5 times more focused because of the narrow beam. Of course this translates into a burden for accurate attitude orientation or pointing, both for the spacecraft and ground-based antennas. As time passed, it was possible to build larger antennas for ground use that had the stability required for high frequencies in addition to greater collecting areas. The first DSN dishes were about 26 meters (85 feet) in diameter; these were later supplanted by 64-meter (210-foot) dishes with greatly increased signal-gathering capability.
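
The payoff from higher frequencies and larger dishes follows directly from the standard aperture-gain relation: gain grows as the square of diameter divided by wavelength, and the beam narrows in proportion to wavelength over diameter. Below is a quick calculation with the dish sizes and frequencies mentioned in the text; the 55 percent aperture efficiency is an assumed round figure.

    # Antenna gain and half-power beamwidth versus frequency and dish diameter.
    # Standard aperture relations; the 55% efficiency is an assumed round number.
    import math

    C = 3.0e8   # speed of light, m/s

    def gain_dbi(diameter_m, freq_hz, efficiency=0.55):
        lam = C / freq_hz
        g = efficiency * (math.pi * diameter_m / lam) ** 2
        return 10 * math.log10(g)

    def beamwidth_deg(diameter_m, freq_hz):
        lam = C / freq_hz
        return 70.0 * lam / diameter_m      # common rule-of-thumb formula

    for d in (26.0, 64.0):
        for label, f in (("S-band 2300 MHz", 2.3e9), ("X-band 8500 MHz", 8.5e9)):
            print(f"{d:4.0f} m dish, {label}: "
                  f"gain ~ {gain_dbi(d, f):5.1f} dBi, "
                  f"beamwidth ~ {beamwidth_deg(d, f):5.3f} deg")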

At the time the DSN was being initiated, signals returned from space were amplified with tube amplifiers, which were connected to the antenna by cable and, being large, bulky devices, were housed nearby. Because they operated at high temperatures, they added radio noise to the signals received from space. In addition to their own noise, the cabling and mechanical filaments picked up noise from extraneous sources, so that the total signal-to-noise ratio was quite low. Even though inefficient, these amplifiers could amplify the signals as much as 10^12 times the received signal strength; this was of course necessary to make the very weak signals useful.

Early in the 1960s parametric amplifiers were developed. These were applications of solid-state technology and used cooled devices operated at temperatures much lower than the hot elements in vacuum tubes. Parametric amplifiers provided something like a factor of 10 improvement in the reduction [165] of noise and therefore greatly improved the amplification of weak signals.

An outgrowth of this cooling amplification technology was the development of the maser, an amplifier that used elements cooled by liquid helium to 4 K, very close to absolute zero. Maser is an acronym for "microwave amplification by stimulated emission of radiation." (I was amused when talking with Stevens, who had been involved in the technology before, during, and after the maser was invented, that he was able only with some effort to recall the labeling for each of the letters in the acronym. This is typical of the problems we engineers generate by using the "alphabet soup" approach for describing things.) The heart of the maser amplifier is a synthetic ruby crystal, immersed in liquid helium to keep it at a very low temperature. It operates with a "pumped-in" source of microwave energy to augment the strength of the incoming signal without generating much internal noise.

I was told an interesting account of maser development involving Walter Higa, a JPL engineer who went to Harvard and worked as an apprentice to the inventor of the maser amplifier concept. Higa returned to JPL and immediately went to work to build a maser for space application. It was obvious that to receive the full benefits of such an amplifier, it should be located at the feed of the antenna, as near as possible to the point at which the signal was collected, thus avoiding the addition of noise by cables that might sense spurious signals or other interference. This meant that the liquid helium cooling system also had to be on the antenna and move with it as it tracked a spacecraft. In the very early application, liquid helium was available only in large vacuum Dewars, and the reservoir on the antenna itself had to be refilled about every 10 hours by a man raised with a cherry picker crane device. After doing this onerous chore for some time, an ingenious JPL technician who had been an automotive mechanic developed a refrigerator system that eliminated this unpleasant duty. His scheme involved a small refrigeration unit with connections from the base of the antenna, providing the generation of liquid helium on the antenna from a source on the ground, so that the operation could be self-sustaining.

Regarding the noise contribution of the system, the maser and the large dish technologies have been developed so well that there might not be much more to gain by further refinements. According to Stevens, less than 20 K of system noise temperature remains to be worked on. Of this amount, about 4 K is attributable to the background noise of space, which cannot be eliminated; about 3 to 4 K is due to maser inefficiencies; and about [166] 6 to 8 K is due to atmospheric effects, depending on the frequency used. Of course, the higher the frequency, the better, although weather definitely affects the noise, even at X-band, which is 8500 megahertz. A problem always exists because of the antenna temperature; this is caused by the proximity of Earth and the reflective objects which radiate heat to the antenna. It is estimated that a factor of two in gain might be possible, even if an antenna were located on the back side of the Moon to minimize the heating and noise effects.
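
Those few kelvins matter because the competition is thermal noise power, Boltzmann's constant times temperature times bandwidth. A small arithmetic sketch using the contributions quoted above; the 100-hertz tracking bandwidth is an assumed illustrative value.

    # System noise temperature budget and the noise power it implies.
    # Contributions follow the text; the 100-Hz loop bandwidth is illustrative.
    K_BOLTZMANN = 1.380649e-23      # J/K

    budget = {
        "cosmic background": 4.0,   # K, cannot be eliminated
        "maser": 3.5,               # K, roughly 3 to 4 K
        "atmosphere": 7.0,          # K, roughly 6 to 8 K depending on frequency
    }
    t_sys = sum(budget.values())
    bandwidth_hz = 100.0            # narrow carrier-tracking bandwidth (assumed)

    noise_power = K_BOLTZMANN * t_sys * bandwidth_hz
    print(f"system noise temperature ~ {t_sys:.1f} K")
    print(f"noise power in {bandwidth_hz:.0f} Hz ~ {noise_power:.2e} W")
    # Every kelvin removed lowers the noise floor proportionally, which is why
    # cooling the maser to liquid-helium temperatures paid off so handsomely.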

One additional trick that is being used to improve capability is called "arraying." This involves the concurrent use of several antennas in the same general region to effectively increase the dish area. By using four antennas in Australia, for example, a data rate of 29.9 kilobits per second can be returned by the Voyager spacecraft when it passes near Uranus in January 1986. And this remarkable rate is achieved using a spacecraft antenna only 3.6 meters in diameter, transmitting signals over a distance of 3 billion kilometers!
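
To first order, arraying can be thought of as adding up collecting area: for dishes with similar system temperatures, the combined signal-to-noise ratio, and hence the supportable data rate, scales roughly with total aperture. The sketch below is only a scaling illustration; the dish sizes and the single-dish rate are assumed values, not the actual configuration used for the Uranus encounter.

    # Rough scaling effect of arraying: combined SNR (and supportable data rate)
    # grows roughly with total collecting area for similar system temperatures.
    # Dish sizes and the single-dish rate are illustrative assumptions.
    import math

    def area(d_m):
        return math.pi * (d_m / 2.0) ** 2

    single = area(64.0)                               # one 64-meter dish
    arrayed = sum(area(d) for d in (64.0, 34.0, 34.0, 34.0))

    single_rate_kbps = 14.4                           # assumed single-dish rate
    print(f"collecting-area gain from arraying: x{arrayed / single:.2f}")
    print(f"approximate arrayed data rate: {single_rate_kbps * arrayed / single:.1f} kbit/s")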

An interesting outgrowth of deep space tracking is that the known location of stations on Earth was improved greatly in the process. As a result of the Mariner mission to Mars in 1964, it was estimated that the absolute location of the Goldstone tracking station was improved from an approximate position within 100 meters to within 20 meters. This figure has been improved during subsequent missions to within less than 1 meter.

The way in which station location is determined from the Doppler data may be understood by supposing the spacecraft to be fixed in space with respect to the center of Earth. The only Doppler tone would then be caused by the station's rotational velocity along the direction to the spacecraft; therefore, the observed Doppler tone at the station depends on the station's latitude, longitude, and radius from the center of Earth. Since thousands of measurements were obtained during the many tracking passes of the network stations, it was possible to deduce the proper combination of station location errors to match the data. It is also interesting to note that the masses of the Moon and the planets were determined in a similar fashion. In the case of the Moon, for example, the variation in Doppler tone was due to the movement of Earth around the Earth-Moon system's center of mass, or barycenter. Earth makes one revolution around this barycenter every 28 days at a speed of about 27 miles per hour. This could be measured accurately by the tracking system.
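
In other words, for a distant spacecraft near the equatorial plane, a station's line-of-sight velocity traces out a daily sinusoid whose amplitude depends on the station's distance from Earth's spin axis and whose phase depends on its longitude. The sketch below generates such a signature and recovers the longitude with a crude grid search; all numbers are illustrative, and the real estimation process is far more elaborate.

    # A station's location leaves a daily sinusoidal signature in the Doppler data.
    # Simplified model: distant spacecraft in the equatorial plane, spherical Earth,
    # longitude treated as the station's inertial angle at t = 0.  Illustrative only.
    import math

    OMEGA = 2 * math.pi / 86164.0          # Earth rotation rate, rad/s (sidereal day)
    R_EQ = 6_378_137.0                     # equatorial radius, m

    def los_velocity(t, lat_deg, lon_deg, sc_dir_deg=0.0):
        """Station velocity (m/s) toward a very distant spacecraft."""
        spin_radius = R_EQ * math.cos(math.radians(lat_deg))
        angle = OMEGA * t + math.radians(lon_deg - sc_dir_deg)
        return -spin_radius * OMEGA * math.sin(angle)

    true_lat, true_lon = 35.4, -116.9                      # Goldstone, roughly
    times = [i * 600.0 for i in range(144)]                # one day, 10-minute steps
    observed = [los_velocity(t, true_lat, true_lon) for t in times]

    def misfit(lon):
        return sum((los_velocity(t, true_lat, lon) - o) ** 2
                   for t, o in zip(times, observed))

    candidates = [-117.5 + 0.01 * k for k in range(120)]   # trial longitudes
    best_lon = min(candidates, key=misfit)
    print(f"recovered longitude ~ {best_lon:.2f} deg (true {true_lon} deg)")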

In every case, the orbit of a spacecraft flying past a planetary body is deflected by the gravity of that body. The amount of deflection, coupled [167] with the knowledge of the distance from the center of mass, allows scientists to calculate very accurately the mass of the body in question. Tracking data obtained from Lunar Orbiter spacecraft allowed the scientific discovery of mass variations within the body of the Moon. Paul M. Muller and William L. Sjogren were able to use the accurately determined variations in the track of Orbiter around the Moon to identify mass deviations and even to locate them approximately beneath the surface of the Moon. These anomalous concentrations, called "mascons," were discovered to be present in the great circular mare basins, suggesting that large chunks of heavy material may have sunk into a plastic, perhaps molten Moon until the gravity field was restored to equilibrium. The findings were not only of scientific interest; they were also significant for planning Apollo missions to the Moon, because the mascons definitely affect the orbital and trajectory parameters of lunar spacecraft.
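
The mass determination from a flyby rests on a compact relation: for a hyperbolic pass, the bend angle of the trajectory depends only on the body's gravitational parameter GM, the approach speed, and the closest-approach distance. Here is a sketch with round, made-up numbers, not a reconstruction of any actual encounter.

    # Inferring a body's gravitational parameter GM from a hyperbolic flyby.
    # Relation used: sin(deflection / 2) = 1 / e, with e = 1 + r_p * v_inf**2 / GM.
    # All numbers are round illustrative values.
    import math

    v_inf = 5_000.0                     # hyperbolic excess speed, m/s (assumed)
    r_p = 1.0e7                         # closest-approach radius, m (assumed)
    deflection = math.radians(35.0)     # observed bend of the trajectory (assumed)

    e = 1.0 / math.sin(deflection / 2.0)        # eccentricity of the hyperbola
    gm = r_p * v_inf ** 2 / (e - 1.0)
    print(f"inferred GM ~ {gm:.3e} m^3/s^2")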

The radio signals used for tracking purposes have also served a number of additional scientific studies. Whenever a spacecraft flew past a planet in a way that caused the radio signals to pass back through its atmosphere, the attenuation and distortion of the signals allowed a great deal of deduction about the nature of the planet's atmosphere and ionosphere. Such experiments gave the first definitive information about the atmospheres of Mars and Venus.

Although direct communication links with spacecraft were prime considerations, it must be remembered that a large, Earth-based complex was involved in the total process. Included were the Space Flight Operations Center colocated with the Space Flight Operations Facility, the Launch Control Center colocated with the launch facilities at Cape Kennedy, certain Atlantic Missile Range stations, and an interconnecting ground network of radio and telephone systems. In many cases, getting data back to JPL after its receipt at a Deep Space Station presented significant challenges. Problems often developed with leased landlines or transoceanic communications-problems that were made more difficult because of the coordination involved. The curious anomaly of being able to communicate millions of miles between planets with greater assurance than from points on the surface of Earth always puzzled me.

In recalling mission activities during years of association with lunar and planetary programs, it seems to me that the telecommunication systems probably were the most dependable of all. I know of no major difficulties resulting from technological mishaps or from overestimating the capability [168] of the tracking, data acquisition, and command process. When I discussed this subject with a respected JPL project engineer, he offered the opinion that the telecommunications guys always cheated in the game of balanced design margins for spacecraft. He said they made a practice of computing margins based on the simple addition of all factors and were never forced to use the statistical probabilities that most other engineering tradeoffs involved. As a result, he thinks they normally enjoyed greater margins and were able to do more than was predicted. If he is right, this practice may have resulted in some unfavorable design compromises in other areas; however, it always made me feel good knowing that we could count on telecommunications operations to produce the promised performance.

