In its first forty years, Goddard successfully launched more than 200 Earth-orbiting satellites. That is an amazing success record, and yet the very fact of that success makes it easy to take the magnitude of the accomplishment for granted. Because the Center's satellites were so successful, few people give much thought to how difficult it is to build, launch, stabilize, power, and operate a satellite, or collect and transmit its data back down to Earth.
Yet the truth is that conducting research in the harsh and remote realm of space is a staggeringly difficult task, and every successful NASA mission is the result of the sweat and ingenuity of thousands of engineers, scientists, and support personnel who designed and built the tools to make it possible. Even today, the job is a demanding one as we stretch to design more capable satellites and take on more challenging missions. But in the early days of the space program it was a truly Herculean task. Little was known not only about rocket and spacecraft technology, but also about the environment in which these tools would have to operate. The engineers and scientists of the early space program didn't know what materials would work best, how they needed to be assembled to do the job, or what obstacles the tools would have to overcome. In addition, scientific satellites had one other liability. Until the advent of the Space Shuttle and modular, serviceable spacecraft, satellites could not be fixed by astronauts in space. All the potential problems of a mission had to be anticipated and fixes built in ahead of time. Once something was in space, it was difficult to change.
The Challenge of Space Flight
Scientific satellites, from the earliest Explorers to the most complex modern observatories, have two critical aspects to their design. First, there are the scientific instruments that collect the actual data from space. Second, there are the "housekeeping" systems that operate the spacecraft and get that data back to Earth. The housekeeping systems have to provide power, temperature modulation, and control of the spacecraft, and allow for data reception and transmission. This sounds pretty straightforward, but providing these services in a lightweight package in space is an extremely difficult task.
Satellites are complicated vehicles to launch and control. The launch vehicles themselves have to be programmed to follow intricate computer-calculated trajectories that will place a satellite in a very precise and specific orbit. Some satellites are launched so that they will orbit north to south, over the poles of the Earth, while others follow a more equatorial orbit. Still others are launched into a geosynchronous orbit, which means that the spacecraft will stay "parked" over one spot on the Earth. In order to follow a geosynchronous orbit, however, a satellite has to be much farther away from Earth. While the Space Shuttle and many Earth-orbiting satellites are positioned about 200 miles above the Earth, a geosynchronous satellite orbits almost 23,000 miles away.
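The "parked" geosynchronous altitude quoted above falls directly out of Kepler's third law: there is exactly one circular-orbit radius whose period matches one rotation of the Earth. A back-of-the-envelope sketch (the constants are standard published values; the function name is ours):

```python
import math

# Kepler's third law for a circular orbit: r = (mu * T^2 / (4*pi^2))^(1/3),
# where mu is Earth's gravitational parameter and T is the orbital period.
MU_EARTH = 3.986004e14    # m^3/s^2
R_EARTH = 6.371e6         # mean Earth radius, m
SIDEREAL_DAY = 86164.0    # one rotation of the Earth, in seconds

def circular_orbit_radius(period_s):
    """Radius (from Earth's center) of a circular orbit with the given period."""
    return (MU_EARTH * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

# A satellite that circles once per sidereal day stays over one spot on Earth.
geo_altitude_miles = (circular_orbit_radius(SIDEREAL_DAY) - R_EARTH) / 1609.34
print(round(geo_altitude_miles))  # on the order of 22,000 miles, as the text says
```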
Whether orbits are close or far, however, they have to be achieved with extreme precision. A polar-orbiting satellite, for example, might be designed to have an orbit with "an inclination of 101.56 degrees and a period of 115 minutes." An Interplanetary Monitoring Platform (IMP) satellite launched in 1964 failed because the final stage of its launch vehicle burned one second less than it should have. That one-second loss resulted in an orbit only 50,000 miles high instead of the planned 160,000 miles.1
To reach and maintain orbits with this kind of accuracy is not easy. NASA launches its rockets from coastal sites so that failed or discarded rocket stages will fall harmlessly into the ocean. Most of NASA's launches take place from Cape Kennedy on the east coast because the Earth rotates to the east, helping the satellites gain orbital speed. Polar-orbiting satellites are an exception to this rule. They typically are launched from NASA's Western Test Range at Vandenberg Air Force Base in California, because a launch from the west coast lets them climb into a polar orbit over water.
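The eastward head start described above is easy to quantify: a launch pad rides along with the Earth's surface, whose speed depends on latitude. A rough sketch, using approximate values for the Earth's radius and Cape Canaveral's latitude:

```python
import math

R_EARTH_MILES = 3959.0       # mean Earth radius
SIDEREAL_DAY_HOURS = 23.934  # one full rotation of the Earth

def eastward_speed_mph(latitude_deg):
    """Speed (mph) a launch site already has from the Earth's rotation."""
    circumference = 2 * math.pi * R_EARTH_MILES * math.cos(math.radians(latitude_deg))
    return circumference / SIDEREAL_DAY_HOURS

# Cape Canaveral sits near 28.5 degrees north latitude.
cape_boost = eastward_speed_mph(28.5)
print(round(cape_boost))  # roughly 900 mph of "free" speed toward orbital velocity
```

A pole-crossing orbit gains nothing from this eastward motion, which is one reason polar missions can launch over water from the west coast instead.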
In either case, when the launch vehicle reaches the correct point and altitude for the orbit the researchers want, another rocket must fire correctly to kick the spacecraft into its orbital path. Most satellites also have an additional propulsion system on board in case their orbit needs to be adjusted or changed. And even in a stable orbit, many satellites use small intermittent chemical rockets, high-pressure gas jets, or electric currents to maintain a particular attitude and orientation.
Although some of the very earliest satellites simply spun around as they made their orbit, researchers soon began looking at ways they could stabilize satellites. Stabilization was critical for providing good pictures of the Earth, for example, as well as for astronomical research, where the satellite needed to keep looking at one particular object for a length of time. One step further in complexity was to not only stop the spacecraft from spinning, but also to keep one particular side of it facing the Earth throughout its orbit. In many cases, scientists had to know where the satellite was pointing in order to evaluate the significance of what it was seeing or the data it was receiving.
Over the years, satellites have been designed with various gyros, de-spin devices, and pointing mechanisms to accomplish these ends. One device to stop a satellite from spinning, for example, was called a "yo-yo," because it employed the same technique as the children's toy. String-like devices would deploy in the opposite direction to the way the satellite was spinning, slowing it down. To keep a satellite pointed in one direction, engineers often employ Earth-tracking or star-tracking systems. Star trackers fix on particular stars and send commands to the control units to adjust the spacecraft if the position of those stars drifts relative to the satellite. Earth-tracking systems work the same way, except they use the curve of the Earth instead of stars as their reference point.2
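The star-tracker feedback loop described above can be caricatured in a few lines of code: measure the drift of the reference star, command a correction that opposes it, repeat. Everything here, including the simple proportional gain, is invented for illustration; real attitude control systems are far more sophisticated.

```python
def correction_command(measured_drift_deg, gain=0.5):
    """Command an attitude change opposing the measured drift of the reference star."""
    return -gain * measured_drift_deg

# Simulate a satellite whose pointing has drifted 1 degree off its target star.
drift = 1.0
for _ in range(10):               # ten control cycles
    drift += correction_command(drift)

print(drift)  # each cycle halves the remaining drift, pulling the telescope back on target
```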
Of course, each of these systems had to be developed, and none worked perfectly the first time. Indeed, even today the stabilizing mechanisms in satellites can fail, causing them to "tumble" and go out of commission. One such recent failure of a commercial communication satellite, for example, caused 90% of the hip-pocket "pagers" in the United States to stop working for more than 24 hours. While the satellite was not a NASA spacecraft, the incident underscores the challenges engineers still face in stabilizing and operating satellites in space.3
A satellite's instruments have to be controlled, as well. The individual instruments must be able to "talk" to each other and to the data and communication functions of the spacecraft. In some cases, individual instruments have to be turned on and off at various times. Other instruments have to be kept from ever pointing directly at the Sun. Instruments also have to be calibrated for accuracy, and that calibration information has to be linked with the actual data collected when it is sent back to the ground.
All of these operations have to be remotely controlled from Earth, which means the satellite has to be able to receive commands from ground stations. By the same token, the satellite has to have a way of recording the data it's collecting, putting it in a format and frequency that can be transmitted, and sending it back to stations on Earth. Because few of these transmitters or sensors existed on the market, the Goddard and industry engineers working on early satellites often had to develop the technology themselves. Goddard's achievements in the development of microchip technology for space applications,  for example, stemmed from its need to make spacecraft components as lightweight as possible.
To run all these systems, a satellite needs a way of generating power for the months to years it's in orbit. Most spacecraft rely on solar cells to recharge on-board batteries. But solar panels have their own complications, ranging from deployment of the arrays and the need to keep the collecting side of the panels pointed at the Sun to the basic problem of packing large panels into a tiny space aboard the satellite until it reaches orbit. The cost of making the panels flexible and lightweight is that they also tend to be somewhat fragile, and several satellites have had to cope with damaged solar panels once in orbit.
In addition, all of the satellite's systems have to work in the extremely harsh environment of space, where temperatures away from the Sun hover near absolute zero and temperatures facing the Sun climb as high as 1200 Kelvin. Thermal design, therefore, is a critical issue for both spacecraft and instruments. On one of the early Orbiting Geophysical Observatories (OGOs), for example, the spacecraft's attempts to compensate for the extreme temperature differences between the front and back sides of long booms extending from the main spacecraft caused serious problems with the control system. Engineers finally figured out that they needed to drill holes in the booms to allow some solar heat to reach the back side in order for the system to work.4
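The huge hot-side/cold-side split comes from simple radiative balance: with no air, a sunlit surface settles at the temperature where absorbed sunlight equals emitted thermal radiation (the Stefan-Boltzmann law). A hedged illustration; the surface properties below are made-up examples, and real spacecraft coatings vary widely:

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_FLUX = 1361.0   # sunlight intensity at Earth's distance, W/m^2

def equilibrium_temp_k(absorptivity, emissivity):
    """Temperature where absorbed sunlight balances radiated heat:
    alpha * S = epsilon * sigma * T^4, solved for T."""
    return (absorptivity * SOLAR_FLUX / (emissivity * SIGMA)) ** 0.25

# A surface that soaks up sunlight but radiates poorly runs extremely hot...
hot = equilibrium_temp_k(absorptivity=0.9, emissivity=0.05)
# ...while a white, highly emissive surface stays near room temperature or below.
cool = equilibrium_temp_k(absorptivity=0.2, emissivity=0.9)
print(round(hot), round(cool))
```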
Spacecraft also have to operate in a zero-gravity vacuum, which creates its own set of difficulties. For one thing, a vacuum creates problems with dissipating heat, because the heat can't be carried away by passing air. In addition, some parts of a satellite are soldered together, and it's not uncommon for small remnants of solder to break off as a soldered object is moved around. In a television set, that is not a big deal. But in a zero-gravity environment, those solder balls can float all around the spacecraft, causing a variety of problems. Even worse, they can cause a problem like a short-circuit and then float away again, so engineers trying to troubleshoot the system can't even find evidence of what caused the problem.
There are other difficulties, as well. High-voltage instrumentation has to be either turned off or protected during its passage through the electrically charged ionosphere and for the first few hours of its orbit, while the satellite "out-gasses" the molecules trapped from the Earth's atmosphere, so the high-voltage terminals don't arc and short-circuit. In astronomical satellites, a single fingerprint on a lens can render the instrument useless. The sensitivity of satellites to even the tiniest specks of dirt or grease is why they are built and tested in special "clean rooms." Goddard has several of these facilities, including one large enough to house several satellites the size of the Hubble Space Telescope.
Another inherent problem in building any aspect of a satellite, especially in the early days, was the tremendous constraint designers faced on power and weight. The key to success was lightweight construction, which meant systems were not as robust as they could be for an Earth-bound machine. Tape recorders tended to be very temperamental because of their many moving parts, and more than one satellite ended up having to transmit its data in real time because its data recorders failed. Power was also limited, even with solar panels and batteries on board, in part because satellites had to be so small and lightweight. But trying to force large amounts of data through the systems on little power created other problems, such as a tremendous amount of heat which then had to be dissipated somehow. Indeed, engineers who worked at Goddard in the 1960s say the challenge of space came down to batteries and tape recorders, and reliability was achieved only through redundancy. Because systems were prone to difficulties or failure, engineers and scientists always tried to include back-up systems in a satellite's design.5
All of this is to say that designing and operating a satellite is an extremely difficult task, even when everything works well. So the fact that Goddard has successfully launched and operated over 200 satellites to date is an amazingly impressive feat.
The difficulties involved in building satellites have also meant that it has always been a struggle to keep a developmental satellite project within its initial budget. In the early days, a "good" project, according to former director John Townsend, only overran its budget by 30% or so. A "bad" satellite project could overrun by as much as 200 to 400%.6
In part, these overruns were a product of the conflicting pressures inherent in any space project. Managers have to balance the demands of schedule, budget, and reliability, and all three are difficult to attain at once. A project can be kept on schedule and budget, but reliability may suffer. If the goal is to make a spacecraft absolutely reliable, it may take additional money or take longer than scheduled to test and complete. And if a project absolutely must launch on a particular date, its cost may go up or its reliability may go down. In forty years of space exploration, this triad of opposing pressures (cost, schedule, and risk) has never been completely resolved. Indeed, the acceptance of the fact that it cannot be resolved is a recognition of the nature of the enterprise. Each project simply falls in a slightly different place within the triangle.
Another reason it was difficult to keep scientific satellites within a predetermined budget is that in a research and development field, scientists and engineers cannot predict what obstacles or difficulties they are going to encounter. And with the difficulties inherent in designing instruments and spacecraft, the opportunities for problems were almost unlimited. In addition, the scientists often changed their requirements or developed "better" ways to make an instrument more effective or to get more instruments into a spacecraft.
There was, in fact, a constant but healthy tension between the scientists, who would have put every bell and whistle on a spacecraft to get as much data as possible, and the engineers, who were more interested in making sure the instruments and spacecraft worked correctly. This tension was formalized in Goddard's system of assigning both a project scientist and a project manager to each satellite project. The project scientist was responsible for the science requirements and the data the experiments would gather, and the project manager (usually an engineer) was responsible for making sure the overall system worked, as well as managing the logistics, manpower, budget and schedule for the project.
Even then, there was never quite a consensus on how the logistics and problems of a project should be solved. There were two schools of thought, for example, regarding spacecraft schedules. Dr. Harry Goett, Goddard's first director, was a firm advocate for giving projects as much time as they needed to get the satellite right.
"We've waited two thousand years to get this data," he would argue to Headquarters. "We can wait another six months to get it right." On the other hand, some argued that the delays were caused by scientists wanting to constantly upgrade equipment instead of making do with what could be done in the time and money allotted. The end result was that there was always a pull between the constraints of budget and time and the risk of pushing research projects too quickly and having them fail in orbit. Space projects are inherently expensive, and the most expensive factor is the work force attached to them. If a project is delayed six months because of a late component, the project team still has to be kept together, even though its time is not being well spent, and so the cost skyrockets above budget. Finding a successful balance is tricky, and not every project was able to do it.7
Indeed, if something goes wrong with a satellite, fixing it is an extremely daunting challenge, because most satellites are unreachable except by remote command. Goddard engineers learned early to incorporate something called a "safe hold mode" into their satellites so that, in the event of a problem, the non-essential systems in the satellite could be "frozen" and its solar panels turned toward the Sun to keep power flowing to the spacecraft until the problem could be solved. This technique saved many satellites that otherwise would have lost power before corrective commands could be sent up to the spacecraft.8
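The safe-hold idea reduces to a simple rule: on a fault, shed every non-essential load and protect the power supply until the ground can intervene. A toy sketch of that logic (all names and states are invented for illustration, not drawn from any actual flight software):

```python
class Satellite:
    """Minimal caricature of a satellite with a safe hold mode."""

    def __init__(self):
        self.mode = "NORMAL"
        self.instruments_on = True
        self.panels_sun_pointed = False

    def fault_detected(self):
        # Freeze non-essential systems and keep the batteries charging.
        self.mode = "SAFE_HOLD"
        self.instruments_on = False
        self.panels_sun_pointed = True

    def ground_resume(self):
        # Only an explicit ground command returns the satellite to normal.
        self.mode = "NORMAL"
        self.instruments_on = True

sat = Satellite()
sat.fault_detected()
print(sat.mode)  # the spacecraft rides out the problem on solar power
```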
Not every problem would cause the complete loss of a satellite, but the consequences of any failure or problem were severe, because it was difficult to fix anything in space. As a result, Goddard's managers always put a tremendous emphasis on rigorous testing and evaluation of components and spacecraft before they were launched.
Test and Evaluation
After spending several years on a single project, no scientist or engineer wanted to lose either a key instrument or an entire satellite because of a faulty battery, control system, or connection. As a result, developing thorough test and evaluation facilities and procedures was a high priority from the earliest days of Goddard.
The pressure to get satellites into space, and therefore to develop test facilities, was very intense in the early days of the space race. The first test buildings at Goddard were built in a mere 18 months, and test engineers began working in the buildings before the structures were even completed. The engineers simply moved in section by section, right behind the construction crews.
The basic idea behind the test facilities at Goddard was to simulate the conditions of launch and space as closely as possible. The satellite and its components would be put in a vibration machine to simulate the rough and tumble conditions of a launch and put in a vacuum chamber to test the operation of its systems and instruments in a space-like environment. Spin-stabilized satellites were spin-balanced, like an automobile tire. A "launch phase simulator," which was built around a large centrifuge machine, was sometimes used to simulate the vibration, acceleration, and decreasing air pressure a satellite would experience during its launch into space. Another test unit could vary the magnetic field around the component or spacecraft to test the operation of instruments designed to measure magnetic fields and their influences.
Goddard's test engineers even went so far as to create an artificial Sun to test satellites in a thermal vacuum chamber. Based on solar measurements taken by a couple of early satellites, they assembled two megawatts of light (the equivalent of twenty thousand 100-watt light bulbs), focused through a series of reflectors into a concentrated beam. That "Sun" was then placed at the top of one of two forty-by-sixty foot vacuum chambers housed in the test facilities at Goddard. These "space chambers" were so large that they were built first, and the rest of the building was constructed around them.
The possibility of a launch failure, especially in the early days, was great enough that Goddard developed a policy of building a prototype and two flight models of any given satellite. If the first flight model or the launch vehicle carrying it failed, the team could quickly launch the back-up model. In the case of the second Orbiting Astronomical Observatory (OAO II), the back-up satellite was actually a prototype that had been on display at the 1964 World's Fair in New York City. Engineers brought it back to NASA after the first OAO failed, refurbished it with new experiments, and launched it successfully, which speaks to the amazing quality and reliability of even the satellite prototypes Goddard built. Indeed, Goddard was so concerned about building spacecraft correctly that it always had a separate division or directorate dedicated to "systems reliability." If Goddard's satellites had an impressive success rate, it was because reliability and quality assurance were always such a high priority at the Center. In fact, Goddard's satellites were so reliable that eventually not only the spares, but also the prototype models of a spacecraft, upgraded for flight, began to be launched into space as a matter of course.9
In either case, all components and spacecraft were thoroughly and rigorously tested before launch. Scientists were not always comfortable with this approach, preferring that a non-flyable prototype model be tested and shaken and the flight model left alone. But the test engineers at Goddard were insistent that any individual satellite could have flaws, and that it was better to find the problems on the ground than in space. To illustrate this point, the test engineers often would treat a satellite in a space chamber as if it were really in space. If a researcher said, "Oh, I know what the problem is. Take it out and let me fix it," the test personnel would shake their heads, replying, "It's in space. You can't touch it. Now what do you do?" It was their way of helping to develop the necessary mind-set for space science along with the hardware necessary to accomplish the job.10
Yet the meticulous and rigorous testing paid off. Between 1959 and 1976, Goddard had a 100% success rate for the 31 contractor and Goddard-built satellites it tested in its own facilities.11 The Explorer satellites, many of which were built in-house at Goddard, had a particularly impressive success rate. In the 1960s, every Explorer that was properly placed in orbit by its launch vehicle achieved its mission. As one of NASA's early managers summarized, "Explorer satellites were simply expected to succeed, and they did."12
Of course, there were problems and there were failures. Usually, the only things lost were weekends at home, sleep, pieces of hardware, and data. But in 1964, three men were killed and several others seriously injured in an accident involving an Orbiting Solar Observatory satellite. The satellite was completing some final pre-launch testing in a hangar at Cape Canaveral in Florida when the accident occurred. The OSO had just been mated to the third stage of its launch rocket when a spark of static electricity caused the rocket to fire. It was a sobering reminder that even if the spacecraft contained no people and was tested as thoroughly as possible, this was still a potentially dangerous business.13
Of course, the spacecraft itself is only part of the equation. Something has to get the instruments into space (or, in some cases, the upper atmosphere). The research conducted at Goddard over the years has relied on a number of vehicles to do that, ranging from aircraft, balloons, and small sounding rockets to large intercontinental ballistic missile-sized launch vehicles and the Space Shuttle.
Sounding Rockets, Balloons and Aircraft
Space science research began right after World War II with what became known as "sounding rockets." Sounding rockets were so named because as they passed up and back through the atmosphere, they could make measurements at various altitudes in the same way as sounding equipment tested various depths in the ocean. They couldn't achieve orbit, but they could reach high altitudes in the atmosphere or space for short periods of time.
Even after satellites became an option for scientists, they continued to use sounding rockets for various types of research. For one thing, sounding rockets can make measurements in a regime that is difficult to access with either aircraft or orbiting spacecraft. For another, the smaller, less powerful sounding rockets were, and still are, a much cheaper way of performing some research. In a sense, sounding rockets were "better, faster, cheaper" thirty years before NASA adopted the phrase as an organizational philosophy.
As a result, sounding rockets can provide a testbed for new measurement approaches or instruments. An experiment is sometimes initially put on a sounding rocket. If it turns up something interesting, a satellite project is then planned to gather further data. By the same token, many instruments designed for satellites are first tested on less-expensive sounding rockets. Experiment space on satellite projects is also extremely limited, leading many scientists to use sounding rockets as a way to at least get some data in a timely manner. At the same time, sounding rockets serve as a wonderful training ground, giving new researchers an opportunity to conduct hands-on work and get familiar with the requirements and approach necessary for working in space.
Another advantage of sounding rockets is that they can take in situ measurements in regions of the atmosphere that orbital satellites only pass through on their way to orbit. Consequently, sounding rockets can provide good profiles of density, moisture, temperature, or other parameters throughout the different levels of the atmosphere. In some cases, the payload of a sounding rocket flight can be recovered, although that is tougher if it is launched over the ocean. In one instance, the military helicopter pilots who flew out to recover a Wallops Island payload from the Atlantic Ocean returned and told the scientists that all they had found was a big cylinder with something attached to it floating in the water. Of course, the cylinder was the rocket payload, but it had sunk by the time the crew was sent out a second time to try to recover it.14
Sounding rockets also can be launched from almost anywhere. When a supernova was sighted in 1987, for example, scientists wanted immediate gamma ray data from the high-energy explosion. But NASA's Shuttle launches had been halted because of the Challenger accident, and there were no suitable satellites ready to launch on any other vehicles. So a team from Goddard's Wallops Island facility travelled to Australia and launched two sounding rockets with gamma ray detectors to investigate the phenomenon.15
One of the reasons sounding rockets can be so flexible is that their range varies greatly. From the early Aerobees and Nike-Cajun rockets, the stable of solid-propellant rockets has grown and expanded. Researchers at Wallops would sometimes take different surplus rocket stages and put them together into new and different combinations, leading to rockets such as the Taurus-Nike-Tomahawk or the Nike-Orion.16 Today, although there are very small meteorological rockets that stay below 100 miles, most sounding rocket launches can reach 180-240 miles in altitude. A lightly loaded Black Brant 12, however, can climb as high as 800 miles above the Earth.17
Sometimes, however, scientists need endurance rather than altitude. In those cases, a scientific balloon or aircraft can provide a better testbed than a rocket. Scientific balloons are made of polyethylene as thin as a sandwich bag and are filled with inert helium. Although launching them can be dicey, as any wind can rip the balloon bag, they offer scientists the opportunity to take instruments up as high as 26 miles for 12 to 24 hours. Of course, the trajectory is determined by the wind, but the balloons can be launched from almost anywhere. Goddard took over management of scientific balloon launches from the National Science Foundation in 1982 and now launches approximately 35 balloons a year.
Aircraft are much more limited in altitude than either rockets or balloons, but they offer extremely quick turn-around times and are an excellent testbed for many different types of instruments and sensors. At Goddard's Wallops Flight Facility, five different types of aircraft are used to test new lasers, computers, and other instruments for Earth and space science research. In addition, the aircraft can be used for conducting certain types of Earth science research, including the study of ice formations, plant life, and in situ measurements after natural events such as volcanic eruptions. 18
Vanguards and Deltas
Balloons, aircraft, and small atmospheric rockets were all available before satellites came into being. The challenge that stood in the way of space flight was getting a rocket that had enough power to get a payload high enough and fast enough to achieve orbit. Although NASA's Lewis Research Center focused on propulsion and the Marshall Space Flight Center would become known for building the large Saturn launch rocket, Goddard was given responsibility for developing and managing the rockets NASA planned to use to launch Goddard's scientific satellites.
When NASA opened its doors, six out of the seven rockets available for its research came from the military. The seventh was the developmental Vanguard rocket, which was transferred from its original home at the Naval Research Laboratory to Goddard as soon as the new space agency was formed. The Vanguard itself did not prove to be a highly successful rocket. Indeed, the spectacular and humiliating explosion of an early Vanguard test vehicle in December 1957 two seconds after launch was etched into the nation's memory for years to come. A Vanguard successfully launched the Vanguard I satellite into space in March of 1958, but eight out of eleven subsequent Vanguard launch attempts failed.19
The biggest problem with the Vanguard seemed to be the first of its three stages. So researchers at Goddard and the McDonnell Aircraft Corporation, which built the Delta rocket, decided to try substituting the first stage of a Douglas Aircraft Company Thor missile that was being used successfully by the Air Force. The hybrid rocket was designated the Thor-Delta, a name later shortened to simply "Delta."20
The first successful launch of a Delta rocket took place in August 1960. The original Delta's payload was limited to a few hundred pounds for a low-Earth orbiting satellite and around fifty pounds for a geosynchronous satellite, but the Delta team kept trying to improve the rocket's capability. They added small solid rocket boosters around the base of the vehicle, lengthened the first and second stages, gave it bigger third-stage rocket motors, and added space for more propellant. Fifteen years later, that capacity had increased to 2,400 pounds, and today a Delta can put up about 4,000 pounds.21
The Delta has been an extremely successful launch vehicle, with very few failures in its 30-year history. But the Delta still almost became extinct in the 1980s, when a new, reusable launch vehicle appeared that was touted as the all-purpose space transportation system of the future: a vehicle more commonly known as the Space Shuttle.
The Space Shuttle
The 1986 Challenger accident may have changed NASA's plans to shift to a single-vehicle launch fleet, but the Space Shuttle still carries a fair number of scientific satellites and instruments into space. In addition to large satellites like the Hubble Space Telescope that it releases into orbit, the Shuttle carries several other types of scientific payloads.
The "Spartan" class of satellite is designed to be released overboard at the beginning of a Shuttle mission. Spartan satellites orbit freely for several days before being retrieved and brought back to Earth at the end of the mission. Smaller "Hitchhiker" payloads, on the other hand, stay attached to the Shuttle bay, allowing them to use the Shuttle's systems for power, data, or communications functions.
Even smaller payloads called "Get Away Specials" (GAS) are packaged into small trash can-size containers in unused corners of the Shuttle's service bay. GAS payloads are self-contained experiments that are not connected to the Shuttle's electrical systems. The idea behind the GAS program was to offer an opportunity for extremely low-cost space experiments. Some of the GAS payloads cost as little as $3,000, making them a convenient way to test instruments in space and making space science available to college, high school and even elementary school students. As of 1997, a total of 138 GAS payloads had been taken into space by the Shuttle.22
Tracking, Data and Communications
The final component of an operational spacecraft system, beyond a launch vehicle and an operating satellite, is a way of getting commands up to the spacecraft and data back down to researchers on the ground. Researchers at the Naval Research Lab realized this even as they began planning for a possible satellite launch in conjunction with the 1957-58 International Geophysical Year (IGY).23 They developed a "Proposal for a Minimum Trackable Satellite (Minitrack)" in April 1955 that suggested a series of ground stations to spot the satellite in orbit. Because signal strength from the satellite would be weak and launch tracking data might not be entirely reliable, the "Minitrack" network, as it became known, consisted primarily of a "detection fence" of closely spaced stations along the 75th meridian. This would help ensure that at least one station would "spot" the satellite as it popped over the horizon.
The Minitrack network became operational in October 1957 with nine original stations, and was put under the control of Goddard in 1959. The network eventually grew to about 11 stations and served as the main tracking network for unmanned satellites until 1962.24
The Mercury Spaceflight Network
The onset of the manned space flight program, however, created much more complicated tracking and communication needs. Scientific satellites were in range of the Minitrack stations for only a few minutes on each orbit, but a manned spacecraft had to be tracked continuously and had to have two-way communications available as well. In 1961, Goddard tracking and data engineers were also given responsibility for designing and managing this more complex network, designated the Mercury Space Flight Network (MSFN). Goddard's efforts in designing and maintaining this world-wide system created another invaluable center of expertise at the Center and were critical to the success of not only the Mercury missions, but all the NASA crewed space endeavors that have followed.
The Mercury network consisted of 17 ground stations in locations around the world, from Cape Canaveral, Florida, to Woomera, Australia. To cover gaps between the continents, two ships were also outfitted with tracking and communications equipment and stationed in the Indian and Atlantic oceans. Even then, there were still times during the Mercury flights when the astronauts were out of communication range, although for much shorter periods of time than any of the scientific satellites.25
There were a number of difficulties in getting the MSFN operational. One of the biggest challenges stemmed from the need to work with so many different countries in order to get the stations built and staffed. To get permission to build the station in Guaymas, Mexico, for example, President Eisenhower finally sent his brother to personally ask Mexico's president for assistance. Even then, the Guaymas station sometimes had to be guarded with troops during missions to keep protesting mobs at a distance.26 Most of the time, however, cooperation was easy to get. Many countries even donated services, time, and labor. This was the heyday of NASA and the dawn of an exciting new adventure. Simply put, people wanted to be a part of it.
All of the international stations were networked through a control center at Goddard, which then relayed the information to and from Mission Control at Cape Canaveral in Florida. Even this was a dicey operation at first, because the computers and communications systems of the early 1960s were less than reliable. So, as with the early satellites, reliability was achieved through redundancy. If there were six different voice channels going between Goddard and any given station, the system managers would try to use different cables or lines for each one so that if any one line failed, the others would still work. The system was still questionable enough, however, that flight controllers were flown to each station around the world for every Mercury flight. That way, even if the network failed, there would be controllers in contact with the flight at almost all times.
The manned space flight program was pushing the limits of technology in every area, and the Goddard and NASA personnel working on the program were well aware of how marginal their systems were. During the Mercury launches, for example, phone communications were still not reliable between Goddard and the Bermuda tracking station, even though the Bermuda station provided critical information for mission abort decisions. Christopher Kraft, flight director of the manned missions during the 1960s, recalled that "during the launch of an Atlas rocket, we had somewhere between thirty seconds and two minutes after main engine cutoff to decide whether to continue a mission or to abort. Initially, there were very few people who believed that this would be possible."27 The tension of these Mercury launches was especially great because the Atlas rocket used for the orbital Mercury flights was not a highly reliable rocket at the time.
With the advent of the Gemini flights, several things changed. First, the Johnson Space Center in Houston, Texas was completed, and mission control was moved from Cape Canaveral to Building 30 at Johnson. Communications technology also had improved enough that controllers no longer had to be dispatched around the world. Instead, a secondary mission control center was set up at Goddard with systems completely redundant to those in Houston. As it was, the Goddard center was the conduit for data and communications between Mission Control, the tracking stations, and the spacecraft. But if the Houston system failed, the control center facilities at Goddard would allow the Center to pick up coverage of the mission instantaneously.28
Even with improved technology, the manned missions were always stressful endeavors. Managers at the control center at Goddard had to watch the trajectory data of the rocket, the maintenance panels of the network's computer system, and the network connections themselves, all at once. Not surprisingly, tension in the control center at Goddard during these flights was every bit as high as at Mission Control in Houston.
Goddard's world-wide network proved its worth on every mission. But during the Gemini 8 flight it proved critical, when the spacecraft carrying astronauts Neil Armstrong and David Scott spun out of control during a practice docking maneuver. The rest of the mission was cancelled, and the network engineers had to find the spacecraft again, recalculate its orbit and re-entry trajectories, and then move a recovery ship to an alternate landing location to rescue the astronauts, all in a matter of hours.29
Yet the Gemini missions were still simpler than the next task facing the manned network - keeping track of a spacecraft all the way to the Moon and back. In addition to the ground stations already in place, Goddard commissioned the modification of two huge supertankers into floating behemoths capable of carrying 30-foot parabolic antennas, increasing NASA's tracking fleet to a total of five ships. Nine KC-135 aircraft were also modified with special radar noses and deployed to fill in the gaps between the ships and the ground stations.
As one Goddard manager put it, "We had the whole world cranked up in these missions." It was true. And the effort was as much a matter of national pride for NASA's partners as it was for the space agency itself. Many times services and labor were donated to the cause, which was fortunate, because the cost of such a worldwide system would have been prohibitive. As it was, the "phone bill" for NASA's system totaled somewhere around $50 million a year.
The personnel at the international tracking stations were deeply committed to the success of the missions, sometimes going to great lengths to ensure they didn't let the network down. On a test flight of the Saturn vehicle that would launch the Apollo spacecraft, for example, the communication lines to the remote Carnarvon station in western Australia broke down. So, using frontier resourcefulness, the Australians passed launch information to and from Carnarvon with the help of ranchers at "stations" spread over more than one thousand miles of the Australian outback, using the top wire of the ranch fences as a makeshift telegraph line.30
That same level of dedication was present at Goddard's control center throughout its history. It is one of the reasons that although there were glitches in the system, the Manned Space Flight Network31 never had a serious problem that affected the outcome of any of the manned missions.
The Satellite Tracking And Data Acquisition Network
At the same time as the manned missions were being conducted, the unmanned satellite program was growing by leaps and bounds, creating new tracking and data problems for researchers, as well. The bigger satellites, including the "observatory" class spacecraft like the Orbiting Solar Observatory (OSO), needed more capable ground equipment than the Minitrack network had.
As a result, Goddard developed a new world-wide network of stations known as the Satellite Tracking And Data Acquisition Network (STADAN), with as many as 21 different sites spread over every continent in the world except for mainland Eurasia. The STADAN stations had improved 40-foot and 85-foot parabolic antennas so they could handle the larger amounts of data the more advanced satellites were generating. The Orbiting Geophysical Observatory (OGO) launched in 1964, for example, was downloading several full-length books' worth of data on every pass over a ground station.32
The STADAN network also had its share of interesting events due to the unique politics of various locations around the world. The South African station was eventually closed because of controversy over the apartheid practices of the country, and the NASA personnel at the station in Tananarive, Madagascar had to be evacuated in the middle of the night after a tense stand-off with the country's dictator.33
The stations also provided a unique opportunity for the nations involved, however, because NASA made an effort to train and employ local workers at all the network sites. These countries then had the expertise and equipment to provide services to commercial satellite companies and networks. They could also run their own communication networks rather than having to rely on foreign personnel.
Spaceflight Tracking and Data Network
As the Apollo program came to a close, the need for such an extensive, separate manned space flight network decreased. So between 1969 and 1973, Goddard gradually consolidated the two separate networks - the MSFN and the STADAN - into a single network of ground stations known as the Spaceflight Tracking and Data Network, or STDN. By 1973, the STDN system incorporated 20 different stations around the world, including one ocean-going ship.
In 1971, the two Goddard directorates that had been managing the separate tracking networks were also reorganized to reflect the changing mission requirements. The new Mission and Data Operations Directorate managed the data processing activities and the computer-based tracking projections of the network, and the Networks Directorate oversaw the internal NASA communications network (NASCOM) and coordinated the operations of the various STDN stations.34 Yet even more dramatic changes were coming down the pike.
Tracking and Data Relay Satellite System
The Apollo missions could be well serviced by ground stations because, aside from the beginning and end of each mission, the spacecraft was a fair distance away from Earth and, therefore, in sight of at least one of the widely spaced ground sites. The Space Shuttle, on the other hand, was going to remain in low-Earth orbit. Keeping in touch with it would be a more difficult task.
The solution, however, was already being tested in space. Goddard managed the development of a series of Applications Technology Satellites (ATS) designed to test advanced meteorological and communications satellite technology. The geosynchronous ATS spacecraft were in a good position to track and communicate with anything in a near-Earth orbit because they were positioned some 23,000 miles above the planet.
The ATS satellites were not part of Goddard's official tracking and data network. But the NASA networks had never had firm lines of demarcation. For example, although the Deep Space Network (DSN) that tracked planetary probes and distant missions was a separate entity from the MSFN, its antennas were used to help track the Apollo spacecraft. So although the ATS spacecraft were not officially part of the MSFN or STDN systems, they were still used to help provide communications for the Earth-orbiting Skylab missions in 1973.35
Goddard's ATS research in the 1970s led NASA officials to look at using geosynchronous satellites as a means of tracking not only the Space Shuttle, but all Earth-orbiting satellites of the future. The result was the Tracking and Data Relay Satellite System (TDRSS).
The TDRSS plans called for three geosynchronous satellites - one positioned over the western hemisphere, one over the eastern hemisphere, and a "spare" positioned in between the first two. This would allow the system to provide 100% coverage for satellites orbiting at altitudes between 745 and 3,100 miles, and 80% coverage for satellites below that range. Satellites farther away than that would be tracked by the Deep Space Network. As a result, Goddard's extensive STDN ground network would no longer be needed.
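Those coverage figures follow from line-of-sight geometry: a relay in geosynchronous orbit can see a low-orbiting satellite except when the Earth itself blocks the path, and a higher orbit spends less time hidden. The sketch below is a simplified two-dimensional illustration of that idea, not the analysis NASA actually performed; the relay spacing, the sampling resolution, and the neglect of orbital inclination and antenna constraints are all assumptions made for the example.

```python
import math

R_EARTH = 6371.0   # mean Earth radius, km
R_GEO = 42164.0    # geosynchronous orbit radius, km

def visible(user_angle, relay_angle, r_user):
    """True if the straight line from relay to user misses the Earth (2-D model)."""
    ux, uy = r_user * math.cos(user_angle), r_user * math.sin(user_angle)
    rx, ry = R_GEO * math.cos(relay_angle), R_GEO * math.sin(relay_angle)
    dx, dy = ux - rx, uy - ry
    # closest approach of the relay->user segment to Earth's center
    t = -(rx * dx + ry * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(rx + t * dx, ry + t * dy) > R_EARTH

def coverage(alt_km, relay_angles, samples=3600):
    """Fraction of a circular orbit visible to at least one relay."""
    r_user = R_EARTH + alt_km
    seen = sum(
        any(visible(2 * math.pi * i / samples, ra, r_user) for ra in relay_angles)
        for i in range(samples)
    )
    return seen / samples

# two relays separated by 130 degrees (an assumed, illustrative spacing)
relays = [0.0, math.radians(130.0)]
for alt in (300, 1200, 5000):  # altitudes in km
    print(f"{alt:5d} km: {coverage(alt, relays):.1%} of orbit positions visible")
```

Even this toy model reproduces the qualitative result in the text: coverage rises with altitude, because a higher orbit spends less time in the Earth's "shadow" as seen from each relay.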
There were numerous problems in getting the TDRSS network operational, most of which involved the financial and contracting aspects of the project. There were technical setbacks, too. A rocket booster malfunctioned on the first satellite after it was launched from the Shuttle and, although NASA was able to use the satellite's small on-board jets to nudge it into its correct orbit, the spacecraft was never fully effective. The second TDRS spacecraft was then destroyed in the Challenger accident.
There were numerous difficulties with the TDRSS ground system as well, especially with a computerized automatic scheduler that was supposed to coordinate time on the TDRSS satellites for the 20-plus scientific satellites the system might be tracking at any given time.
In addition, the original goal for the Space Shuttle was to be able to launch a new mission every two weeks, and the TDRSS ground stations at Goddard and White Sands would have to be able to support that kind of demanding launch schedule. That was a daunting goal at a time when it was sometimes difficult to keep the ground system up and running for 24 hours at a time. In the early days, Goddard had two crews working on the system simultaneously - one that was trying to operate the system and a second that was trying to troubleshoot its problems at the same time. After two years of long hours, seven-day weeks, and much lost sleep, the staff was just getting the scheduler problems resolved and the system up to the two-mission-per-month goal when the Challenger exploded. It was a devastating blow to the staff, who realized the goal they had worked so hard for would never be relevant again. The Shuttle began flying again in 1988, but the program has never attained the frequency of flights its designers originally envisioned.36
A Closing Circle
Interestingly enough, recent changes in technology have led NASA to return at least part of its satellite tracking and data tasks to a ground-based system. The tape recorders on satellites were once one of the weakest links of the system, prone to failure, but the advent of solid-state recorders, a technology developed by engineers at Goddard, has changed that. With more reliable on-board data storage, the need to stay in constant touch with some satellites is decreasing. Ground station technology has also improved, making ground system terminals much less expensive to operate.
In addition, using TDRSS for downlinking data can be expensive. A satellite does not need a very big TDRSS antenna to receive command and control orders from ground operators. But sending gigabytes of data back down to Earth requires a much larger, more powerful, and more complex antenna system. Because NASA is trying to shrink the size and cost of satellites, researchers have begun looking at other options.
NASA's new Earth Science Enterprise program, for example, will incorporate several large Earth-oriented satellites generating approximately a terabyte of data per day. A terabyte is a staggeringly large number, equivalent to 10^12 bytes, or a million megabytes. In practical terms, this means that in four months, the program's Landsat 7 and EOS AM-1 satellites will have doubled the amount of information collected on the Earth from satellites since the beginning of the space program. The first of these satellites, EOS AM-1, will use the TDRSS satellites for both commands and data transmission. But the rest of the satellites in the 15-year program will rely on TDRSS only to uplink commands to the spacecraft. The data will be downlinked to one of five possible ground stations.
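The unit arithmetic behind those volumes is easy to check. The short sketch below assumes decimal units and treats "four months" as 120 days; both are assumptions made for the example:

```python
BYTES_PER_TB = 10**12  # decimal terabyte
BYTES_PER_MB = 10**6   # decimal megabyte

daily_bytes = 1 * BYTES_PER_TB                    # ~1 TB of data per day
print(daily_bytes // BYTES_PER_MB, "MB per day")  # "a million megabytes"

days = 120                                        # "four months" at 30 days each
total_tb = daily_bytes * days // BYTES_PER_TB
print(total_tb, "TB collected in four months")
```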
Because the Earth Science Enterprise spacecraft will be primarily polar-orbiting satellites, the two main ground stations will be in Fairbanks, Alaska and Svalbard, Norway. A ground station in McMurdo, Antarctica will serve as a back-up facility. Two existing ground stations in the United States - the EROS data center in Sioux Falls, South Dakota and a research center at Goddard's Wallops Island facility - are being upgraded so they can provide back-up support, as well.
The TDRSS satellites also will continue to support the Shuttle missions and a number of large NASA satellites, including the Hubble Space Telescope and the Compton Gamma Ray Observatory. But the advances in technology that are enabling more satellites to rely on ground stations have also changed one of the fundamental issues of satellite research. Once upon a time, the problem was how to get enough data back to Earth. With the Earth Science Enterprise, the problem isn't getting enough data - it's finding a way not to drown in it.
Transmitting and translating the data received from satellites has always been a tricky problem. In the earliest days, the "data" consisted of audio "tones" sent from the passing satellites. If a sensor found what it was designed to identify, it would emit a different tone than if the substance or force was not present. Satellite systems for transmitting data improved dramatically over the years, but scientific data has always required some interpretation.
Goddard offered scientists three different levels of data from their satellite experiments. Level 0 was fundamentally raw data, with only some spacecraft attitude and orbit information added. Level 1 data included instrument calibration information, and Level 2 data was generally a customized product that processed the information in a particular way for a particular scientist.37
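The three levels can be pictured as stages in a small processing pipeline. The sketch below is purely illustrative: the field names, the linear calibration model, and the thresholding "custom product" are invented for the example and are not Goddard's actual data formats.

```python
from dataclasses import dataclass

@dataclass
class Level0:
    """Raw instrument telemetry, with attitude and orbit information added."""
    raw_counts: list
    attitude: tuple
    orbit: tuple

@dataclass
class Level1:
    """Counts converted to physical units via instrument calibration."""
    calibrated: list

def to_level1(l0, gain, offset):
    # hypothetical linear calibration: physical value = gain * counts + offset
    return Level1([gain * c + offset for c in l0.raw_counts])

def to_level2(l1, threshold):
    # a "customized product" for one scientist: keep only readings above a threshold
    return [v for v in l1.calibrated if v > threshold]

l0 = Level0(raw_counts=[10, 250, 40], attitude=(0.0, 0.0, 0.0), orbit=(7000.0, 0.001))
l1 = to_level1(l0, gain=0.5, offset=1.0)
print(to_level2(l1, threshold=20.0))  # -> [126.0, 21.0]
```

Each stage adds interpretation: Level 0 is close to what the spacecraft transmitted, Level 1 is scientifically meaningful, and Level 2 is shaped to one investigator's question.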
At the beginning, the scientist or scientists who designed the experiments on the satellites got exclusive use of the data until they published their results. The system made a certain amount of sense, because space was a very risky research field. A scientist could devote years to developing an instrument only to have the launch vehicle carrying it explode on the launch pad, so there was pretty much general agreement that they deserved first crack at the results of a successful satellite instrument. When the scientist was "done" with his or her data, the results were cataloged in the National Space Science Data Center (NSSDC) at Goddard and became available to anyone.
Yet in both the space and Earth science fields, individual investigators sometimes dragged their feet in making the data more generally available. In addition, the principal investigators often didn't remove their particular research modifications, or "signatures," from the results, so the data was virtually useless to anyone else.
As a result of concerns on both of these points, Goddard and NASA began looking at ways to improve the system. Space physics research results are difficult to distribute in a generic fashion, but several years ago, Goddard began to archive its astronomy data in wavelength-specific archive centers around the country. Goddard's Space Science Directorate is in charge of the High Energy Astrophysics Science Archival Research Center (HEASARC), which catalogs X-ray and gamma ray astronomy data for users in the general science community. Results of research in other wavelengths are cataloged in archive centers at other NASA Centers and universities.
Earth science data also began to be catalogued in topic-specific Distributed Active Archive Centers (DAAC) at Goddard and other research centers around the country. Goddard, for example, manages any data on climate, meteorology, or ocean biology. The University of Colorado archives data related to polar oceans and ice.
The philosophy of allowing principal investigators to "own" data has also changed. In space science, the amount of time an investigator is given sole access to data has shortened considerably. With Earth science research, results are now considered essentially public property almost as soon as the data can be verified and interpreted.
With the advent of the Earth Science Enterprise program in the late 1990s, Goddard is entering a new generation of data processing and dissemination. To handle the large quantity of data coming in and make it accessible to the public as quickly as possible, NASA has developed the Earth Observing System Data and Information System (EOSDIS), managed and located at Goddard. EOSDIS processes the data from the satellites and distributes it in various levels of complexity to the different DAACs, which then make it available not only via traditional networks, but also via the Internet. The goal is to make the science data available and usable to everyone from high school science students to sophisticated research scientists.38
Although few people give it much detailed thought, designing, building, testing, launching and operating satellites, as well as processing and distributing the information they gather, is a very complex and difficult task. There are a million ways something can go wrong and, unlike ground-based research or activities, most problems occur hundreds or thousands of miles away from the engineers who need to fix them.
Tracking and communicating with satellites has always required the cooperation of nations around the world. That is even more true today, as more and more satellite projects are developed as cooperative efforts between two or more countries. Every satellite that passes overhead in the night sky is being "flown" and watched over somewhere in the world. Somewhere, someone is telling the satellite which direction to turn next, which instrument to turn on, or perhaps trying to figure out why the power has suddenly dropped in its on-board electrical system.
The efforts of NASA's Mission Control personnel in Houston, Texas are perhaps better known than the efforts of the technicians at the STDN station in Fairbanks, Alaska, the designers of the Cosmic Background Explorer's infrared instruments, or the test engineers who made sure the Explorer satellites worked. But the efforts of these professionals are every bit as important. It takes an army to get a satellite into space - an army of scientists, engineers, technicians and support personnel from industry, universities, NASA, and foreign countries, working together for a common cause.
Space holds fascinating secrets, wonderful mysteries, and the opportunity to look back on ourselves and better understand the planet we call home. But it is a demanding taskmaster, unforgiving of mistakes or neglect. If we have discovered useful, important, or amazing things in our journeys off the planet, it is because there were people like those at Goddard who were willing to attend to the less-glorious but all-critical details - the spacecraft, the launch vehicles, the ground stations, and the information systems - to bring those discoveries within our reach.