The sailing ship evolved during the early years of the Renaissance. At first men were able to navigate rivers and streams with canoes and small craft; later they were able to explore the oceans. Their ships had to be large enough to carry crews for manning sails and covering watches, and supplies to sustain the men during voyages that lasted several months. Staying afloat for long periods of time was not enough; these ships had to be rugged to withstand the gale winds that were sure to come with long exposures at sea and capable of staying underway around the clock. The propulsion technologies for these craft evolved from men at oars, to sails that allowed only downwind motion, and then to sails that allowed tacking in chosen directions. This evolution took place over many hundreds of years, with the greatest advances occurring during the periods when men were motivated to explore Earth.
The ability to withstand the rigors of the seas and to master the winds, while necessarily first, would not have allowed the systematic exploration of the oceans and distant continents had it not been for the development of the compass. The compass, a technological discovery that provided a known direction any place on the seas, was not only a help in its direct navigational capability but surely gave the sailors greater confidence. With this device they not only knew which direction they were going, but could always find their way home.
The invention of the clock made it possible for navigation to become more than just determining direction; position could be known as well. With the combination of the chronometer and the compass it became possible for sailors to determine directions, positions, and rates of speed with the precision necessary to navigate predictably across the oceans.
Even given the ships, the life support for the sailors, the propulsion of the winds, the navigational tools, and other necessary technologies, exploration would not have occurred had additional factors not been at work. A major  motive for exploration was the thirst for knowledge, for riches, for discovering what was there. In addition to these basic human drives, competition with other nations and the prestige of those who were able to sail to the far corners of the Earth and return home with evidence of new conquests and discoveries stimulated this activity. Many of the sailors who went on such voyages did not go voluntarily; zealous leaders were sometimes able to acquire prisoners provided by heads of state and were willing to take such men and launch into unknown regions with the expectation that crews could be whipped into shape while performing the necessary services.
Except for the need to commandeer personnel, space exploration in the 1960s and 1970s required exactly the same basic ingredients. Like the sailors who gazed longingly across the oceans for centuries before the ship technologies evolved, men studied the heavens and dreamed of visiting the Moon and the planets before launch vehicles and spacecraft were feasible. Had the technologies been available to them, they would surely have tried to do the things we have so recently accomplished. Our serious activities in space had gotten underway before he was elected president, but John F. Kennedy was to forever remind us of the similarity between the exploration of Earth and space when he used the haunting words in his 1962 address at Rice University, "We set sail on this new ocean...." Our blessing is that our generation was privileged to experience that goal.
Invented in China at least as early as the thirteenth century, rockets have been generally understood for centuries. For most Americans, however, the bombing of London with V-2s brought the shocking realization that rockets could do things we had not believed possible. As an aeronautical engineering student at the time of the first V-2 bombardment of Britain, I was absolutely amazed at the capability of any vehicle to go 3500 miles an hour or as high as 100 miles above Earth. After all, we had been taught that "compressibility effects" at the speed of sound were deterrents to high-speed flight in the atmosphere, and that aircraft would not likely ever fly more than 600 mph because of the so-called "sound barrier" and the heating involved. Yet suddenly, here were vehicles traveling many times that speed and at altitudes far greater than the atmosphere that limited flight from an aeronautical standpoint.
It was not long after the V-2 reports that I learned of the efforts of Robert H. Goddard in the 1920s and 1930s that led to the development of the liquid rocket and the interest of the Germans in the application of this technology. However, it was a while longer before I learned how amazingly simple the  whole idea of a rocket was, for there had been such a preoccupation with aeronautics between World Wars I and II that few engineers and none of my professors had even thought much about them. This was the first of several instances in which I was surprised to discover that existing technologies were simply overlooked for a period of time before engineers began to make good use of them.
A related case is the gas turbine or turbojet. When I was forced to take a thermodynamics course concerned mostly with steam turbines as part of my engineering curriculum, I was distressed because I thought it was a waste of time for an aeronautical engineer to study ground-based power plants. Within 5 years turbojet engines were revolutionizing aeronautics, raising the obvious question, "Why in the world hadn't the steam turbine principles been adapted long before this?" After many years of association with research and development, I have learned how difficult it can be to transfer technology into application; this remains one of the greatest challenges engineers face.
To understand the basis for space exploration, one must start with a recognition of rocket fundamentals. So much has been said about rockets and their use during the last two decades that it is tempting to skip over the subject with a comment like, "As everyone knows, rockets produce thrust by propelling hot gases out the rear of the vehicle." While this is true, how can anything so simple to say be so hard to do that it took centuries to apply? Perhaps a closer examination will show that implementing simple concepts is often a most sophisticated challenge.
For my birthday in 1953 my sister gave me a book by Arthur C. Clarke entitled The Exploration of Space. Clarke did an excellent job of explaining the mysteries of rockets to laypersons. At that time about 100 former German prisoners and a handful of American engineers were already beginning to get serious about developing rockets with capabilities beyond military applications, but my own fascination for space exploration was whetted by this book.
Several pages and sketches were devoted to the rocket principle. A man on a wheeled dolly with a stack of bricks was able to propel himself and the dolly by throwing bricks to the rear one at a time. Assuming the dolly to be rolling on a virtually frictionless surface, expelling the mass of a brick at a certain velocity imparted a reaction to the man, the dolly, and the remaining bricks that was admittedly less than but proportional to the velocity of the brick. From this analogy Clarke showed that it did not matter what happened to the bricks after they were thrown; their propulsion action occurred as they left the hand of the thrower. The fact that rocket propulsion is completely independent of any external medium, clearly the case for the thrown bricks, has always been one of the hardest things to understand. A second important point Clarke made was that as the pile of bricks on the cart became smaller and the vehicle thus became lighter, the velocity increment provided by each brick increased. This highlights the fact that in addition to the velocity increase, there is an increase in acceleration produced by a rocket as the propellant is expelled and the rocket weight decreases.
Clarke neatly and logically carried the analogy further until it was clear that the final speed is due to the cumulative effect as brick after brick is thrown. Each brick adds a small increment to the velocity that is dependent on the speed at which it is thrown; thus, the final velocity depends on this and on the quantity of bricks thrown out. By using numerical examples, Clarke developed the relationship between the mass and velocity of the bricks and the mass and velocity of the vehicle after all the bricks are thrown, taking into account the fact that the bricks on the dolly must be accelerated along with the man until they are all gone. From this we learn that for the final velocity of the vehicle to equal the thrown velocity of the bricks, the load of bricks must weigh 1.72 times the final weight of the man and dolly after all the bricks are thrown.
But what if we want to go faster than the speed of each brick? Yes, there is an answer for that, too. By using the same relationship, Clarke calculated that we could achieve twice the speed of the bricks by making the load of bricks 6.4 times the final weight of the man plus the dolly, and 3 times the brick speed if the load of bricks is 19 times the ending weight. This tremendous multiplication factor appears to place a sobering limit on the practical application of rocket technology. This in fact was the major deterrent encountered by early engineers; the gravity of Earth is such that no practical single-stage rocket vehicle could be conceived that would escape from this field.
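These multiplication factors can be checked with a short simulation of the brick-throwing cart. The sketch below is my own (the function name and the brick counts are illustrative, not Clarke's); it applies conservation of momentum to one throw at a time. As the bricks become numerous and light, the required load of bricks approaches e raised to the desired speed multiple, minus one, times the final weight, which is where the figures 1.72, 6.4, and 19 come from.

```python
import math

def final_speed(n_bricks, brick_mass, throw_speed, dry_mass=1.0):
    """Final speed of a frictionless cart after throwing n_bricks bricks
    rearward, each leaving at throw_speed relative to the cart."""
    v = 0.0
    mass = dry_mass + n_bricks * brick_mass  # cart plus unthrown bricks
    for _ in range(n_bricks):
        # Conservation of momentum for one throw: the cart and remaining
        # bricks gain dv = m * u / M, where M is the mass before the throw.
        v += brick_mass * throw_speed / mass
        mass -= brick_mass
    return v

# Load of bricks 1.72 times the final weight: final speed is very nearly
# the brick speed itself.
print(final_speed(10000, 1.72 / 10000, 100.0))

# Load of bricks 6.4 times the final weight: about twice the brick speed.
print(final_speed(10000, 6.4 / 10000, 100.0))
```

Throwing the whole load as a few heavy bricks does somewhat worse than many light ones, which is the discrete analogue of a rocket expelling its propellant continuously.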
From the second law of motion expressed by Newton in the form, force = mass x acceleration, the equation for the final velocity that may be imparted by a rocket to a single-stage vehicle may be developed as a simple expression involving rocket exhaust velocity and the beginning and ending masses of the vehicle. Showing the expression in mathematical form helps in understanding the key parameters and their simple relationship. For a rocket stage operating in an ideal environment, that is, having no restraining forces such as drag or gravity, the velocity it can achieve is a function of the exhaust velocity and the natural logarithm of its ratio of gross weight to empty weight:

v = c ln (Wgross/Wempty)
where v is the rocket stage velocity, c is the exhaust velocity, ln () is the natural logarithm of the mass fraction term, and W is the weight of the stage either loaded with propellant (Wgross) or empty (Wempty).
The theoretical performance of rocket propellants can be defined more usefully in terms of the fuel specific impulse (Isp), where Isp = pounds of thrust per pound of fuel consumed per second. Using this relationship and introducing the gravitational constant, g, for the pull of Earth's gravity produces an expression for what is termed "ideal velocity":

Videal = g Isp ln (Wgross/Wempty)
This ideal velocity offers a simple way of comparing the potential of given rocket stages. It is primarily the propellant chemistry that determines the exhaust velocity or fuel specific impulse, although the efficiency of the rocket nozzle is also a factor. The V-2 and early liquid propellant rockets developed in the United States used ethyl alcohol and liquid oxygen as a propellant combination and produced Isp values of about 240 seconds. Later, more energetic jet propulsion fuels were used instead of alcohol, and finally liquid hydrogen and liquid oxygen became standard, providing specific impulse values of about 450 seconds, almost twice those of the V-2.
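The ideal velocity expression is simple enough to evaluate directly. The sketch below is mine, with a mass ratio of 4 assumed merely for illustration; because exhaust velocity equals g times Isp, the hydrogen/oxygen stage gains velocity in direct proportion to its higher specific impulse.

```python
import math

G0 = 32.174  # standard gravity, ft/s^2

def ideal_velocity(isp_seconds, w_gross, w_empty):
    """Ideal (loss-free) velocity: Videal = g * Isp * ln(Wgross / Wempty)."""
    return G0 * isp_seconds * math.log(w_gross / w_empty)

# Assumed mass ratio of 4 (illustrative, not from any particular vehicle):
v2_era  = ideal_velocity(240, 4.0, 1.0)  # ethyl alcohol / liquid oxygen
lh2_lox = ideal_velocity(450, 4.0, 1.0)  # liquid hydrogen / liquid oxygen

print(round(v2_era), round(lh2_lox))  # the second is 450/240 = 1.875 times the first
```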
The ratio of weights or the so-called "mass fraction" is dependent on structural efficiency and fuel-oxidizer densities. The matter of designing lightweight structures had long been a major challenge for aeronautical engineers; dealing with this issue for rockets required greater concern for materials able to withstand high temperatures, but was simply an extension of current thinking. Better cooling techniques, higher pressures, and better propellant pumps have improved the thrust of rockets such that their efficiencies, combined with the mass fractions available using existing materials, have greatly exceeded those of the early rockets.
While ideal velocity is useful for comparing the relative merits of rockets, the actual velocity achievable by a given stage has to account for three principal effects associated with the "real-world environment." These effects can be treated simply as subtractions from ideal velocity. The first is the effect of  gravity; this is typified by the force during launch that is always pulling the rocket toward the center of the Earth; the second is the effect of drag as the vehicle passes through the atmosphere; and the third is due to atmospheric pressures acting on the nozzle to reduce rocket thrust. The last two are factors only during the boost phase in the atmosphere, but gravity effects are always present. In deep space, far from Earth, the Sun may produce the dominant "gravity" force, but such effects must always be reckoned with.
Expressing the burnout velocity for a single-stage rocket in simple terms:

Vburnout = Videal - dVgravity - dVdrag - dVpressure
where the dV terms represent incremental subtractions.
On a relative basis, the losses in velocity during a rocket launch to space caused by drag amount to only 5 percent or so of the required velocity. Thrust loss due to nozzle effects accounts for a similar percentage, depending somewhat on the optimization of nozzle design and staging altitude. The biggest losses are due to the pull of gravity; the effect of this reduction for a rocket rising vertically from the surface of Earth is 20 miles per hour for every second of climb, 1200 miles per hour for every minute!
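These budget items can be put in back-of-the-envelope form. The sketch below is my own, using only the rough magnitudes given above: a gravity loss of about 20 miles per hour for each second of vertical climb, and drag and nozzle losses each taken at about 5 percent of the required velocity.

```python
def burnout_velocity(v_ideal_mph, burn_seconds,
                     drag_frac=0.05, nozzle_frac=0.05):
    """Burnout velocity as ideal velocity minus the three real-world
    subtractions: gravity, drag, and nozzle back-pressure losses."""
    dv_gravity = 20.0 * burn_seconds        # ~20 mph lost per second of climb
    dv_drag    = drag_frac * v_ideal_mph    # ~5 percent of required velocity
    dv_nozzle  = nozzle_frac * v_ideal_mph  # similar order for nozzle effects
    return v_ideal_mph - dv_gravity - dv_drag - dv_nozzle

# A 150-second vertical burn gives up 3000 mph to gravity alone:
print(burnout_velocity(20000.0, 150.0))  # 20000 - 3000 - 1000 - 1000 = 15000.0
```

The dominance of the gravity term explains why launch trajectories pitch over toward the horizontal as soon as the dense atmosphere permits.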
The prohibitive size of vehicles having a theoretical capability to escape from Earth led to the concept of staging. The idea was simply to stack two or more rockets so that the upper ones were treated as payload for the lower ones, with the advantage that the heavy structure of a lower stage could be discarded after fuel was expended and it had served its purpose. By starting over with a smaller rocket having an initial velocity equal to the final value for the previous stage, dead weight was carried no longer than necessary.
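The advantage of discarding dead weight can be seen in a short calculation. The weights below are assumptions of mine, chosen only to illustrate the principle: each stage obeys the same logarithmic relationship, but because the upper stage starts from the burnout velocity of the lower one and no longer carries its empty structure, the stack reaches a velocity that neither mass ratio alone would allow.

```python
import math

def stage_dv(c, w_gross, w_empty):
    """Velocity gained by one stage: dv = c * ln(Wgross / Wempty)."""
    return c * math.log(w_gross / w_empty)

c = 9000.0  # exhaust velocity in ft/s (assumed)

# The upper stage rides as payload on the lower stage, so its full gross
# weight appears in both the gross and empty weights of the lower stage.
upper_gross, upper_empty = 25.0, 8.0
lower_structure, lower_propellant = 15.0, 60.0

v1 = stage_dv(c, lower_structure + lower_propellant + upper_gross,
                 lower_structure + upper_gross)   # first-stage burnout
v2 = v1 + stage_dv(c, upper_gross, upper_empty)   # after dropping stage 1

print(round(v1), round(v2))
```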
Reducing the weight of rocket vehicles offers such gains that many changes in the design of structures evolved from the baseline aircraft technologies. For example, pressurized stainless steel tanks with very thin walls were used on Atlas and other missiles. Like a balloon, these structures were stiff under pressure, but during their manufacture and handling they had to have hardback supports to maintain shape. The fact that empty weight is so important to rocket efficiency is still one of the principal reasons that rocket vehicles seem to operate on the ragged edge of failure. The luxury of large structural margins and redundant systems simply cannot be afforded if payload capability is maximized.
The notion of staging is taken for granted today; however, long after engineers began considering the matter of staging there was controversy  about whether rocket staging would actually enable us to launch a meaningful payload into deep space. There are complications, of course; staging operations have often resulted in the frustrations of launch failures, but staging is now accepted as a way of life.
An associated problem for the early rocket missiles was that of warheads reentering the atmosphere at high speeds. The friction of the air at high speeds caused such severe heating that ordinary metal would simply burn in the atmosphere. The development of blunt entry shapes and ablative cooling techniques made this seemingly impossible requirement achievable.
Returning to the analogy of the ship and the rocket vehicle, it is appropriate to relate the two technologically as capable of carrying payloads for long distances. It is also appropriate to liken the compass, chronometer, and sextant that were required for successful voyages by ship to the guidance technologies required for predictable navigation in space. Attitude stabilization, always a requirement for celestial navigation, is a condition not achieved as readily in weightless space as on the seas. Not only was the attitude control of a spacecraft necessary for supporting celestial navigation, but in the same manner that the ship had to be pointed to take advantage of the wind and to make good the course desired, the spacecraft needed a stabilized platform to orient rockets for course corrections. Attitude stabilization was also needed for orientation of solar panels toward the Sun, for orientation of the high-gain antenna providing reception and transmission of low-power signals, and for pointing instrument sensors that were to serve as the eyes and ears of the spacecraft. An attitude control system, combined with the space equivalent of the chronometer and sextant, made it possible to determine the positions and trajectories of missiles. Doppler radar tracking systems became a better choice for tracking and guiding space missions, because transponders in the spacecraft were simpler and lighter than onboard position determination systems.
With continuous knowledge of position and some ability to control steering or midcourse rockets, the integration of the trajectory parameters could be achieved with the help of computers to keep track of position information. For early spacecraft it was better to have this integration of guidance information occur on the ground, because it was possible to use large, powerful computers that could not be carried into space to perform this function. Indeed, early tradeoff studies showed clearly that everything possible should be accomplished with equipment on the ground to save all the precious weight aboard the spacecraft for necessary components. Of course, this  made the telecommunications link for commands critical; perhaps this departs from the analogy between the exploratory spacecraft and the early ocean explorers, who were completely out of touch with the world once they left the shore. Fortunately, telecommunications evolved along with rocket guidance and control technologies, such that radio transmissions over large distances were possible with low power.
In addition to these basic technologies essential to space exploration, the stimulus and motivation of man was required. In reviewing history, it seems that the technologies were often ready before this motivation occurred. Certainly in recent years this has been the case. While it is hard to say what actually started the space "snowball" rolling, the Russian plan to launch Sputnik into orbit clearly galvanized this country into action in the late 1950s. Not only did it force an appraisal of the state of technology, it also caused a coordinated look by American politicians, industrialists, and researchers at what the United States should do to achieve preeminence in space. Simply put, we entered a space race we perceived to be important based on the Russians' plans to launch a satellite into Earth orbit.
It did not take long to realize that we had the technologies in hand to begin such an effort. The books of Arthur Clarke and other science fiction writers started us thinking. Wernher von Braun wrote a series of articles for The Saturday Evening Post in which he described the various aspects of rocket propulsion and related technologies, and what could be done to put them together in a logical fashion for the exploration of space. His articles were based on sound engineering principles studied over the years, plus his strong belief that it was time to combine these technologies and to do some of the things that he saw were possible. His articles were timely and helped to convince a large segment of the population that such feats were not only possible, but that it was time to proceed.
Our defeat of the Germans and the spoils of war had left the United States with several partially completed V-2 vehicles and a large amount of information on the design and development of rockets. In 1948, I was working at North American Aviation in an engineering department concerned with trainers, fighters, and bombers when an opportunity arose for me to join a newly formed Aerophysics Department that had been assigned special studies of the V-2 technologies and their potential. Dale D. Myers, an aerodynamicist who had joined the aerophysics organization to head the new missile aerodynamics activities, offered me a chance to work in this new field. It was several months before I was able to complete an ongoing assignment and transfer, but early in 1949 I entered the fascinating world of missiles. My association with Dale was to continue on and off over many years, for he later became Apollo Program Manager for Rockwell International, and in 1970 we were reunited at NASA Headquarters when he became Associate Administrator for Manned Space Flight. Throughout my career his influence has been inspiring to me, for he always seemed to possess a rare combination of experience and insight needed to guide new technical efforts, and a gift for leading a team and getting the most from it.
Shortly after my transfer to the Aerophysics Department, work began on the design of a rocket-launched, winged cruise vehicle using a V-2 as the basis. Confiscated German rocket engines were available for tests, and facilities were built to improve them as preliminary design activities began. Needless to say, many "bootstrapping" studies were initiated, as we had to develop our own data base for dealing with supersonic flight, thermodynamic heating, and the new propulsion technologies.
A major program called Navaho evolved in 1951 from these early V-2 follow-on developments. It was defined as a rocket booster/ramjet cruise missile combination, to be ultimately capable of flying 5500 nautical miles. Simply stated, the Navaho program objective was:
At the time this challenging objective was formulated, there was less reason for optimism than when the similarly simple Apollo objective of sending men to the Moon was pronounced.
Navaho development was planned to be carried out in three phases. First, a prototype cruise missile powered by two large turbojets was to test the aerodynamics and flight operations. Following about a year later was to be a rocket booster/ramjet cruise missile combination capable of flights of about 2500 nautical miles to test the launch concept and the ramjet propulsion systems. The third phase was to be the operationally suitable weapon system with complete capability.
Requirements for this missile program included the development of the rockets, ramjets, structures, propellants, and tankage, as well as the high-technology guidance and control systems. Also included were the procedures for rocket launchings plus the development of launch facilities, telemetry, and tracking needed to accomplish the tests and operational checkout of all systems. During the next few years, a remarkable amount of progress was made toward these simply stated, but hard to achieve, objectives. Twenty-seven flights were made with the turbojet-powered vehicle designated the X-10, including supersonic flights to Mach 2.0. These did much to further the guided missile technologies for disciplines other than rocketry.
The rocket-launched, ramjet-powered Navahos were for many years the most impressive missiles to be seen at the Cape. Standing about 100 feet tall and weighing approximately 300 000 pounds, their 405 000-pound-thrust boosters were the most powerful in existence. Although they did not fare as well in flight as the horizontal takeoff, turbojet-powered X-10, many lessons were learned about the vagaries of rocket launches. When the base program and a flight extension finally concluded, nine rocket launches had been made, three of them followed by successful ramjet flight operations. The success ratio was not impressive, but after more experience with rocket launches, the record did not look as bad as it had seemed at first.
The now familiar concept of the launch complex with its distinctive gantry and blockhouse was not initially obvious for missile launches. Orion Reed has been at the Cape from 1951 to the present, and as the base manager for North American during the Navaho flight test program he was involved in all the debates over how to provide for test operations, with due consideration for crew safety. He recalled that the one way to ensure safety from possible explosions was with separation distance; the price paid for this simplistic solution was long communications and data lines, plus inconvenience for access to the pad and for observing the equipment and operations. Television systems were not developed enough for widespread use in 1951, and it was essential to have direct viewing and ready access to the launch pad during the countdown.
The compromise struck for the Navaho launch site resulted in a small, hemispherical blockhouse built of sandbags and concrete that would house a few critical personnel during the final count and launch operation. Others were separated from the site with the long communications lines and compromised view of the launch. The small shelter was closer to the launch pad than the distance later chosen as a standard. The Air Force pads, including Complexes 12 and 13, used for all the lunar and planetary launches on Atlas/Agena vehicles, were based on more detailed studies and were much more rugged. These blockhouses had roofs made of 20-foot-thick concrete, several feet of sand, with more concrete on top. They could house 50 to 100 persons and allowed indirect viewing of the launch site by periscope and closed-loop television.
Reed vividly remembered the blowup of an Atlas/Able vehicle during a static firing on September 25, 1959, shortly before the planned launch. The explosion and rain of debris was very close to the blockhouse, and the launch crew was grateful for the protection provided, since the pad was pretty much destroyed. Throughout his launch operations career Reed spent many days and nights in blockhouses (the 200th Atlas launched Ranger 6 in January 1964, and he was involved in most of the space missions launched by Atlases). He played a major role in helping to mature countdown and launch operations into a science.
The manner in which man and automated systems can work in partnership is illustrated by a solution to the problem of accurately guiding a Navaho test missile during approach and landing. In conjunction with an autopilot and inertial guidance system, a radar altimeter was used to flare the missile as it approached the runway on a predetermined glide slope. The accuracy of this automatic flare system was suitable for closed-loop operation, but the autopilot and digital navigation sensors available at the time were not capable of laterally aligning the missile flight path with the relatively narrow runway.
Orion Reed recalled that to achieve the necessary directional control for the touchdown and rollout phases, a simple optical tracking instrument was positioned at the far end of the runway with a means of generating an error signal that could be transferred by radio command to the missile autopilot. To operate the device, a man peering through the telescope kept crosshairs aligned on the nose of the missile, and the lateral deviation error signals were fed back to the missile autopilot for making the necessary heading corrections.
The system was used satisfactorily during flight tests. However, the optical device became affectionately known as a "hero" scope after a braking parachute failure allowed the missile to continue down the runway toward the hapless controller who was staring it in the eye as he guided it directly toward himself. A disaster was narrowly averted, but it became obvious that the person closing the lateral control loop was in jeopardy if landing overshoot occurred. For his benefit, a trench was dug near the instrument so that he could dive into it for protection if the need arose again.
 At the time of the Navaho developments in the early 1950s, it did not appear that guidance and control systems using inertial platforms could perform on long-range missions without frequent updates. A system called the "stellar supervised platform" was developed to ensure that the drift of gyros was corrected by frequent star sighting inputs from the equivalent of a sextant used during ocean voyages. Concurrently, improvements were made in the performance of gyro systems, double integrating accelerometer systems, and other elements of basic attitude reference platforms.
As a result of this concentrated effort on guidance and control technologies, the capability needed for accurate intercontinental ballistic missile guidance systems became available. Ironically, the General Dynamics Corporation capitalized on these advances and began promoting the intercontinental ballistic missile (ICBM) concept called Atlas in direct competition with the Navaho. Thus the Atlas missile, a 1 1/2-stage vehicle combining rocket engines, guidance and control concepts, pressurized stainless steel tanks, and other technologies developed at North American for the Navaho program, eventually showed such promise that it brought about the demise of Navaho in 1957.
A host of developments that evolved during the Navaho program also provided a technology base and played major roles in space exploration. North American extended V-2 rocket engine technologies, and its Rocketdyne Division later produced rockets for the Thor, Jupiter, Atlas, and Saturn vehicles, for the Apollo spacecraft, and for the Space Shuttle. The Autonetics Division provided guidance and control capabilities, becoming heavily involved in upgrading the guidance and navigation technologies for ICBMs and space vehicles. The Missile Development Division developed aerothermodynamics concepts, structure capabilities using high-temperature materials such as stainless steel and titanium, design and manufacturing technologies such as diffusion bonding and chem-milling, and a well-trained cadre of engineering and management talent that designed and produced the Saturn launch vehicles, Apollo spacecraft, and Space Shuttle vehicles.
The cancellation of the Navaho program and the successful orbiting of Sputnik 1 were, to my career, closely coupled shocks, like the double impact of a sonic boom. For the engineers who were laid off because of the Navaho cancellation, and for those of us with supervisory responsibilities who had to decide which friends and associates would stay or go, it was a dismal period. I was very glad when the chore was over and those of us remaining were assigned to study the use of Navaho technologies for space missions.
 Although many were applicable, it was the rocket boosters that gave us the immediate capability to prepare for traveling into space.
Even with the significant developments spawned by Navaho, there was one major shortfall in our ability to design space missions. Our missiles had all been governed by flight in the atmosphere, where aerodynamics was the dominant discipline. One might think that going into airless space, where the principal forces are caused by the gravitational attraction of bodies, should have been simpler, but we did not have in hand the parameters defining gravitational forces needed to determine space trajectories. We also lacked the basic equations and the programming to integrate trajectories; besides that, our computer was very large in size and very small in capability.
Encouraged by my superiors to find help, I visited Professor Seth Nicholson of Caltech and also discussed our needs with astronomers at the Mount Wilson and Palomar Observatories. While doing this, I met a retired astronomer, G. M. Bower, who had calculated the mass and orbit of Pluto. Although he had done most of his integrations using mechanical calculators, Bower had some recent experience adapting the equations to computer language. He joined our small group and helped generate the so-called "n-body" equations used in computer computations of space trajectories.
Further evidence of our lack of capability to compute trajectories became painfully obvious when I contacted G. M. Clemmence, then head of the Naval Observatory, to obtain ephemerides for the Moon, Mars, and Venus. Values of the orbital parameters for these bodies were essential to navigation, and the Naval Observatory was the central repository for such information. Clemmence surprised me when he stated that he could provide computer input data suitable for computing trajectories to Mars and Venus, but that data were not available for developing trajectories to the Moon. The reason given was that many variables, such as Earth's tides, affected the Moon's orbital path, so that it was not easy to exactly predict long-term values for the Moon's whereabouts. This revelation begged the obvious question: if we don't know where the Moon is going to be when we launch, how can we determine in advance how to get there?
This kind of activity was not unique to North American. All over the country aero industry teams and research groups were doing the same things, with perhaps one of the most notable efforts led by C.R. "Johnny" Gates at JPL. He and a small group in the Systems Division developed integration methods, adapted them to computer operations, and soon began to put their knowledge to work on real projects. Within a brief period, nearly everyone learned how to compute space trajectories. A new discipline, called astrodynamics, sprang out of the combined aerodynamics and celestial mechanics backgrounds of aeronautical engineers and astronomers. Computer technologies were also driven hard by the obvious need for larger matrices and faster operations.
Once staging concepts began to be generally better understood, the multiplication factors for upper stages were simply reduced to engineering terms. Initially, staging had been thought of as launching one rocket with one or two others on top, to be ignited sequentially as soon as the prior stage had burned out. Now coast periods between firings were being used to provide more control over trajectories. Coast periods between stages could be accomplished with no thrust at high altitudes; the fact that the stages remained connected had no significant impact on performance when the vehicle was not in the atmosphere and being affected by drag.
For example, in the case of synchronous satellites, which had to orbit at 23,000 miles above Earth, coasting up to synchronous altitude and then firing the last stage to produce the right velocity for staying at that altitude allowed the satellite to be put into a precise circular orbit. Coasting trajectories became known as "parking" orbits and were frequently used to launch vehicles into deep space from a position other than the launch site. Such staging considerations offered many possibilities for tradeoffs; the precision timing required for leaving the launch pad was reduced, because it was possible for the vehicle to coast part way around Earth, or even to make an orbit or more, before the next stage was fired.
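The synchronous altitude itself falls out of Kepler's third law, by requiring the orbital period to equal one sidereal day. A quick check using standard constants (none of these figures come from the text):

```python
import math

# Altitude of a geosynchronous circular orbit from Kepler's third law:
# T^2 = 4*pi^2 * a^3 / (G * M_earth), solved for the semi-major axis a.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # mass of Earth, kg
R_EARTH = 6_371_000.0    # mean Earth radius, m
T = 86_164.0             # one sidereal day, s

a = (G * M_EARTH * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_miles = (a - R_EARTH) / 1609.34

print(round(altitude_miles))  # roughly 22,000 miles above the surface
```

The computed value, a little over 22,000 miles, is the familiar geosynchronous altitude; the 23,000-mile figure in the text is a round number of the period.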
The payloads for early missiles were warheads, which usually were inert until carried to the target site. Thus, from the standpoint of integration with the vehicle, they merely weighed so much, were so big, and otherwise had a modest interaction with the design of the vehicle. As the aerodynamic shape of the warheads was especially critical to the reentry thermodynamics of missiles, much work was done in the development of ICBMs to solve the thermal and aerodynamic problems of reentry. In addition, knowledge of high-temperature phenomena was needed, and materials able to withstand high temperatures had to be developed. These were especially important for missions that required reentry to Earth's surface or entry into the atmosphere of Mars or other planets. Of course, we were not concerned about entry aerodynamics for the first planetary exploring machines because they were merely one-way vehicles that flew by or in the vicinity of planets. Nor did we need such technology for spacecraft designed for landing on the Moon, where there is no atmosphere.
One of the dramatic changes during the early years of transition from missile launches to space launches was caused by the change from passive warheads or payloads having very few active elements to what became known as spacecraft, which were vehicles in their own right. The effects of this transition became painfully evident during the early space launches at Cape Canaveral: those with experience launching missiles tended to think of the principal process as readying the rocket vehicles, launching them, and tracking them into prescribed orbits. After a few missions in which the launch was over within minutes and spacecraft operations became long-term tasks, the realization dawned that what had once been prime aspects of missilery were now relegated to support roles. Certainly, launch vehicles and launch operations were no less important; but now, launching the spacecraft at the proper time into the proper orbit merely allowed the spacecraft to get on with the real job of exploring. There is no obvious analogy to this with the early days of ocean exploration, since the beloved ships were "single-stage vehicles" that not only carried the explorers from shore to shore, but also brought them home. Rocket launches, even for boosters employing three stages, are over quickly relative to the long journeys of spacecraft; after launch they simply become "spent vehicles" that serve no further purpose.
As spacecraft became more than inert payloads, further evolution of rocket vehicle technology was required. Propulsion systems now had to be stored in space for long periods of time and operated remotely after exposure to the vacuum and thermal radiation of the space environment. Attitude control systems for launch vehicles had to work for only a few minutes; thus the drift rates and wear problems associated with short-lived missiles were completely different from those expected of spacecraft, which had to spend months in orbit. The guidance and control systems necessary for accurate midcourse corrections, terminal maneuvers, and other functions required precision and updating of position so that after months in orbit or interplanetary space, exact pointing of the rocket motor or aiming of the instruments would be possible. While the basic technologies were similar to those required for launch vehicles, the demands for precision, for miniaturization, and for long-life operation in a somewhat hostile environment were greater.
Vehicles designed to land on the Moon or the planets required retro rockets to decelerate the spacecraft for landing. In principle, retro rockets changed the spacecraft velocity in the same way as booster rockets launched from Earth, except that the velocity increments they provided were used to reduce the velocity from high beginning values to zero at the point of touchdown. This led to the use of an analogy as a basis for determining the probability of mission success in planning considerations for lunar landing spacecraft such as Surveyor. A study I made in 1962 devoted considerable thought to this matter and resulted in a paper, replete with statistical probability curves, entitled "Probable Returns of Present Lunar Programs." Because this analysis offered significant possibilities for misuse by critics, it was stamped For Office Use Only and had a limited distribution. The probabilities of success, soundly based on statistical data from existing launch vehicle experience, showed that fewer than one out of three flights aimed toward landing on the Moon could be expected to be successful.
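The "less than one out of three" figure is the kind of result one gets by compounding per-step reliabilities, since a landing mission succeeds only if every event in the sequence does. The step names and values below are assumed purely for illustration; they are not the figures from the 1962 study:

```python
# Hypothetical illustration of compounding per-step reliabilities for a
# lunar landing mission. A mission succeeds only if every step succeeds,
# so the overall probability is the product of the individual ones.
# All step names and values are assumed, not taken from the 1962 study.
steps = {
    "launch and injection": 0.60,      # roughly era launch vehicle reliability
    "midcourse correction": 0.90,
    "retro ignition and burn": 0.85,
    "retro separation and staging": 0.95,
    "radar-guided vernier descent": 0.80,
    "touchdown": 0.90,
}

p_mission = 1.0
for name, p in steps.items():
    p_mission *= p

print(f"overall probability of success: {p_mission:.2f}")
```

With plausible per-step values of this sort, the product falls below one third, which is why even a chain of individually reliable events made the landing a long shot.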
The lunar landing/launch vehicle analogy became useful for illustrating the combination of technologies involved and the engineering challenges that had to be addressed for such missions. Actually the landing is the reverse of the launch in sequence, but a surprising number of the steps are analogous. The simple diagram from the 1962 study is reproduced here along with an explanation of its meaning.
A launch operation starts with a zero velocity as the vehicle is sitting on the pad. At the end of the launch, injection into orbit allows for some variation in the firing accuracy from the early stages; adjustments by a vernier engine make up for any deficits or excesses in velocity, orientation, or position in space. In contrast, the lunar landing vehicle begins its terminal maneuver with some finite but uncertain velocity and must arrive at the surface with zero velocity after a 240,000-mile trip taking some 90 hours. The landing would not be successful with any sizable horizontal or vertical velocity components at the point of touchdown, for the spacecraft would either tip over and be useless or be destroyed by the crash.
Other similarities and differences are highlighted by comparisons in the simple diagram. There are several more steps involved in a spacecraft landing mission, not to mention the fact that a landing attempt is not even a possibility until after a successful launch has been achieved.
Following the separation of the spacecraft from the upper stage of the launch vehicle, attitude orientation is needed to point the solar panels  toward the Sun and the antenna toward Earth. This is the cruise mode, which continues until the time of midcourse correction. The velocity change for the midcourse correction is determined using computers on the ground, and commands are loaded into the spacecraft to properly orient the thrust axis and to fire the motor for the time needed to make the desired trajectory correction. This series of maneuvers in the midcourse involves some risk, because it is necessary to turn the craft away from the Sun-Earth orientation to a given inertial attitude, to fire the vernier rockets for a fixed amount of time, and then to shut off the rockets and return the craft to the cruise attitude, again acquiring the Sun and Earth.
As the spacecraft approaches the Moon at a given height above the surface (this must be fairly accurately determined by a triggering radar), the large retro motor must be ignited. For Surveyor, a highly refined solid rocket motor of spherical shape was used. When it was built, it had the highest performance in terms of mass ratio and specific impulse of any solid rocket in existence; Surveyor was its first space application and true test. After the burnout of this motor, it was essential that the spent rocket be separated and that staging occur in a manner that did not tip the spacecraft or cause it to lose attitude control. The retro motor then fell to the Moon, ahead of the spacecraft, while the vernier engines on the spacecraft fired to further reduce the velocity of approach.
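Why the retro must be ignited at an accurately measured height can be seen from simple constant-deceleration arithmetic: the braking distance is fixed by the approach speed and the thrust available, so firing too high or too low leaves residual velocity at the surface. All numbers here are assumed round values for illustration, not Surveyor design figures:

```python
# Illustrative braking arithmetic for a lunar landing (round, assumed
# numbers; not Surveyor design values). A spacecraft falling toward the
# Moon arrives at roughly lunar escape speed and must shed nearly all
# of it before touchdown.
v_approach = 2600.0      # assumed approach speed near the Moon, m/s
g_moon = 1.62            # lunar surface gravity, m/s^2
decel = 35.0             # assumed average deceleration from the retro, m/s^2

# Constant-deceleration estimates; lunar gravity works against the burn,
# so the net deceleration is (decel - g_moon).
burn_time = v_approach / (decel - g_moon)                      # seconds
braking_distance_km = v_approach ** 2 / (2 * (decel - g_moon)) / 1000

print(f"burn ~{burn_time:.0f} s over ~{braking_distance_km:.0f} km")
```

Under these assumptions the burn lasts on the order of a minute and consumes roughly a hundred kilometers of altitude, which is why the ignition point had to be marked by radar rather than by timing alone.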
A closed-loop radar system was used to guide the spacecraft down to the surface. Engineering for this system presented challenging difficulties, partly because we lacked detailed information about the surface of the Moon; thus, its radar-reflective properties could only be estimated from engineering models. Another unknown at the time was the interaction of the radar system with the tenuous atmosphere created by rocket exhaust, which might cause undesirable radar dynamics. There simply was no good way of testing these environmental combinations prior to the first Surveyor mission.
The vernier rockets used to reduce the remaining velocity of the spacecraft to near zero at touchdown were throttleable liquid propellant engines. Throttling was not, at the time of Surveyor, a common practice on liquid rockets; this vernier system was specially developed. Determining the vertical approach velocity with radar seemed relatively straightforward; however, determining the horizontal velocity component, which was just as important, was not so easy. With the small radar baseline on the spacecraft it was not possible to track the horizontal velocity until the craft was very close to the Moon. This meant that the buildup of horizontal velocity during the retro motor burn, separation, and approach had to remain within finite limits in order for the lateral correction capability of the swiveling vernier rockets to suffice. Taking into account all these functions required of a lunar landing system, the probabilities of success were calculated based on existing launch vehicle statistics. These indicated that the landing itself was a fairly risky proposition, with a probability of success far less than 50 percent.
As mentioned earlier, a concept accepted at the time was that launch vehicles undergoing development should experience 10 development flights before being used to carry out operational missions. Had we applied this concept to Surveyor landing systems and taken the 40- to 60-percent reliability of existing launch vehicles into account, it would obviously have taken many launches to enable us to develop and check out a suitable Surveyor lander. These discouraging statistics increased our concern for the thoroughness required during Surveyor research and development activities, but they were helpful to the planning process within the program office and resulted in the adoption of the multiple-spacecraft block concept, calling for a minimum of three-of-a-kind for most configurations. The idea worked, except for the second block of Rangers, which failed to achieve a single success.
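The arithmetic behind the three-of-a-kind block concept is the standard "at least one success" calculation. Treating flights as independent, with the 40- to 60-percent per-flight reliabilities cited above (the independence assumption is itself a simplification):

```python
# Chance of at least one success in a block of n identical, independent
# flights: the complement of every flight failing.
def at_least_one_success(p_single, n):
    return 1.0 - (1.0 - p_single) ** n

# Per-flight reliabilities in the 40- to 60-percent range cited in the text.
for p in (0.4, 0.5, 0.6):
    print(f"p = {p:.1f}: block of 3 -> {at_least_one_success(p, 3):.2f}")
```

Even at 40-percent per-flight reliability, a block of three raises the odds of at least one success to nearly 80 percent, which is the quantitative case for the multiple-spacecraft approach.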
Perhaps the greatest driving force in the evolution of spacecraft was the challenge of providing "long-life" capabilities for systems that had to operate in a hostile environment with minimum human interaction, and that required very modest amounts of weight and power. After seeing a number of different spacecraft concepts developed, I have come to the conclusion that having to design and build to severe constraints actually improves the evolution process. When engineers have ample amounts of weight, power, and other resources to start with, they almost immediately expand their desires to exceed those capabilities and develop self-imposed problems from trying to juggle all the "what ifs" and "druthers" into something real. They usually make much more work for themselves, and in many cases design less suitable systems as a result. On the other hand, I have seen designs evolve when severe constraints were imposed, requiring single-purpose objectives and simple, direct applications of basic physics, that resulted in the most clever advances in technology to perform the necessary functions.
As the technologies employed in launch vehicles and spacecraft have become more complex, the number of engineering man-hours involved in design and development has increased. I asked Dale Myers to discuss the changes he had seen in the process and the reasons for them. He did not give a pat answer, but offered observations from his own career to support the changes in engineering effort versus production rates. When he started at North American in 1943, the factory was producing 25 Mustang fighter airplanes a day. In the early 1960s he was in charge of the Hound Dog missile program, which produced 20 missiles a month. During the final period of the production of Apollo Command and Service Modules, the production rate was 6 per year, and when he was responsible for the B-1 bomber effort in 1974, the bombers were being produced at a rate of 1 every 2 years. The escalation of effort has also severely increased the cost per pound of hardware, making the problems of estimating program costs much harder for space vehicle planners. However, that is another story.
No matter whether you are a launch vehicle proponent or a spacecraft engineer, it is obvious that rockets have provided the key to the exploration and exploitation of space. At times it appears that we have lost sight of their importance in the scheme of things; we seem to be complacent about the potential gains that might accrue from continued emphasis on their improvement. Far-out concepts for doubling or tripling their efficiencies are barely being researched, if considered at all. Are we once again experiencing that lag in the engineering advances of existing technologies until necessity, not opportunity, becomes the "mother of invention"? Time will tell.