SP-4102 Managing NASA in the Apollo Era

Chapter 6

Program Planning and Authorization



[141] Be it Government agency or private firm, every large organization must be able to plan in order to act. Planning implies that there is an authority to decide what is to be done, how, by whom, and over what period of time. Confusion easily results from a failure to specify the sort of planning that is being discussed and whether it is for the short-term, intermediate, or long-term future. The annual budget submission of a Federal agency is a short-term planning exercise of sorts, since it involves projecting the agency's needs against what the Office of Management and Budget and Congress are likely to authorize. At the other extreme are those grandiose ten-to-twenty-year projections of what an agency or corporation might undertake if and when resources become available.

This chapter is about how NASA planned and authorized its intermediate range programs (the logical, not necessarily the chronological sequence): the missions that were flown, the systems that were developed, and the aeronautical research concepts that were proved by test models. These were the programs with lead times of five to seven years, most of which were conceived between 1958 and 1961 and which were accomplished during the following decade. The emphasis then is less on review than on program approval, although it must be added that any distinction between planning, approval, and management review is inherently artificial, since all are part of a single process. The same procurement plan that was the basis of a request for proposal also represented a step in project definition. Moreover, planning was seldom complete at any stage in the life of a project up to actual hardware development, if then. Major research and development projects were always liable to change; examples include weight reductions in Surveyor, elimination of the Gemini paraglider, postfire modifications of the Apollo spacecraft, and extensions in the firing time of the J-2 engine. The separation of predevelopment from development planning in this chapter is mostly one of convenience.

[142] The thesis of this chapter is that NASA had many of the prerequisites for successful intermediate-range planning. Planning for the medium term is coterminous with the earlier stages of the NASA "programming" function: the process of formulating proposed missions; reviewing and approving such proposals and committing the resources to implement them; monitoring progress; and readjusting goals, missions, and resource allocations. Why and how was NASA able to develop a medium-range programming structure? First, NASA planning was emphatically not a series of forecasts; it was a course of action, an attempt to make things happen, almost after the fashion of business planning. NASA plans served as bases for current decisions. Second, at least one program office during the 1960s-the Office of Advanced Research and Technology-engaged in defensive research, which aimed not at a breakthrough but at staying abreast of the latest technology. NASA management felt it disadvantageous to link research too closely to current operations. As James Webb explained to one Congressman, ". . . it is definitely not our policy to demand a mission requirement as justification for the expenditure of development funds.... If we are to feel sufficiently free to initiate this kind of program in the first place, we must not expect each development to find a mission use, nor restrict ourselves by a policy that would require every program to be carried to a full demonstration."1 In this view, defensive research was amply justified if it highlighted possibilities for future projects, if it identified a new spectrum of technologies, if it fed into existing programs as supporting research and technology, or if it assisted agencies like the Federal Aviation Agency in meeting requirements-for example, in understanding the physical phenomena associated with sonic boom and turbine noise.
The Office of Advanced Research and Technology was effectively charged with preparing a shelf of research programs, only some of which would lead to improved flight hardware. The others would define the state of the art against which future missions would press.

Third, the NASA organization had one division during the mid-1960s, the Office of Programming, that could provide management with independent technical advice. The importance of this small office was out of proportion to its size, owing to the combination of fiscal and technical review in the same office until the 1967 reorganization and to the expertise of the staff, many of whom (like DeMarquis Wyatt, William Fleming, and Bernard Maggin) had worked in Abe Silverstein's Office of Space Flight Programs before transferring to Deputy Administrator Seamans' staff. Until 1967 the Office of Programming was probably the closest thing NASA had to a central planning staff. By its functions of review, project authorization, and development of cost models for flight programs, it did much to ensure that NASA had a single, coherent program.

Fourth, planning was made easier because the main bottlenecks were only technical and fiscal. The problems that dogged Apollo and Gemini were principally of this kind: Has it been properly tested? Will it fly? How can the spacecraft be integrated with the launch vehicle? Indeed, most NASA programs in the 1960s were undertaken when circumstances had made them technically ripe and when the principal constraints were either the higher priority of other [143] programs (say, Apollo in relation to the Orbiting Observatories) or lack of funding. Thus, NASA's planning problems were minimized considerably by the lack of institutional constraints. But it is in applications and, above all, in aeronautics that such constraints made themselves felt. NASA's aeronautical programs have usually been planned to support R&D in the civil aviation industry and in the Department of Transportation (DOT). And the problems in aeronautical R&D are not primarily technological at all. As a joint DOT-NASA study noted, "Technological advances are subject to a variety of institutional constraints which can be categorized as regulatory and legal, market and financial, attitudinal and social and organizational."2 Problems such as aircraft noise, congestion in and around airports, and the feasibility of low-density, short-haul service cannot be treated solely as technological limitations, as NASA managers treated their space programs.

Finally, the organization of the program offices (and to a smaller extent, the centers) explicitly recognized the interdependence of NASA programming. The 1961 and 1963 reorganizations attempted to group the centers in related fashion. An obvious example of the connectedness of NASA programming has been the authorization and construction of facilities in advance of the programs they were intended to support. Consider, for example, the role of the Office of Tracking and Data Acquisition, whose mission was almost entirely one of supporting the other program offices. Tracking stations had to be built; radio antennas had to be designed to cope with the extremely weak signals transmitted by deep-space probes; continuous coverage had to be available for spacecraft in highly elliptical and synchronous orbits; and some means had to be devised to handle the ever increasing rates of data transmitted by such advanced spacecraft as Mariner 9, Viking, and eventually, the space shuttle. These capabilities had to be available when needed, and it is a tribute to NASA programming that they were.

Equally important to the success of NASA planning, the agency had to create mechanisms for "cross-servicing," by which a center reporting to one program office could work for another.* This presupposed that some centers already had a high proportion of "institutional" facilities, like the wind tunnels at Langley and Ames and the data processing equipment at Goddard, that could be used by more than one center (or agency) for more than one program. Cross-servicing cut across but did not negate the equally important concept that each project, except for Apollo, should be lodged in a lead center with responsibility for overall coordination. The official position was that each center had its mission within the total NASA mission and that the agency's objectives were "not amenable to clear and easy separation one from the other.... the view that the agency program and . . . resources are each to be managed in total provides significantly greater flexibility to . . . agency management than would otherwise be the case."3

[144] These elements of the NASA organization-especially the establishment of a central office for technical review, the interdependence of agency programs, the long lead times characteristic of R&D missions, the relative lack of nontechnical constraints-encouraged the agency to plan. But they neither forced nor dictated the actual structure of NASA programming.** The remainder of this chapter examines the NASA program planning structure, first by considering the agency-wide guidelines for review and authorization, and then by taking a closer look at the planning philosophies of the four program offices: Tracking and Data Acquisition (OTDA), Advanced Research and Technology (OART), Space Science and Applications (OSSA), and Manned Space Flight (OMSF).




Chapter 3 includes an account of the Office of Programming that examines the need for an independent staff arm in the Associate Administrator's office; the studies of February-April 1961 conducted by Young, Siepert, and Hodgson; the establishment of an Office of Programs under Wyatt before the November reorganization; and the subsequent creation of a Planning Review Panel to coordinate the agency's advanced studies programs. This chapter offers a fuller account of Wyatt's office in operation, what it did, and how it did it. The number of functions within the office kept shifting; some, like responsibility for publishing the material presented at program reviews or for coordinating facilities planning, were transferred elsewhere. But three divisions in particular were the core of the NASA programming function: Resources Analysis, Budget Operations, and Program Review. The first was responsible for "the overall review and assessment of the planned and actual utilization of all resources available to the Agency and for the development . . . of improved . . . evaluation-validation techniques for all NASA appropriation categories." The Budget Operations Division implemented all programming decisions approved by the Associate Administrator and was charged with submitting all budgetary data to the Bureau of the Budget and the Congress.4

But it is the Program Review Division from 1961 to 1967 that is of chief concern. Director William Fleming was one of the key officials in the authorization process. As head of the Planning Review Panel, he coordinated advanced studies with the program offices; as a representative to the Aeronautics and Astronautics Coordinating Board, he coordinated NASA facilities planning with that of DOD; and as head of Program Review, he reviewed and approved project proposals under all three appropriation accounts (research and development, administrative operations, construction of facilities) before sending them to [145] Seamans. Once signed, each proposal became a project approval document (PAD), which authorized the program office and its field installations to proceed and to let contracts; the PAD approved, in principle, the scope of the project and the means for getting the work done.

Only in the last analysis was the authority vested in Wyatt's office the authority to say no. Typically, a proposal would be revised, modified, and discussed until something acceptable to both sides emerged. Fleming's division had to be an independent source of technical advice to Seamans, while maintaining the confidence of the program offices in its objectivity and technical competence. Furthermore, the kind of review conducted in Fleming's office was not entirely, or even mainly, concerned with the technical soundness of proposals. It could be assumed, for one thing, that the program offices knew their business. Instead, the Program Review staff asked such questions as the following: How does this fit in with the agency program? Does it duplicate facilities? Are schedules realistic? Can the cost estimates for the project be validated? Was a "Huntsville proposal" a proposal of the Marshall center or "a Huntsville contractor proposal that had flowed through" Marshall?5

Let us examine two cases of the review process in action: the first involved a proposal to build a fluid mechanics laboratory at Marshall; the second, a proposal to continue funding a space science data center at Goddard. In the summer of 1964 Marshall requested that its proposal for a fluid mechanics laboratory, already refused by Seamans, be included in the FY 1966 budget.6 After a thorough review, Fleming approved the proposal.*** He began by assuming three possible approaches to the request. Marshall could continue its arrangement of testing hardware in Lewis facilities; it could use the nearby Air Force facilities of the Arnold Engineering Development Center, with the Lewis facilities to be used only for final configuration testing; or it could develop test facilities at a new laboratory, with Lewis' supersonic wind tunnels again to be used only in the final testing phase. In a sense the decision to be made was technical; a decision to proceed would represent Seamans' judgment that a new facility would not duplicate any other facility in NASA or DOD. But it was precisely for this reason that Marshall's first request had been denied: The Air Force had claimed an existing capability at the Arnold Engineering Development Center. If NASA proceeded to authorize the Marshall laboratory, Seamans and Fleming would be faced with reversing their original decision and justifying the warranted duplication of an existing facility. Just as important, the laboratory, if approved, could not possibly be built in time to support the early Saturn IB and Saturn V flights, which would have to be supported by facilities already in existence. Thus Marshall's proposal was twice vulnerable: Not only would it duplicate an existing facility, it would not be ready when needed.

[146] Despite such persuasive reasons for refusing Marshall's request, Fleming argued that the justification for proceeding was even stronger. By authorizing a fluid mechanics laboratory, headquarters would give Marshall the same in-house competence in fluid mechanics as it already had in guidance and control systems or in static testing engines and stages. Moreover, the desired competence in fluid mechanics could be used to round out the center's launch vehicle development capability. Fleming conceded that such a laboratory would not be ready before 1969 at the earliest. But this, he argued, was precisely when the center would be ready for a new launch vehicle development assignment. And unless construction began well in advance of the assignment, it would be unavailable "for the preliminary fluid mechanics test which plays an important role in the early design and development phase of a launch vehicle stage." Once the proposed facility had been completed, it might also reduce the amount of testing that Marshall required in the Lewis supersonic wind tunnels. The decision to approve the Marshall proposal was not made on narrowly technical grounds but rather on a consideration of NASA-DOD relations, the need to strengthen in-house competence of a major development center, and a review of future programs for which the facility might be needed.

Similar considerations were involved in the decision to continue work on the National Space Science Data Center, established at Goddard in April 1964 to collect and maintain an inventory of data from sounding rockets and spacecraft. Unlike the Marshall proposal, the problem was not whether to approve the concept in principle-that was already settled-but how to bring it in line with other NASA policies. In November 1965 Fleming, after reviewing the project approval document submitted by OSSA for continued funding of the data center, recommended that Seamans sign it.7 Again, there were considerations other than the technical feasibility of the data center concept itself. In his covering letter of approval, Seamans observed that "the ultimate development of a NASA Space Science Data Center has far reaching implications which become deeply involved with agency policy."8 From this letter and Fleming's staff paper on the subject, it becomes apparent that these implications were broadly political. NASA had to coordinate its policies for data exchange with those agencies that would prove to be heavy users: DOD, the Commerce Department, the National Science Foundation, and the National Academy of Sciences. Within NASA, the program offices would have to consult with each other. OSSA, the manager of the center, would have to consult with OMSF on the data obtained from manned flight and justify to OART the absence of facilities for storing and disseminating OART data. OSSA would also have to prepare a plan for development and funding of the center itself: the estimated workload from all NASA installations and other agencies, the options for running the center efficiently, and the types of data to be stored and disseminated. In essence, the decision to go ahead with a space science data center represented a fairly major policy statement. 
The considerations that Fleming had in mind in recommending continued approval impinged on NASA's interagency relations, relations with the scientific community, and his perception of the kinds of research the center was designed to support.

[147] The two programming decisions were neither particularly large in relation to total agency resources nor particularly well known outside NASA. Yet a number of these decisions tended to shape the agency over the long term. The Office of Programming made the assumptions underlying such decisions explicit and presented the options available to top management. Needless to say, the policy considerations in relatively minor decisions were also present in more important ones, such as the decision to assign the management of a major project to this or that center. Consider the policy elements involved in assigning the management of the launch vehicle for the unmanned Voyager spacecraft designed to land on Mars. In December 1964 NASA issued a PAD for Voyager, assigning project management to the Jet Propulsion Laboratory (JPL) and stipulating that the launch vehicle would be the Saturn IB/Centaur. This still left open the choice of program office and center charged with launch vehicle management. A decision to assign responsibility to OSSA and the Lewis Research Center would give one result; a decision to assign it to OMSF and the Marshall Space Flight Center would give another. The advantage of going to Lewis was that the project would have a minimal impact on Apollo; of going to Marshall, that the responsibility for overall design and testing would be placed at a single center. In October 1965 NASA officials reversed themselves and decided to use the Saturn V for Voyager. There were several reasons for this reversal, but the main one seems to have been the conviction of Marshall officials that "the Saturn V would relieve payload constraints . . . and the launches were scheduled for the late 60s and 70s, just when Marshall's work for Apollo would be slackening off."9 The Voyager decision suggests the complexity of NASA's program planning and indicates why a division like the Office of Programming was needed.




One category of decision, authorizing advanced studies, was peculiarly sensitive to nontechnical considerations. The Planning Review Panel was created in October 1963 to pass on study proposals, particularly but not exclusively those generated internally. It is important to explain just what these studies were and why they were such a headache for top management.

By definition, an advanced study pertained to "flight missions beyond those currently approved or studies of as yet unapproved spacecraft, launch vehicle, or aircraft systems that may lead toward such future flight missions or studies leading to significant changes on an already approved configuration of spacecraft and launch vehicles."10 So broad a definition could encompass almost any kind of study, and the results of an advanced study or even the decision to authorize one could involve NASA management in various difficulties.11 Such studies were not particularly expensive; although OMSF received $20.3 million in FY 1965 for studies of advanced manned missions, the annual cost to the program offices for advanced studies was normally in the hundreds of thousands rather than in the millions of dollars.12 The difficulties alluded to were of a different sort. For [148] example, if a program office let a study contract, there was danger that the contractor would be in a particularly favorable position to win the contract that might result for flight hardware. Short of banning study contractors from bidding on hardware, NASA could resolve the issue either by making the results of a study contract available to all prospective bidders or by letting multiple study contracts, either for parallel efforts or to consider separate aspects within one study program.13 As with in-house studies, every proposal for a contractor study first had to be reviewed in Fleming's office before it was sent to Seamans, with a recommendation to sign, reject, or keep on the back burner.

A much more serious problem was that the award of or even the announcement of a study contract seemed to commit NASA prematurely to certain programs. The decision to authorize a manned space station study was construed by Congress as an attempt by NASA to present it with an accomplished fact. Indeed, it was to fend off such suspicions that the Planning Review Panel was established in the first place. Webb specifically asked Seamans to prevent studies from going ahead without his knowledge and causing trouble for him on Capitol Hill.14 Judging by subsequent events, even this was not enough. In July 1967 the Manned Spacecraft Center (MSC) issued invitations to twenty-eight firms to bid on a study of a manned Mars and Venus flyby in 1975 and 1977. MSC could not have picked a worse moment to announce such a contract, and the request had to be withdrawn. At an August meeting to discuss the NASA budget, Webb complained of rumors that


people at Huntsville and other places . . . say they'd like to keep the image before the country that somehow man is going to go to Mars and Venus. But I do think that the image of NASA when we're fighting for our lives here in the major programs ought to be one of controlling those things, or at least not make them a major matter of publicity on the theory that maybe they will elicit support.... it just seems to me that this is not the right atmosphere to be emphasizing that or having people say that if we just cut out that kind of money we could get along better, therefore . . . cut 10 percent off our budget, which is what the tendency in Congress is.15


Five weeks later Voyager was canceled; it was eliminated in conference by the congressional appropriations subcommittees that approved NASA funds, partly, it seems, because the committee members believed that the unmanned project was a first step toward a manned voyage to Mars.16 What applied to approved projects could be said to apply with undiminished force to advanced studies. They were essential building blocks for agency planning, but without some kind of coordination-with Seamans, Fleming, Wyatt, and the heads of the program offices-no other element was more likely to be misunderstood.

Another difficulty in preparing advanced studies was the potential conflict of bureaucracies. That conflict might be internal, for example, OSSA and OMSF competing for the right to carry out studies on manned space stations.17 Or it might be NASA in jurisdictional conflicts with DOD, whose interest in space stations overlapped that of NASA. The issue surfaced in 1963 and involved three distinct questions: Were advanced studies included in the Webb-Gilpatric and Gemini [149] agreements, by which NASA and DOD agreed not to undertake major spacecraft or launch vehicle development without first seeking the approval of the other agency? How far did existing study programs duplicate each other? If the President approved a manned space station for the military, what role would NASA have in supporting the program? At a meeting of the Aeronautics and Astronautics Coordinating Board Manned Space Flight Panel in March 1963, the cochairmen requested the panel to make recommendations for NASA-DOD coordination in this area.18 NASA's rationale for a space station was straightforward. First, "men and equipment had to be tested for long duration in the weightlessness of space, looking toward the time when trips would be taken into outer space and the planets. Second, a space station would be an ideal scientific laboratory in which to conduct . . . research into the basic physical and chemical characteristics of matter in space. Third, it would be easier and cheaper to assemble components for launching planetary and celestial voyages in space rather than on earth."19 In short, the space station concept was basic to NASA's post-Apollo planning. For DOD the military value of a manned space station was very much open to question-and a November 1963 study by the President's Science Advisory Committee did question it.

NASA's positions in exchanges with DOD were that advanced studies were not covered by previous agreements, that a joint-concurrence approach would lead to delays that NASA was not prepared to accept, and that NASA could not accept an effective veto power by McNamara over its study programs. For his part, McNamara tried to pressure Webb into signing draft agreements that he sent to NASA before the agency had the chance to study them. (According to the former Director of NASA's Office of Defense Affairs, this "was a gambit used more than once by Mr. McNamara.")20 Twice he sent Webb signed agreements, and twice Webb refused to cosign. NASA was already proposing a $3.5 million study of a Manned Orbital Research Laboratory; and while Webb was willing to "go more than half way" in meeting McNamara's requirements, he did not rule out unilateral action in case of disagreement.21 The NASA-DOD agreement of 14 September 1963 was at most a compromise. The draft agreement, prepared by the Office of Defense Affairs and signed by McNamara with reservations, provided that advanced studies on a manned station would be coordinated through the Aeronautics and Astronautics Coordinating Board; after joint evaluation studies Webb and McNamara would make a recommendation to the President, including a recommendation as to which agency would direct the project; and if the President gave his approval, a NASA-DOD board would map out objectives and approve experiments.

But this agreement raised more questions than it answered. On 10 December 1963 McNamara canceled the Dyna-Soar (X-20) orbital glider program and announced that he was assigning to the Air Force the development of a near-Earth Manned Orbiting Laboratory (MOL). DOD officials chose to regard MOL as something other than a space station, hence not covered by the September agreement. The upshot was that in 1964, as a congressional report acidly noted, "the [150] separate NASA and DOD efforts . . . appeared to be subject to only a minimum of coordination."22 NASA continued its advanced studies of space stations and began to let contracts for studies of follow-on uses for Apollo hardware. That there was an element of duplication between the NASA space station and the MOL seems obvious. That NASA did not wish to be fettered by prior DOD approval seems equally clear.

The moral seems to be that, short of the Space Council, whose coordinating authority was shadowy at best, and the President, there was no mechanism for meshing NASA policy on advanced studies with its only direct competitor. Within NASA, top management used several strategies to keep advanced studies under control. There was agreement in the agency that exploratory and feasibility studies were best done in-house, that they could be accomplished at minimal cost, and that the more detailed the studies were, the more important it became to call in industrial know-how. But there had to be some authority to coordinate studies and to prepare guidelines to resolve the problems they raised. Was it necessary, for instance, that a given study run continuously in order to keep abreast of the state of the art? Which studies were to be in-house and which contracted out? Should follow-on studies be authorized before the studies of which they were a continuation had been evaluated? Should NASA award development contracts only to those firms already awarded study contracts or to any qualified bidder? The last question is investigated in the account of phased project planning.

A task force study conducted just before the 1963 reorganization showed that NASA needed a policy on study contracts. Of 114 studies considered, which totaled $30 million, the task force concluded that 3 were appropriate but not as advanced studies; 27 appeared to be duplicative or "premature"; 3 were more appropriately done in-house; and another 17 required guidelines from management. The task force also noted that, while each program office had its own reviews of study results, "a uniform procedure for assessment and utilization [did] not exist."23 There was no uniform review, no definition of the kinds of studies the agency ought to be doing, no indication of study priorities. Many advanced studies were not submitted for review because they were funded separately as supporting research and technology.

It was to meet these needs that the Planning Review Panel was created. Its mission encompassed not only a review of each study but also the preparation of an agency-wide study plan. Studies to be contracted out would be approved by Seamans on PADs; the panel would then review specific studies to see that they conformed to requirements. Furthermore, each review would serve as a point of departure for the next round of studies. Thus, in the fall of 1964 Seamans wrote to the program offices, asking them to state the guidelines for studies contracted for FY 1965. The time was past, he noted, when the program offices could paint with a broad brush. Based on the panel's Advanced Study Mission Review, he announced that the 1965 contract studies would focus "on a few flight mission areas.... rather than being as widely diversified as in the past."24 OMSF would concentrate almost entirely on "the definition of a program for manned earth [151] orbital operations that will best utilize the Apollo, Saturn IB and Saturn V capability currently being developed." The emphasis would be more on using existing hardware than on using post-Saturn launch vehicles. OSSA would define Voyager and develop programs for unmanned planetary exploration. In applications, OSSA was expected to define the next generation of advanced technology satellites and to continue work on an operational meteorological satellite.**** OART would work to define the next generation of research and technology programs.

The annual review uncovered many weaknesses in the program offices' advanced studies. In June 1967, for example, Fleming wrote to OMSF about its proposal for a manned space station study. He noted that the study plan duplicated about 50 percent of a study being conducted by an agency-wide working group, that it was based on a single assumption, and that it made no comparison of the advantages of manned versus unmanned stations. Even if viewed as an OMSF exercise, the study proposal left something to be desired. Some studies would be carried out by Marshall, some by Houston, and one by OMSF itself; it was not clear whether OMSF or some lead center would coordinate and relate the studies to an overall plan. Also, the procedure whereby headquarters would direct the studies while a contractor carried them out would not work. Such work "NASA can and must do in-house. If we cannot find the time or people to carry out such work then there is a real need for a reassessment of how well our human resources are being utilized."25 How could a development center like MSC be expected to accept the results of these studies without doing a complete evaluation of its own? Why should OMSF turn to a contractor for space station configurations when MSC had the most highly qualified group of engineers in the world for such work? "A group such as this would finish with a product that is usable by NASA since, being the creation of a development center under the direction of Headquarters, it would be acceptable to both." Fleming ended by strongly urging that OMSF cancel the proposed studies. If NASA management found it difficult to control its advanced studies programs, it was not for lack of expert technical advice.

In summary, advanced studies were of basic importance to NASA planning because they marked the beginning of the R&D cycle. Whether they pertained to launch vehicles, missions, or spacecraft, advanced studies tended to set the direction of long-run agency planning, although they seem to have had little direct impact on hardware development. For this reason, management had every reason to keep a tight rein on what the agency would authorize and fund. The Planning Review Panel drafted an annual study plan; PADs were issued that encompassed each program office's studies in the six mission categories;***** and study contracts also had to be approved by the Associate Administrator, whose authority to [152] withhold such approval was formal. On the basis of a recommendation by the Planning Review Panel, Seamans in August 1967 rescinded approval of twenty-two study PADs not yet placed under contract because they were untimely and inappropriate in relation to the current operating plan and to the budget position NASA had staked out for the ensuing fiscal year.26 The agency could not do without the kind of planning that preceded detailed project definition. But management's efforts to keep advanced studies under central control were a mixed success, owing to the lack of center-program office coordination, the absence of guidelines for study contracts, the seeming inability of some program offices to do their studies in-house, and the necessity as late as 1967 to withdraw study PADs.




Few aspects of NASA planning were new. DOD had faced most of the problems of advanced systems development several years before NASA was created. How to determine the element of risk in developing a new system, how to choose between alternative systems, how to select system characteristics in advance of competitive exploration, how to present program goals independently of a proposed solution-all these problems were inherent in R&D planning, whether military or civilian. The Air Force approach to systems management-conceptual phase, definition phase, acquisition phase, operational phase-showed more than a passing resemblance to phased project planning. However, the similarity of appearance matters less than the difference in results. The differences in NASA and DOD planning for R&D have been discussed elsewhere: the NASA emphasis on one-of-a-kind rather than serial production, the relative absence of costing models for NASA programs, and the use of agency installations rather than contractors for technical direction and systems integration. 27

Given such conditions, NASA management faced two principal problems: how to make programs visible and how to direct on an annual basis programs that spanned several years. R&D programs normally received "no-year" funds; that is, the money remained available to NASA until it was spent. Agency proposals normally had to be matched against agency funds and missions; this was the problem of deciding which programs to authorize. Once authorized, R&D programs had to be funded, and officials needed projections of how much the centers were spending and proposed to spend; this was the problem of financial management and review. And any plan had to allow for changes in current programs. Put differently, the chief planning and approval documents-the PAD, the program operating plan, the project development plan, and the forms authorizing and allotting resources-were NASA's attempt to resolve problems inherent in doing R&D. The problems included the following: Given the current state of the art, is such-and-such a realistic proposal? To what level of detail should the center be required to explain how it intends to carry out the project? How can NASA absorb the major changes that can be expected to occur during the life of the project?

[153] Chapter 3 discusses the origins of the NASA planning system. The following is a summary of the major developments in chronological sequence from the system approved by Glennan in January 1961 to the revised system of 1968. Glennan had established a system that, although far too complicated and soon to be superseded, had at least the germs of a uniform planning system. Changes in 1962-1963 simplified the process: the creation of the PAD, which described the scope of the plan approved by Seamans; and the revision of the project development plan, which, instead of preceding approval, became the single authoritative postapproval summary of how the program office proposed to accomplish its objective. These documents in turn served as bases for NASA forms (506, 504) that made funds available for a project. In fiscal terms, "the PAD establishe[d] the purposes for which funds might be spent; the 506 authorize[d] the writing of checks for these purposes; and the 504 [made] the deposit in the bank account which [made] these checks good." 28

This was the official planning and approval system that obtained until the 1968 reforms. Not that the format remained unchanged during this period-far from it. The PAD format was revised several times. In 1963, for example, it was changed from being reissued annually to issuance on a "cradle-to-grave" basis by the Associate Administrator.29 The phased project planning directive of October 1965 added a project definition phase to the cycle and slowed the process by which management moved almost directly from feasibility studies to approval of full-scale hardware development. However, no detailed guidelines were published until the summer of 1968. By then, Webb and Finger had moved closer to the point where planning, authorization, R&D funding, and budget formulation would dovetail into one system. Each PAD would correspond to one line item in the NASA operating budget, would be updated annually, would be signed by the Administrator or another official (e.g., the Associate Administrator for Organization and Management) who was delegated signoff authority, and would constitute a contract between headquarters and the center designated as project manager.

Even in a chapter on predevelopment planning, something must be added about NASA's financial management reporting systems. First, program authorization was a continuing process; thus, headquarters needed reports of actual as well as projected outlays. Second, NASA needed yardsticks of cost-effectiveness to assess the costs of future programs. In other words, the agency needed current data in order to evaluate future programs. The data normally appeared in two different formats. The program operating plan (POP), a quarterly submission by the centers to Seamans, showed actual obligations through the previous quarter and estimated future obligations through completion of the project. The POP served as a benchmark for measuring performance and R&D budget estimates, and as a basis for resource authorizations. In a sense, "pieces of the POP [were] approved by individual executive actions" taken by Seamans.30

The other reporting format was the contractor financial management system.31 It was intended to provide NASA with a financial tool for planning and [154] controlling project funds and with a basis for reports used by headquarters for overall planning. The effectiveness of such a system depends on the accuracy with which financial data can be reported, the avoidance of unnecessary detail, and the use of a common baseline for elements such as cost reports and change updates. Originally, NASA used a single format, form 533, which was approved by the Bureau of the Budget in April 1962 and was revised in 1964. But it would be another three years before NASA had an integrated financial management system. A 1965 survey by the Financial Management Division (Office of Administration) revealed a sad lack of uniformity: "personal choice of 'home-made' devices; little comprehension of the real purposes . . . of the system; nonuse by contractors or outright gamesmanship . . . very major downstream restructuring of entire project reporting levels and data specifications to accom[m]odate false starts... inadequate cross-communications between Centers and Contractors, Centers and Centers, and within Headquarters."32 The survey's findings led to revised procedures, which were published as two manuals in March and May 1967. The new system differed from its predecessor in three important respects. It replaced the single-format 533 system with four monthly and quarterly 533 reports; it permitted the contractor to use its own accounting system and time periods in preparing reports, so that NASA and the contractor would possess the same information; and the "blank stub" of the 533 forms (i.e., the absence of prescribed line item entries) enabled project managers to use whatever work breakdown structure they wished, provided that it was compatible with the NASA agency-wide coding structure used to classify all agency activities for reporting and budgetary purposes. 
Thus the revised system became an effective tool for preparing estimates of NASA's current and long-range needs.




Intermediate-range planning, program authorization, and financial management all flowed into one another, but the ability to plan and to budget depended, in the last analysis, on NASA's ability to prepare accurate cost models of its R&D programs. The larger the program, the more difficult this was; and what NASA learned from its costing studies was, only too often, that their main value was retrospective. In the Orbiting Astronomical Observatory, Orbiting Geophysical Observatory, Surveyor, and Nimbus programs, which were begun in 1959-1960, costs grew by a factor of four to five subsequent to project initiation;33 the principal reason was the lack of a well-defined spacecraft design or a clear definition of experiments to be developed (see table 6-1).

As a 1969 report noted, "one might have predicted the cost increases that were experienced as the spacecraft designs became better defined, the technological problems identified, and the experiment development and allied supporting effort established."34 There were substantial cost benefits where technology could be transferred from one system to another with little design change.
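The ratios in table 6-1 are simple quotients of the final (or current) estimate over the initial estimate. A minimal sketch, using the OMSF subtotal figures that survive in the table purely for illustration:

```python
def cost_growth_ratio(initial: float, final: float) -> float:
    """Ratio of the final (or current) estimate to the initial estimate."""
    return final / initial

# OMSF subtotals from table 6-1, in millions of dollars
print(round(cost_growth_ratio(12_738.0, 21_408.0), 2))  # 1.68
```

A ratio near 1.0 marks a project that was well defined at initiation; the four-to-five growth factors cited for the early observatory programs correspond to ratios of 4.0 to 5.0.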



Table 6-1.-Cost growth in selected R&D projects, 1958-1966, in millions of dollars.

[Most of the table's cell values did not survive extraction. Its columns were: Initial Estimate (Total Cost, Number of Spacecraft, Unit Cost); Final (or Current) Estimate (Total Cost, Number of Spacecraft, Unit Cost); and Unit Cost Ratio (Final/Initial). Recoverable row labels include Mariner Mars 1964, Mariner Venus 1967, Mariner Mars 1969, Subtotal OSSA, and Subtotal OMSF; among the surviving figures, Subtotal OMSF shows a total cost of 12 738.0 initial and 21 408.0 final, followed by what appear to be overall totals of 13 522.3 and 23 923.9.]

1 Cost ratio based on total cost.

Source: Memorandum, DeMarquis Wyatt to Thomas O. Paine, "NASA Cost Projections," 10 Apr. 1969.


[156] When such changes were required to adapt a technology, as for Lunar Orbiter, they could be enough to nullify any gains. With Orbiter, there were two sets of technology transfer: the spacecraft design itself and the camera system based on Air Force camera technology. To adapt the camera system required major changes in storage, film developing, and remote readout, changes that contributed greatly to Orbiter cost increases.

Ignoring the small projects that used proven technology, there were three cases in which it was possible to make accurate estimates of planned or current programs: when spacecraft and experiment design were established before the start of the project; when NASA bought production-line items, especially sounding rockets and certain launch vehicles; and when NASA compared current programs with the funding levels originally authorized. In the first case it was possible to reduce design changes during the development phase. In projects such as Relay, Syncom, and the Applications Technology Satellite (ATS), the cost increases were only about 1.1 to 1.3 times the original estimate. Each was designed before NASA began the project, and in the case of ATS "a substantial amount of design and demonstration of critical subsystems was conducted . . . prior to its definition."35 In the case of serial production most of the early estimates for launch vehicles and propulsion systems seem to have been quite unrealistic; one official called them "totally ridiculous." Here, the agency stood to gain much by accurate estimates. The tendency of the program managers was to make estimates based on what vehicles should cost and to ignore many of the hidden costs of development, especially the cost of assembling and maintaining the team that would produce the launch vehicle. The key was to separate more precisely the nonrecurring cost of producing the first unit-whether the motor, the airframe, the guidance system, or combined components-from the recurring costs of serial production. Despite the original cost overruns and schedule slippages, a launch vehicle stage like the Centaur could be produced serially once the hardware met design specifications. Today, Centaur and Delta are funded under R&D only because no other category seems to fit.
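The separation the launch-vehicle estimators needed can be expressed as a two-term cost model: a one-time nonrecurring cost (development and first article) plus a per-unit recurring cost of serial production. A minimal sketch; the figures are invented for illustration:

```python
def program_cost(nonrecurring: float, recurring_per_unit: float, units: int) -> float:
    """Total cost = one-time development/first-article cost + serial production."""
    return nonrecurring + recurring_per_unit * units

# Invented figures, in millions of dollars
total = program_cost(nonrecurring=300.0, recurring_per_unit=12.0, units=10)
print(total, total / 10)  # 420.0 42.0
```

Estimating from what a vehicle "should cost" amounts to pricing only the recurring term (12 per unit here); the average unit cost with the nonrecurring term included (42) is several times higher, which is the hidden cost the text describes.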

Because of the size of the programs, it is the third category that is of most concern. The concept of a cost overrun has meaning only in relation to some baseline of funding. The management information systems used by NASA were specifically intended to make these overruns visible. Such were the 533 reports, the manpower utilization report, and the NASA PERT/COST system that most prime contractors were required to use in reporting summary time and cost data. These systems were not invariably effective or welcome within NASA. A recent study of Polaris, where PERT-program evaluation and review technique-was first used, concludes that


PERT did not build the Polaris, but it was extremely useful for those who did build the weapon system to have many people believe that it did. . . . the program's innovativeness in management methods was . . . as effective technically as rain dancing.... It mattered not that management innovations contributed little directly to [157] the technical effort; it was enough that those outside the program were willing to believe that management innovation had a vital role in the technical achievements of the Polaris.36


The effectiveness of PERT on NASA programs is also open to question; in many programs PERT was introduced too late to make much of a dent in funding and schedules.37 The point is that by 1964 the Office of Programming had in hand sufficient data and experience to analyze some of NASA's major programs. A study of Gemini conducted in June 1964 uncovered a deficit of $83 million for completing the spacecraft alone, an estimate based entirely on information available to management-particularly the monthly Project Gemini "OMSF Program Status" reports (also known as SARP charts). The program was broken into five major categories, and program schedules were examined from inception.38 From these, the programming task force discovered that the program was overrunning both cost and schedule, that OMSF set about solving problems at the expense of time, and that total costs were likely to grow by a multiple of at least four. The evidence was sufficiently persuasive to lead to a drastic overhaul of Gemini design, management, and scheduling. In particular, the intervals between launches had been lengthening; MSC wanted launches two to three months apart, later four, before Mueller reduced the interval to two months.
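For reference, PERT's scheduling arithmetic is itself simple: each activity's expected duration is approximated as (optimistic + 4 × most likely + pessimistic) / 6, and a schedule estimate is the sum along the longest chain of dependent activities. A minimal sketch with invented activities and durations:

```python
def pert_expected(optimistic: float, likely: float, pessimistic: float) -> float:
    """Classic PERT beta-approximation of an activity's expected duration."""
    return (optimistic + 4 * likely + pessimistic) / 6

# Invented three-activity chain (durations in months)
chain = {"design": (4, 6, 14), "fabricate": (6, 9, 12), "qualify": (2, 3, 10)}
total = sum(pert_expected(*t) for t in chain.values())
print(total)  # 20.0
```

The critique quoted above is aimed not at this arithmetic but at how little such estimates constrained the programs they nominally described.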

The 1963-1965 period marked the maturity of NASA cost analysis. Besides the Gemini survey, the Office of Programming carried out several major joint studies: the Hilburn task force reports of September-December 1964, mentioned in chapter 4, which established the relation between schedule slippages and cost overruns; a study by the Aeronautics and Astronautics Coordinating Board of launch vehicle costs, which was based on the distinction between recurring and nonrecurring costs, and the unit cost for developing the first article in a production series; and a "cost validation" study at Marshall in the summer of 1963, a study that influenced the decision to implement phased project planning. The cost validation study was carried out by staff drawn from the Office of Programming, the Office of Administration, and OMSF. The study was ordered because Seamans was uncertain that centers like Marshall had a master plan for "pacing items" + in the Gemini and Apollo programs. The task force wanted to know the depth and status of mission plans and the basis for schedule and cost estimates; as one team member wrote in his daily log, Wyatt wanted to be able to tell Seamans that "there is a master plan-Behind the master plan are schedules-Behind the schedules there are work plans-That the work plans have been priced out."39

The studies were useful to the extent that top management wanted them, the Office of Programming had the staff to do them, and all concerned could agree on what they were trying to do. For officials to be able to plan at all, they had to understand the relations between schedules and costs or between the direct and indirect costs of launch vehicle production. No study could be effective unless officials at the highest operating level to which the study was addressed were [158] interested in seeing its recommendations carried out. Some of the most successful reports, like the Booz, Allen and Hamilton study of incentive contracts (pp. 103-105), did not claim to make policy as much as point the way toward carrying out a course of action that had already been decided. Similarly, the costing studies of Gemini or the cost-validation analysis of the Saturn V and J-2 programs took those programs as givens.40 The studies sought to reduce, if not eliminate the uncertainties inherent in the R&D process-uncertainties attacked in different ways by the PAD system and phased project planning.




To summarize the discussion to this point: the structure of NASA programming was intended to make planning more realistic, to test the validity of the planning, to "harden" concepts into development projects, and to serve as a mode of continuous review. Therefore, a document like the PAD could serve many purposes. It authorized projects at every stage of the R&D cycle from advanced study to advanced development; it served as the basis for the detailed project development plan, which was the prerequisite for hardware development; it preceded the issuance of every resource allotment; and it was the foundation for periodic financial management reports, such as the quarterly program obligating plans submitted by the centers through the program offices to Seamans. Since each phase of the R&D process (which in 1964 included advanced studies, project definition, and hardware development) called for a separate PAD, Seamans had three opportunities to intervene in the cycle.

Viewed in this light, phased project planning (PPP) appears to be what it was, the normal sequence in management theory for R&D. NASA had actually been phasing planning all along (see p. 84); PPP was introduced at a time when NASA had very few new starts, and the main reasons for enunciating the concept were to make actual practice more uniform and to give management at least one additional point at which to intervene. The issues that require explanation are the internal debate that preceded even the first tentative statement of policy, the delay between the policy directive of October 1965 and the detailed guidelines of August 1968, and the failure of line officials subsequently to understand or use PPP. A brief account may reveal something of each program office's approach to organizing complex programs. The difference between the 1965 and 1968 directives owed something to the three-year interval in which the agency tried to make PPP work. But it owed even more to the context in which the 1968 guidelines were drafted: They were only part of a system intended by Webb to channel resource authorization, planning, and program review through his office.

The 1965 directive established four phases in the life of a project: advanced studies (phase A), project definition (phase B), design (phase C), and [159] development/operations (phase D).++ In general, the program offices took exception not to the concept itself but to the terms in which it was couched. The draft guidelines did not recognize the need for establishing a project organization at an early date; they said little about the relative responsibilities of the program offices; they did not show how the concept related to the budgetary, POP, and procurement cycles; they did not distinguish very clearly between each of the phases; nor did they justify four decision points rather than the three that had previously been used. Most of the specific criticism was reserved for phases A and C. It was (and remained) unclear how advanced studies should be carried out-whether directly by the centers, under contract, or by a mixture of the two. One program manager criticized the phase A concept because it would involve several centers studying the same objective.


If these studies are to be of any value, they will essentially be in competition with each other. If they are not competing, it becomes doubtful that the centers' more competent personnel have been assigned to the task.... Additionally, each NASA center appears to be saturated with work and no relief in sight.... it is doubtful that large quantities of good studies will be generated from which to be selective. This is the foundation upon which the entire procedure was established.41

Although the avowed purpose of PPP was to foster "maximum competition," neither the preliminary drafts nor the published directive worked out such a procedure. One of the problems in planning a major R&D program is to find firms with the capability to serve as prime contractors; in some very large projects NASA has had to rely on a single source or face the alternative of creating, at enormous expense, competition where none existed. Followed strictly, the 1965 directive would have mandated full competition at every stage-an unrealistic state of affairs when one firm was the obvious choice for follow-on development work. It left open the question of whether competition in phase D should be open to any firm or only to phase C contractors, and it ignored the real difficulty in having the program offices work with bidders who had not been involved in the earlier phases of PPP. It was common knowledge that a firm not involved in the first two phases had no real prospect of competing successfully in phase C. In fact, the program offices continued to do things the old way, even after 1965. In June 1967 when NASA had to select contractors for phase C of the Voyager spacecraft, competition was limited to phase B contractors. As one center director put it, "nothing significant would be gained by attempting at this time to enlarge competition." 42

Thus the 1965 directive went too far, yet not far enough. It was very specific in listing the benefits of PPP and quite vague in explaining how the process would [160] work. For the program directors, agreement "in principle" obscured disagreements over detail. The directive omitted or glossed over many significant areas of planning: It set no cutoff point between large projects and supporting research and technology; it left open the possibility of limiting competition in the final phase to contractors already involved in detailed project definition; it said nothing about science, experiments, or payloads; and it did not specify how much time might elapse between phases. It is no wonder that the job of preparing detailed guidelines, originally assigned to Wyatt, went nowhere. The sheer difficulty of getting nine or ten headquarters offices to agree on anything was enough to stop Wyatt's people in their tracks, and agreement, when reached, was at a lowest common denominator level.

That PPP was implemented at all was due to Webb's determination to regain the control over NASA that he believed he had lost sometime before the Apollo fire. Chapter 3 summarizes the changes of 1967-1968: the creation of an Office of Organization and Management to bring the program and functional offices under Webb's control; the separation of technical review from budget preparation; the reorganization of the Office of Facilities to bring about master planning for the agency; finally, the overhaul of the system by which programs were planned, authorized, and reviewed. Webb wanted to know-because he did not think that he knew-what he was approving whenever he signed a PAD. By the summer of 1967 Harold Finger's Office of Organization and Management, especially the Planning Division, had prepared detailed guidelines, most of which were issued piecemeal the following year.43 The new system would emphasize supervision, whether directly by Webb or by delegation to Finger, and make it possible to track every approved project down to its smallest work package.44

The basic features of the system were outlined in a memorandum dated 27 January 1968 from Webb to the agency's key officials. First, there would be a NASA operating plan to serve as "the official consolidated statement of NASA resource use plans for the current year." Each item in the plan would have its PAD, which would set the objectives, and would specify funding and work authorizations. "Together, the operating plan and the PAD system [would] provide a double entry type of approval, control and audit system within which both program and administrative objectives [could] be achieved." The program directors had to assume several responsibilities in submitting a PAD: "first, to approve and endorse its substantive and technical merit; second, to take into account all related administrative and functional requirements; and third, to reflect these considerations in the documentation he sends forward through the Associate Administrator for Organization and Management." Before the PAD reached Webb's desk it would first go to Finger for his signature. Webb wanted to "cut out the concurrence mill, which sometimes took a year, with 25 or 30 concurrences required to prove something."45 By delegation from Webb, one signature would be enough. He also hoped for visibility; he wanted to know the right things so that "no one could bury a problem and keep it buried."46 It was of the greatest importance that the Office of Organization and Management was built around- [161] almost created for-an R&D person, someone who could meet the program offices on their own terms. The Apollo fire had shown just how much had been hidden in the organization; and Apollo Applications, with its budget stretchouts and reprogrammings, had made the need for tighter fiscal controls even more obvious.
Finger had to be able, if necessary, to say no; to refuse to sign a PAD if in his opinion it did not mesh with the NASA budget.47 There was always a temptation for such an official to let the program offices do as they pleased. But as Finger noted, "if you do that very often you completely confuse your system. You no longer have a system."48

The PPP guidelines, when finally issued, were not superimposed on the system just described. If anything, they were a sort of commentary, in which the PAD, the project plan that supplemented it, and the request for proposal all fell into place as parts of an encompassing system. Phase A was now preliminary analysis; phase B, definition. Each phase would be covered by a PAD, and every current year portion would be revised as necessary. The PPP guidelines clarified those matters that had led to so much disagreement. For example, competition in phase C (design) would be restricted to firms capable of going on to phase D (development/operations). Other details included the in-house nature of the preliminary analysis phase, the type of contract to be used (fixed-price or cost-plus-fixed-fee in phases B and C, incentive contracts in phase D), and the role of the centers and program offices in monitoring contractors during the final development stage. The system did not lessen responsibility below the Administrator's level; it was not intended to make R&D work self-regulating or mechanical, which would have been self-defeating. In simplest terms, its purpose was to inform Webb or Paine or Newell of what it was he was signing, hence what he had to defend before the Bureau of the Budget and Congress.




The foregoing account of NASA's formal approval systems necessarily leaves many questions open. For all the overlap that existed, the program offices were created and maintained for different but complementary purposes. This section provides brief surveys of planning strategies in each of the four program offices, whether its function was support (OTDA), defensive research (OART), discipline oriented (OSSA), or mission oriented (OMSF).


The Office of Tracking and Data Acquisition (OTDA)49

Elevated to program office status in December 1965, OTDA had neither programs nor centers. In most respects, its role set it apart from the other program offices: the requirement that it support all NASA programs, its extensive international activities, its almost exclusive use of support service contractors to operate and maintain its tracking stations, and its unbroken success in meeting its [162] schedules. OTDA's reliance on improvements in the state of the art was no greater than that of any other program office; yet the connection between technology and mission requirements is perhaps most obvious in OTDA. What has been most apparent in OTDA planning since the early 1960s has been the office's ability to keep funding requirements level and predictable. The office has largely accomplished this by closing many of its overseas tracking stations, consolidating its manned and unmanned networks, using fixed-price and award-fee contracts for facilities construction and operation, and developing a Tracking and Data Relay Satellite System to supplement its ground networks. By such means it has been possible to reduce the Deep Space Network (DSN) to three stations spaced at intervals of 120 degrees along a longitudinal axis and equipped with 64-meter radio antennas. More than any other program office, OTDA has managed to reduce the element of uncertainty inherent in R&D.

OTDA's success has depended largely on developing a sophisticated approach to planning. It must anticipate the needs of the centers, other program offices, and principal investigators. It must work out its requirements for supporting research and technology. It must have people stationed at the centers to assist in preparing the requirements documents that are the basis of OTDA planning: the system instrumentation requirements document, the network support plan, the work authorization document, and the like. Such planning demands a continuing dialogue between OTDA, program managers, and JPL and Goddard, the two installations responsible for almost all network support. Or rather, what one sees are two planning groups working side by side. On the one hand, OTDA has always planned its long-range network requirements. On the other hand, the centers must document the kinds of support they need for particular missions. On the OTDA side, the cycle begins with advanced studies to review and update network planning: new facilities, automatic data processing equipment, funding, and the like. This is followed by systems definition, which brings together OTDA and center personnel who negotiate requirements and draft a project plan. At every point, a complex flow of documentation is generated. The program offices provide a formal requirements document; this is validated by OTDA, which prepares its support plan; and JPL or Goddard then prepares additional material to justify the network support it is best able to provide.

The conspicuous feature of OTDA planning is that each case is determined by the characteristics of the mission to be supported. In general, unmanned deep-space probes have proved the most difficult to support, but the data rates of most spacecraft have increased by several orders of magnitude since the days of Explorer 1 and Mariner 4.+++ Particularly in the past decade, OTDA planning has involved tradeoff considerations, for example, the advantages of placing additional tape recorders on orbiting spacecraft versus augmenting the supporting ground [163] network. The relevant point is that OTDA must, to an extent, plan independently of specific future requirements. Once the general characteristics of future programs become known, the office begins feasibility studies. Formerly, OTDA let study contracts, but it now does most of the preliminary work itself. Consider, for instance, what was involved in designing the DSN 64-meter radio antenna at Goldstone, California, which was put into service in April 1966. Once the features of lunar and planetary programs were understood-extremely weak signals, long cruising periods, an anticipated increase in data rates-it became possible to plan network support.
To build the earlier 26-meter antennas at Goldstone and elsewhere had been difficult enough; to build the larger one meant resolving severe technical constraints, once it was shown that one big antenna would be more cost-effective than several smaller ones.50 Enormous steel castings had to be built to take the weight of the dish-shaped antenna; allowance had to be made for wind velocities and distortion caused by gravitational pull (both sides pulled in different directions when the antenna was not pointed at zenith); and the dish itself was made to rest on a thin film of oil, which served to cushion the mass of the antenna and to shut it down if the film's thickness decreased.51 JPL let parametric studies (see note 11) and followed them with a preliminary engineering report and a design validation study. This was phased project planning before that term was made official, and Seamans singled out the construction of this great antenna as "almost a textbook case" of the system he recommended for the agency as a whole.52

In sum, OTDA planning was guided by three principal considerations. First, the characteristics of the mission dictated, within rather broad limits, the kind of support that OTDA provided. Was the mission manned or unmanned, Earth orbit or deep space? If in orbit, was it synchronous or elliptical, and how many contacts per orbit were needed? Were the data needed on a real-time basis, or could they be stored for later retrieval? Second, OTDA was not simply a passive witness to decisions made elsewhere. Planning involved a three-way exchange between OTDA, the program office requesting support (and in the 1960s, DOD, which provided tracking support for Apollo), and JPL and Goddard. Third, OTDA has coped successfully with the vastly increased data transmission rates of the newer spacecraft systems because it has been able to use and build on existing capability. The first 64-meter dish was a major breakthrough; the two that followed, at Madrid and Canberra, were almost routine by comparison. It has been OTDA policy to increase "existing capability . . . only after thorough analysis of support requirements." By insisting on coordination with program offices "from project inception until achievement of mission objectives," the office was able to anticipate the facilities and technology needed a decade later.53


The Office of Advanced Research and Technology (OART)

It has been said that OART, more than any other program office, carried on the NACA tradition of doing advanced research in-house. But this is only partly [164] true, since what had once been the mission of an entire agency was now a subordinate part of the much larger entity that succeeded it. Like NACA, OART was charged with conducting research into the underlying principles of aeronautical and space technology, reducing "complex theory to design procedures," and testing systematically "to obtain design data for . . . vehicles of the future." 54 But OART had to go beyond NACA practice by "proving" a concept, that is, by building hardware to test it, whether or not the actual hardware found its way into future systems. The special features of OART work included a relatively large number of open-ended or continuing programs with no specified completion date, the role of the Associate Administrator for Advanced Research and Technology in reviewing supporting research and technology proposals by his own and other program offices, and the creation of a Mission Analysis Division-located at Ames, although attached to headquarters-in February 1965 to do advanced planning in order to identify future technology requirements.

All these features tended to change the role of the older centers. Some officials, notably Dryden, objected strenuously to the research centers' involvement in project management, which he preferred to leave to the newer development centers. Other NACA veterans, like Silverstein, believed that the older centers needed some development projects in order to open new research possibilities; if the centers had a few projects, they would inevitably spill into the centers' research programs. In this, Silverstein's view prevailed, but there was a price to pay. At Ames, for instance, the changes of 1961-1965 had a profound effect: the transfer of many research division heads to headquarters, the organization of research divisions around disciplines rather than specific facilities, the increased use of wind tunnels for development work rather than research, the establishment of a Life Sciences Directorate in a center hitherto devoted exclusively to research in the physical sciences, and an increase in manpower to cope with the management of those projects (and the inevitable contracting for hardware and services) assigned to Ames.55 In all this, much was undoubtedly gained; what was lost is harder to describe. The dilemma for OART planners was to justify the kinds of research done at the centers. Formerly, research in cryogenics or structural dynamics could be justified on the ground that it was worth doing for its own sake. But OART was created to foster research that could be justified on its merits and that would feed into NASA programs. The question posed was this: How could OART coordinate a number of small-scale efforts and organize them in related fashion, yet not tie each research task to a specific mission or completion date? As Finger, himself a product of Lewis, warned,


Any effort to define the experimental engineering as mission research and technology . . . weakens the entire basis for OART and for the OART program. It makes that program susceptible to assessment of the missions and dates defined rather than to the basic advances in capability to be generated by that work.56


This background serves to explain the difficulty, for OART, of sponsoring research that was at once independent of mission objectives and tied to NASA [165] planning. How, then, did OART plan, and how successful was it? Before its reorganization in October 1970, OART consisted of a Program and Resources Division (established from preexisting units in July 1964), seven program divisions,++++ and Mission Analysis.57 Mission Analysis functioned as OART's long-range planning group, and it was deliberately located at Ames to provide the group with more of a research atmosphere than would have been possible in Washington, D.C. As with OTDA, the flow of information was two-way: a give and take between the division, other parts of NASA, research advisory committees like the Advanced Research and Technology Board of OART, and other agencies, especially FAA. The purpose of Mission Analysis was to identify options for future planning and to estimate the time in which a certain technology would be needed, what was called the technology readiness date. Not that such work had to await the creation of the Mission Analysis Division. Earlier, Langley had been studying the feasibility of an orbiting Large Space Telescope; Lewis was working on advanced propulsion systems; while several centers carried on work in short-haul transport aircraft. What set Mission Analysis apart was that it functioned as a planning staff for the entire program office; it concentrated on missions rather than on state-of-the-art improvements; and much effort was spent on mode analysis, that is, the choice between alternate means of conducting a mission.58

A second planning area was OART's review of the agency's supporting research and technology (SRT). Related to this was the creation of the Program and Resources Division, which was to bring in-house research under some kind of management control.59 Besides the director, there were three subdivisions, each under a deputy director: Program Coordination, responsible for analyzing OART programs "for proper balance and for program overlaps or omissions"; Resources Management, which dealt with budgeting, funding, and reprogramming; and Administrative Management, which oversaw personnel, technical reports, congressional liaison, and the like.60 The division gave OART program balance, even at the cost of going over the heads of the program division directors.

The major problem in coordinating SRT was the sheer quantity of the work. The Office of Space Science and Applications alone was spending over $80 million on SRT in 1968, most of it tightly linked to near-term programs. OSSA projects ran into the hundreds: large unfurlable spacecraft antennas; improved pointing accuracies for orbiting observatories; guidance, control, and navigation systems for launch vehicles; lunar and planetary roving vehicles; sensors for application satellites-to mention only a few. Besides the work carried on in his own office, the Associate Administrator for Advanced Research and Technology had to be aware of such programs and the potential for wasteful duplication. He was supposed to review agency-wide plans for SRT; review the technical content of each SRT task, as these programs were called; and recommend a total agency program and the assignment of tasks to each program office.61 Internally, he had [166] to have the staff work that would enable him, if necessary, to say no to his division directors, or that would give him independent support where divisions refused to cooperate. This twofold problem-reviewing agency-wide SRT and meshing his office's programs with those of other program offices-was in some ways the opposite of that of OMSF: Where OMSF had a few very large programs, OART had a plethora of smaller ones, some of which, undoubtedly, had been authorized because they were "nice" to do.

If OART had a specific problem, it was lack of coordination between its own programs and those of other offices. There were too many PADs required for OART tasks; too little flexibility in allowing the centers to reprogram; and no mechanism for linking technology disciplines in one area, like avionics, with aircraft technology, which was in a separate category.62 This problem was aggravated by two others: the absence of a fixed percentage of the NASA budget for research programs, so that dollar levels remained constant or actually fell; and the absence of management continuity. OART had five successive directors between 1962 and 1969. This state of affairs, a chronic one in the upper levels of Government, was especially damaging to R&D management. Writing in 1969, one observer noted the "continuing short-term shifts of objectives . . . inadequate horizontal communications between centers . . . a thin middle management" and the tendency to label people as "NACA types," "aerodynamics types," or "vehicle types."63 This was the time when NASA began to adopt the program authorization system described earlier in this chapter. Partly to accommodate the new system and partly to handle its internal problems, OART made several changes between 1968 and 1970. It reduced the number of PADs from 30 to 8, the number of congressional line items from 8 to 3 (aircraft technology, space technology, and advanced research and technology), and the number of work units-the basis for OART reporting by the centers-from 5000 to 500 "Center Technical Objectives Resumes."64 The 1970 changes were intended to give OART programs a focus and a consistency they had sometimes lacked. Aside from changes in nomenclature, these reforms included establishment of a research council to ensure a balanced research program and authorization of the program division directors to issue instructions to the centers over their own, rather than the Associate Administrator's, signature.

Interagency studies, OART's third planning area, are best represented by the Joint NASA-DOT Civil Aviation Research and Development (CARD) Policy Study, begun in 1969 and completed in 1971.+++++ The specific recommendations of that study are of less concern here than the manner in which it was carried out and the reasons for its success. The study group had a specific objective and precise terms of reference: The House and Senate committees that authorized the NASA budget wanted to know the benefits accruing from a given level of R&D. Here, [167] NASA's role was almost a throwback to NACA's support for the military. Furthermore, the study group had a literature of policy studies on which to draw, from the 1948 "Finletter report" to the 1969 report of DOT's Air Traffic Control Advisory Committee. Thus the interagency working groups and the consultants who participated had some notion of how their work would tie in with and comment on previous policy studies. What made the coordination of NASA and DOT even tighter was the role of those NASA employees, including many top OART officials, who had gone to work for DOT. Moreover, the two agencies set up a joint office in January 1972 to handle followup work in three areas-aircraft noise abatement, airport congestion, and the need for improved short-haul transport-singled out in the report as high-priority items. In other words, the interagency team viewed its study as only a first step toward implementation of its major recommendations in civil aviation. And the final report was noteworthy in recognizing the importance of nontechnological constraints, such as regulatory systems, the social impact of airport congestion, and the cost-benefit effects of various levels of R&D funding.


The Office of Space Science and Applications (OSSA) 65

In turning from OART to OSSA, certain differences of program size and advisory structure are immediately obvious. OSSA sponsored programs much larger than those of OART, while the content of the programs-much more than in OART-was determined in part by outside advisors to NASA. But the term "advisory" scarcely does justice to the role of the Space Science Board of the National Academy of Sciences or the Space Science and Applications Steering Committee (SSASC), which, established in May 1960, assisted OSSA in selecting scientific payloads for flight missions. SSASC and its subcommittees served many purposes: they strengthened contacts between NASA and the scientific community, gave representation to various interest groups, and acted as source evaluation boards in choosing principal investigators. Their functions were legal as well as advisory, since NASA could not negotiate exclusively with a university investigator without SSASC approval.

The relation of OSSA to its advisory committees was one of the most serious policy issues facing Newell and John E. Naugle, who succeeded Newell as Associate Administrator for Space Science and Applications in September 1967. To say that the problem involved differences between NASA and outside scientists over the scope and functions of advisory committees is to underestimate the complexity of the issues. First, agency officials sought to avoid setting up boards so structured that NASA would be bound by whatever advice they offered. This had been an issue between NASA and the Space Science Board as early as 1959, when NASA acted to make the board "less of an independent advisory group with a role in initiating policy and more of a service entity responding within carefully prescribed limits to tasks specified by NASA."66 Thus, when NASA proposed developing an Orbiting Astronomical Observatory (OAO) in 1959-1960, the [168] board recommended that NASA instead support rocket- and balloon-borne experiments in astronomy. Only in 1962 did the board bow to an accomplished fact and endorse the OAO.++++++ Similarly, NASA rejected the recommendation of an ad hoc Science Advisory Committee in 1966 that the agency establish a general advisory committee of non-NASA scientists reporting to the Administrator. Webb had a history of rejecting this proposal because he thought a committee of outsiders might interfere with his authority to make policy for NASA. If established, such a committee would blur the lines between advising and policy making, assume some of the functions of the Space Science Board, serve as a crutch for a weak Administrator, and take over functions already delegated to the Deputy Administrator and the heads of the program offices.67 To Webb and Newell, the pros and cons of a general advisory committee reduced themselves to purely administrative terms. 
To the members of the Science Advisory Committee, its proposal was justified by frustration in serving on committees chaired and dominated by NASA employees.

Another problem was the relation between the various advisory groups on which OSSA drew. The complexity of the advisory process more than matched the complexity of the programs for which advice was sought. By 1967 the system of the early 1960s was no longer adequate. There were no guidelines explaining why or whether NASA needed such groups, how they were to be used, or the jurisdictions of the Space Science Board, the Missions Boards composed of non-NASA scientists and established in 1967 to map out overall strategies for NASA science programs, and the SSASC subcommittees. In administrative terms the structure of NASA advisory boards looked backward "to the days of discrete programs rather than forward to flight and research environments characterized by high degrees of interdependence between disciplines, between science and engineering, and between techniques of flight investigation."68 To NASA, the way to make the system work was to bring in the most capable scientists to shape the content of space science, while keeping control of programs in NASA hands. But to many scientists, the advisory process could not be a dialogue between equals because as outsiders they could have no authority for final decisions and could not know as much about NASA programs as NASA employees did.69

Outside scientists assisted NASA as advisors, as principal investigators, and as members of boards to evaluate proposed experiments. How did OSSA, building on their work, organize and plan its programs? Organizing space science was no simple matter, since each program was a combination of scientific payloads, the spacecraft that flew them, and the vehicle (developed at non-OSSA centers, principally Lewis) that launched them. When considering a potential mission, it was necessary but not sufficient to ask, "Are the scientific objectives worthwhile?" OSSA officials had to go three steps further: "Is it technically feasible?" "Are there sufficient people to do it?" "Can we get the funds to support it?"70 The [169] OSSA program structure was designed to resolve these questions; its purpose was to combine the evaluation of proposals for basic research with the management of programs calling for engineering skills of a high order. Figure 6-1 illustrates some of the most important features of the OSSA organization.

The fundamental organizational principle was "the establishment of a manageable number of technical offices to handle separate program areas."71 Each division was intended to be as self-contained as was practical; each contained flight programs related to common objectives; and, without exception, scientific discipline groups were located in the divisions they were primarily intended to serve. Three other features, not clearly brought out by the chart, are also noteworthy. Each division contained a small Program Review and Resources Management group to provide administrative support; and except for Voyager, each had its own Advanced Programs and Technology group to assist in future planning. Furthermore, it was OSSA policy to pair scientists and engineers at each operating level; where the head of one division was a scientist, the deputy was an engineer, and vice versa. This practice, which Newell transferred from his experience at the Naval Research Laboratory, was designed to avoid the pitfalls of a strictly discipline-oriented approach, in which neither side had the ability to see the total picture. For this reason, OSSA management insisted that scientists named as principal investigators had to be prepared to get their hands dirty. The payload had to meet several criteria, as indicated earlier: cost, compatibility, and competence. The outside scientist had to become an insider, had to learn the engineer's language, had to grapple with the unavoidable tradeoffs in turning a research concept into flight hardware.

Another important OSSA concept was the distinction between the headquarters program manager, who was "the senior . . . staff official exclusively responsible for developing the Headquarters guidelines and controls," and the project manager, who was "the senior . . . line official exclusively concerned with the execution of his project."72 This distinction was not unknown elsewhere, but OART tasks rarely rose to the level of projects, while OMSF project managers tended to be systems managers within very large programs. The program manager reviewed the effectiveness of center management, identified alternate courses of action, and developed a close working relation with the project manager responsible for the effective day-to-day management of the project at the field installation. Moreover, the installations' roles and missions were quite distinct. Wallops Station managed NASA's sounding-rocket program; Goddard handled Earth-orbital and applications satellites; while JPL, a contractor-operated facility working for NASA, managed the Deep Space Network as well as a significant part of the agency's lunar and planetary programs.

The existence of these installations once more raises the issue of the purposes for which NASA research centers were being maintained. At Goddard, with some twenty flight projects in 1967, the maintenance of so much scientific and engineering talent in a Government laboratory could be defended on several counts. The Government could not contract out its responsibility for determining that it...



Figure 6-1. The OSSA Organization as of 1967.


[171] ....was getting good science for its money. It needed people who could build at least one subsystem of the spacecraft they had designed to fly. Goddard management chose to run flight projects in one of three ways: designing and building a spacecraft in-house (e.g., the Small Scientific Satellite); monitoring a contractor who designed and integrated the subsystems (e.g., the Orbiting Observatories); and following the procedure used in certain advanced systems, like the Nimbus weather satellite, in which the center "actually bought the subsystems and acted as spacecraft contractor and hired an integrator.... The Nimbus approach was twofold, to not only monitor, but you get up there in their plant and you are right over their shoulder."73 Once the center had developed several strong discipline areas, it was even better equipped to do its work, as scientists in one discipline began to work with and consult with people in related disciplines. For example, people in planetology worked with people in Earth resources, or scientists in optical astronomy worked with colleagues in meteorology, since the different divisions used the same general type of instrumentation. Moreover, the scientists who worked in these discipline groups performed important services for the whole of NASA. They evaluated research proposals for headquarters, advised other Government agencies on the value of the space program in fulfilling their purposes, and were detailed as experts to the program offices for limited periods. Finally, by attaching project scientists to each flight project, Goddard management tried to ensure that the spacecraft managers and the principal investigators would understand what the other was doing. The function of the project scientists was to bring about a "cross coupling and understanding of the needs of the experimenter . . . and what the project's problems are."74

So far the presentation has been limited to a still picture of the OSSA system of program planning. The results of OSSA planning presupposed the following elements within the organization: the existence of a strong in-house capacity to design programs, combined with the ability to integrate experiments with flight hardware; the establishment of a manageable number of technical divisions to handle separate program areas within OSSA; the cross-fertilization of scientific and engineering skills at each operating level; the creation of a Program Review and Resources Management Office to handle budgets, reports, and procurement policy; the establishment of separate program review and advanced mission groups in each division; and the corollary policy that planning, rather than being something imposed from the top, flowed upward from the centers, contractors, and advisory groups with which OSSA worked. In addition, with the adoption of a management information and control system in October 1965, OSSA had both an information system and a set of instructions that extended downward from the program division to the project offices.

This is a somewhat idealized version of how OSSA officials did their medium-range planning, or thought they did. Some ground rules, like that of pairing scientists and engineers, dated from the early 1960s. Other features, such as the establishment of advanced mission groups in 1966, owed something to planning for the post-Apollo period, when OSSA and OMSF would both be [172] staking claims to a piece-a rather large piece-of the action. And many of these principles had to be imposed over the stiff resistance offered by Goddard and JPL. At Goddard the differences between Director Harry Goett and headquarters officials became so serious that he was dismissed in July 1965. Here the issue seems to have been Goett's reluctance to accept supervision by headquarters program managers or to allow OSSA representatives to attend meetings between Goddard officials and center contractors.75

At JPL the situation was made more complex by the laboratory's status as a contractor-operated facility that behaved, for most purposes, like a NASA center. The disagreements between JPL and NASA, which were intensified by the string of Ranger failures, were touched on in chapter 2. The "mutuality clause" was an irritant, but the underlying differences had more to do with program management than with anything else: OSSA, in particular, insisted on a tighter, more projectized organization than the one to which JPL had been accustomed.

To state the purpose of OSSA program planning is to emphasize both the difficulty of the task and the office's success in reducing it to almost manageable proportions: "the coupling of the undisciplined scientific activity into a highly disciplined engineering and administrative activity-the design, preparation, and conduct of a space mission."76


The Office of Manned Space Flight (OMSF)

The foregoing analysis of how OART and OSSA conducted their planning accentuates the distinctive features of OMSF planning.77 The obvious differences between OMSF and the other program offices pertain to size and the kinds of programs that OMSF managed. Indeed, OMSF did not plan in the sense that Newell's or Bisplinghoff's office did. There was no similar structure of large and small projects, some under way, others phasing down, and others moving from design to development. At OMSF planning was as much within as between programs. At the end of 1961 all three of OMSF's major programs-Mercury, Apollo, and Gemini-had been approved or were ongoing. No new program was approved or introduced as a budget line item until FY 1967. In this sense, there was very little to bridge the gap between current and future programs.

The size and share of NASA funds and manpower enjoyed by OMSF put the organization in a special category, one not reflected in the organization charts. Although superficially similar to other program offices-it too was headed by an Associate Administrator and had to submit PADs for each program-the sheer size of manned spaceflight programs made control by Webb, Seamans, or Dryden difficult, or at least incomplete, compared with the other program offices. OMSF was semiautonomous within the agency structure, while the OMSF centers were semiautonomous, almost baronies, within the OMSF framework. Mueller, as well as the center directors, had independent ties with Congress, the aerospace community, the press, and, through the Science and Technology Advisory [173] Committee, the scientific estate. The real key to understanding the OMSF program structure is the high priority of Apollo and its special claim to NASA resources. If the responsibility for the development of the Centaur launch vehicle and its RL-10 engines was transferred from Marshall to Lewis, it was because, as a former NASA official explained, Marshall officials had much more interest in the Saturn vehicles that they had designed than in the Centaur vehicle, and for that reason they were prepared to see Centaur canceled. If in October 1965 Webb decided that the Saturn V would be used to launch Voyager, it was in part because he wanted to retain the Marshall capability once Apollo phased down. Because of the overriding claims of the lunar landing, NASA management and Congress were prepared to accept, tolerate, or encourage practices they might have disapproved of elsewhere, like the creation of Bellcomm, the extensive use of support service contracts, and the construction of facilities-Marshall's static test stands, the crawler-transporter at Kennedy Space Center-that were peculiar to one program rather than to the continuing needs of the agency. 
At the same time, once the large programs began to phase down, NASA would face grave problems. What would happen to the contract and in-house work force assembled to carry the lunar landing program to completion? What would become of centers like Marshall that were organized around a few very large development projects? What would become of OMSF after the first lunar landing? Did the Apollo hardware have uses beyond the program for which it was developed, or was a launch vehicle like the Saturn V a technological dead end? Insofar as the other program offices rode on the coattails of the Apollo program, they too were involved in its fate. The size and sunk costs of Apollo were such that a serious miscalculation in OMSF might drag the agency down with it.

For a clear understanding of the nature and purpose of OMSF planning, it is necessary to concentrate on two areas: the management approach of George Mueller, who succeeded Brainerd Holmes as Associate Administrator for Manned Space Flight in September 1963, and OMSF's Advanced Mission Studies program from its inception shortly after Mueller's arrival to the fall of 1965 when NASA submitted Apollo Applications as a budget line item.

Under Mueller, who came to NASA from Space Technology Laboratories, the manned program reached its "classical" phase. One might even argue that the most important administrative changes at OMSF occurred in a little more than one year, from September 1963 to the end of 1964. Mueller knew about Holmes's troubles and from the beginning expressed a desire to work closely with top management.78 But, although he was more diplomatic than Holmes had been, he was no less bent on having his way. During his first year at NASA, he devised technical and management approaches that were to dominate OMSF planning until well into the 1970s: the organization of OMSF along program lines instead of having one office working on launch vehicles and another on spacecraft; the division of each program into discrete "work packages"; the concurrent development of the vehicle and ground support equipment; the greater use of redundant (duplicating) systems in the launch vehicle and spacecraft; the introduction of the [174] concept of the "open-ended" flight mission; 79 and the all-up mode of flight testing and, with it, the delivery of complete systems to the Cape. Mueller knew of the Air Force's experience in the all-up testing of Minuteman.80 Despite the initial resistance of his center directors, Mueller was able to sell the concept to them because the logic of the situation-the "end of the decade" deadline for the lunar landing, the knowledge that programs were slipping dangerously, the inefficiency of the current mode of flight testing-made some sort of change inescapable. NASA could not afford to repeat the Saturn I testing experience, in which four launches of the first stage were followed by launches of coupled second and first stages.81 The centers, Marshall in particular, had to take a bolder approach. Mueller believed that NASA no longer needed and certainly could not afford a step-by-step advance.
For this reason he also decided to cancel all manned Saturn I flights and to man-rate only the Saturn IB and Saturn V launch vehicles.

Mueller was equally radical in handling headquarters and center operations. He restructured the Apollo program so that every functional element at the headquarters program office had a corresponding element in the center project office. The several systems comprising the Apollo spacecraft were defined through the subsystem level, and for each of the major systems he required that one person be responsible full-time for performance, costs, and schedules. In short, Mueller acted to stratify his organization to the lowest level. As Associate Administrator for Manned Space Flight, he defended his programs before top management and Congress, set and interpreted policy with his program managers and center directors, and set the terms on which long-range planning would proceed. To Apollo Program Manager Brig. Gen. Samuel Phillips, USAF, who had been the Minuteman manager and, more important, Vice Commander of the Air Force Ballistic Missile Division before being detailed to NASA in 1963, Mueller delegated responsibility for planning schedules, budgets, systems engineering, and other functions needed to carry out the program. Below Phillips' level were the center program offices, the prime contractors, and the intercenter coordination panels that knit the program together. What Mueller succeeded in creating was a "manned space family" with a stronger voice in policy making than any other program office. By meeting frequently with Apollo prime contractors (organized as the Apollo Executives Group), by intensive briefings of the House Science and Astronautics Committee (especially its Manned Space Flight Subcommittee) at Manned Space Flight centers, and by creating his own long-range planning group in conjunction with Bellcomm, Mueller developed lines of communication with external groups that could make or break the Manned Space Flight program. He also went far toward making the OMSF Management Council a more effective policy-making body. 
He reduced it to include himself and the three center directors, combined the monthly council meeting with the monthly program schedule review, and carried a resolution by which decisions could be deferred if they required extended discussion.82

Organizational changes at the centers both preceded and paralleled these reforms. Each case was a response to the logic of programs with long lead times, [175] geographic dispersal of prime contractors, and the need to integrate the flight hardware and the ground support equipment in one place. As shown in chapter 3, there was not even a consolidated Launch Operations Center until May 1963. The subsequent history of what became the Kennedy Space Center (KSC) bears witness to the importance attached by OMSF and top management to concentrating launch operations in one center. In December 1964 KSC absorbed the Florida Operations of the Manned Spacecraft Center (MSC), thereby assuming control of "all manned spacecraft upon arrival at the Center and total responsibility for manned space vehicles."83 In October 1965 KSC assumed responsibility for NASA unmanned launches as well, over the bitter protests of Goddard, which had previously managed them. At Marshall the August 1963 reorganization reflected the transition from a center whose roots were deep in the arsenal tradition to one whose principal function would be to manage large contracts for developing and producing complex launch vehicles. This reorganization established an Industrial Operations Division "as the . . . element responsible for multi-program management with Research and Development Operations providing technical support and management of in-house . . . projects."84 Concurrently, center management extended its use of support contracts on a one-contract-per-laboratory basis. At MSC in Houston there were two reorganizations in 1963. The first, in May, divided operations from developmental activities, with separate offices for preflight program management and for mission operations. 
The second, in November, gave Director Robert Gilruth and his deputy, James Elms, joint responsibility for four assistant directors and for the Apollo Spacecraft and Gemini program offices.85 All these changes-the consolidation of responsibility for launch operations, the separation of development from operations at the development centers, and the transition from organization by systems to organization by program-provided a foundation for the Manned Space Flight program that was sturdy enough to last the decade.

Yet Mueller's success in the medium term may serve to explain the comparative failure of OMSF in the long term. Although Mueller established an Advanced Missions Office under Edward Z. Gray as early as October 1963, it would be almost three years before NASA was sufficiently confident in its post-Apollo planning to present Apollo Applications (AAP) as a budget line item. Given the nature of ongoing OMSF programs, the reasons for the delay become understandable. First, there is an inherent tension between managing current programs and planning programs for the long term. If one office has responsibility for both, current programs will generally take precedence over future programs because of the difficulty of planning as if there were no monetary constraints while carrying on in the real world.

Second, there were genuine differences between Mueller and his center directors over mission possibilities after the lunar landing. There were three options: missions to the near planets, such as Mars; expanded lunar exploration; and manned Earth-orbital laboratories.86 Despite its fantastic expense, the idea of a manned expedition to Mars attracted Mueller, while Gilruth at MSC and [176] von Braun at Marshall were convinced "that the major effort of NASA in the post-Apollo period must be directed toward full exploitation of near Earth space capability and, thus, the follow-on lunar activities, if they are required, can be carried out at a more scientifically efficient pace."87 The difficulties with the space station concept are discussed earlier in this chapter. NASA could not commit itself prematurely to an Earth-orbital station (or anything else) for fear of bringing down on itself the wrath of Congress and-what was equally to be dreaded-the antagonism of DOD. Hence the hesitation in defining the sequel to Apollo. What began as a program of lunar exploration had shifted by 1967 to Earth-orbital operations.

Third, OMSF's planning exercises left the role of space science equivocal. At least up to 1966 science was something added on, rather than integral to, manned spaceflight. It sometimes seemed as if OMSF management based its planning for science on the existence of surplus hardware rather than on a felt need for science-based exploration. In any case, the development centers were not equipped to the same degree to do research. Marshall had magnificent engineering capabilities but few facilities for doing research, whether basic or applied. MSC, on the other hand, had a number of laboratories, like the Lunar Receiving Laboratory, that could accommodate research in the life sciences and lunar geology. The real problem facing OMSF was how to work with OSSA in any follow-on to Apollo, a problem that Seamans' "roles and missions" memorandum of 26 July 1966 failed to resolve.88 OMSF was given full responsibility for Apollo and AAP missions, while OSSA was to be responsible for the scientific content of NASA spaceflight programs. However, the approaches of the two offices were so different that cooperation would have to be the result, not the precondition, of any joint action. OSSA officials were privately sceptical of OMSF's ability to do long-term planning, they regretted the selection of Saturn V rather than the Saturn IB/Centaur as the Voyager launch vehicle, and they differed sharply with Mueller over the design of experiments and the ways in which flight hardware would be used. For these reasons, OMSF found it exceedingly difficult to submit a program that could be made to follow logically from programs actually in progress. All too often, OMSF planning seemed prompted by inquiries from Congress and the President, rather than by any conviction that "this is the way we ought to go." +++++++




To show that NASA had the means for successful medium-term planning, as was asserted at the beginning of this chapter, is not the same as showing how NASA did it. The strategies of project approval and review so far discussed were [177] useful in channeling ideas and proposals upward and the decisions of the program directors downward to the centers and project managers. But the project approval documents, the phased project planning directives, or the controls on advanced studies were not planning documents in any real sense. Rather, they set the terms on which planning took place. They were approaches by top management toward controlling manpower and resources throughout NASA. Webb and Seamans could not defend the agency budget until they knew what it was they were defending. Nor could they administer programs if each PAD required concurrences by two dozen officials before it reached their desks. To put it differently, the PAD, the project development plan, and similar documents were tools of management control; they recorded, at several removes, program decisions made elsewhere by other officials.

To understand how program planning really took place, one must examine each program office, its areas of responsibility, and how it related to the others. Each of the three substantive program offices had its own planning staff, as well as control divisions to review the planning of the several divisions comprising each office. But no program office developed a system to handle its R&D projects: not OMSF, because there was nothing in the Apollo program to dictate what the follow-on to the lunar landing would be; not OSSA, because programs like Viking and the High Energy Astronomical Observatory were not routine extensions of capabilities developed for earlier programs; and not OART, because of the difficulty of doing research that was at once detached from specific missions, yet somehow expected by top management to help define the technical parameters of NASA flight programs. What the offices could do was to refine management tools within the approval system represented by the PAD. Top management could do a great deal to eliminate paperwork, to make decisions explicit, and to get program directors to justify their decisions, year by year and project by project. On the evidence, it seems that they could do little to determine the technical content of specific programs; program planning was not just a reflection of choices made on the seventh floor of FOB-6.

The ways in which Voyager or Lunar Orbiter or the test facilities and laboratories at Houston and Marshall took shape owed much more to engineering than to administrative considerations. One need only recall such examples as the conviction of center directors that projects or facilities were technically ripe;89 the ambition of the directors to retain certain capabilities even when their reason for being was gone; the knowledge that many of the uncertainties dogging earlier programs (e.g., in the development of launch vehicles) no longer existed; and the existence, by the mid-1960s, of subsystems like the Surveyor soft lander or the pointing devices of the Orbiting Geophysical Observatories, that could be recombined for entirely different spacecraft systems. In short, the planning techniques described in this chapter represent the interplay of sophisticated technologies with the convictions of NASA management and line officials about the kinds of programs the agency ought to have. The decision to go to the Moon gave NASA one kind of program, to which unmanned planetary probes and supporting [178] research and technology must contribute. The decision dating from 1967 that space must be treated as a resource to be exploited as well as a region to be explored gave NASA another program with other ends in view. Program planning was the point at which technical constraints, political pressures, and administrative solutions converged.


* Each NASA installation reported to a designated program office from 1963 to 1974, when they were all placed, for administrative purposes, under an Associate Administrator for Center Operations.

** Note the difference between budget formulation and programming. The former represents NASA facing outward to the executive branch and the Congress; the latter involves internal debate and review of agency goals.

† Responsibility for preparing Seamans' program reviews and for publishing their results was assigned to the executive secretariat in December 1965. The Office of Programming's Facilities Standards Division was transferred to the Facilities Management Office (Office of Industry Affairs), also established in December 1965.

*** It is somewhat anticlimactic to note that the request for a fluid mechanics laboratory was eliminated in the December 1964 NASA-DOD Facilities Review. Here, the reasoning is more significant than the result.

**** For OSSA the review was an important matter. OSSA annually published a three-volume "prospectus" that could be used in short-term and intermediate planning but, under the terms of an openly expressed agreement between the office and Webb, was not designated a "plan."

***** In 1964 the categories were Earth orbital missions, lunar missions, planetary and interplanetary missions, launch vehicles, aeronautics, and general.

+ Pacing items were those events or components that, if delayed, would cause an equal delay in the entire program or in a planned launch.

++ The programming cycle for facilities construction was slightly different. Its four stages were conceptual study, preliminary design, final design, and project execution. Unlike R&D projects, facilities projects were fully funded; that is, all the funds for one year were budgeted at once. See NHB 7330.1, "Approval of Facility Projects" (July 1966), p. 8.

+++ Mariner 4 transmitted 8 1/3 bits per second (BPS). When it becomes operational in 1983, the shuttle/spacelab will return somewhere between 250 000 and 50 million BPS. The cost per bit of data was reduced by 90 percent between 1965 and 1973. See Review of Tracking and Data Acquisition, pp. 103-104.

++++ Biotechnology and Human Research, Electronics and Control, Chemical Propulsion, Space Power and Electrical Propulsion, Space Vehicles, Aeronautical Vehicles, and Research.

+++++ By an unfortunate coincidence the CARD report was submitted to Congress in March 1971, just before both Houses voted to discontinue funding the supersonic transport, the section on which was deleted to avoid the semblance of influencing the vote.

++++++ Two other constraints further diminished the board's effectiveness. It met infrequently (three to four times a year), and after 1964 it was supported exclusively by NASA funds.

+++++++ Thus the 1966 Apollo Program Development Plan stated that no advanced mission would be included "until such time as the advanced programs are defined and approved" (p. 17-1).