Computers in Spaceflight: The NASA Experience

Chapter Eight
Computers in Mission Control

Manned Mission Control Computers

[243] As with manned
spacecraft on-board computers, computer systems used in manned
mission control are more sophisticated and larger than those used
for unmanned missions. Even though unmanned satellites and space
probes pioneered the use of computers in mission control, the need
for quick response and redundancy, the inherent complexity of
manned spaceflight, and the rigors of the race to the moon forced
rapid improvements and innovations in systems used in manned
mission control so that they surpassed the older systems.

The story of computers in manned mission
control is largely the story of a close and mutually beneficial
partnership between NASA and IBM. There are many instances of IBM
support of the space program, but in no other case have the
results been as directly applicable to its commercial product
line. When Project Vanguard and later NASA approached IBM with the
requirements for computers to do telemetry monitoring, trajectory
calculations, and commanding, IBM found a market for its largest
computers and a vehicle for developing ways of creating software
capable of controlling multiple programs executing [244] at once,
of accepting and handling asynchronous data, and of running
reliably in real time. These things the company was able to do
quite successfully, and the groups it assigned to the job
impressed their NASA counterparts. When asked about IBM's
performance in this field, one NASA manager said without
hesitation, "IBM is the best."[1] The company maintained its lock on mission control
contracts through Gemini, Apollo, and the Shuttle. At each point,
some experienced personnel were transferred to other parts of the
company to share lessons learned. Several individuals contributed
to OS/360, the first multiprogramming system made commercially
available by IBM.[2] One became head of the personal computer
division.[3] NASA also used successful managers from mission
control work to help other programs. Howard W. "Bill" Tindall
started with Mercury and Gemini ground software and later made a
significant contribution to the quality of the Apollo on-board
software. No other software system developed under NASA contract
in the 1960s was as well thought out and executed as manned
mission control.

Beginnings: Vanguard and Mercury

America's most spectacular contribution to
the International Geophysical Year (1957-1958) was the Vanguard
earth satellite, which, in ignorance of Russian preparations, was
expected to be the world's first orbiting spacecraft. In June of
1957, Project Vanguard established a Real-Time Computing Center
(RTCC) on Pennsylvania Avenue in Washington, D.C., consisting of an
IBM 704 computer.[4] The 40,000-instruction computer program developed
for Vanguard did data reduction and orbit
determination.[5] Orbit calculations needed to be done in real time
so that ground stations could be warned of the approach of the
satellite in time to listen for its signals and know where in
space the data came from. Thus, IBM gained early practical
training in the primary skills needed for mission control. In
1959, when NASA was ready to contract for a control center for
Project Mercury, IBM had experience it could point to in its
proposal, as well as an existing computer system about to be freed
from Vanguard work.

NASA awarded Western Electric the overall
contract for the tracking and ground systems to be used in Project
Mercury on July 30, 1959.[6] By late 1959, IBM received the subcontract for
computers and software.[7] Washington remained the site for the computer
system because it could benefit from centralized communications
already in existence.[8] NASA founded Goddard Space Flight Center the next
year, and since it was less than half an hour from downtown
Washington, the same advantages would accrue from locating
[245]
the computers there. Combined NASA and IBM teams used the old
computer system downtown until about November 1960, when the first
of Mercury's new 7090 mainframe computers was ready for use at
Goddard. James Stokes of NASA remembers that the first time he and Bill
Tindall went to the new computer center, they had to cross a muddy
parking lot to reach a "building" with plywood walls, window air
conditioners, and a canvas top that confounded the IBM engineers
trying to keep the system up and running under field
conditions.[9] That structure evolved into Building Three of
the new Space Flight Center and housed the system through the
Mercury era.[10]

IBM's 7090 mainframe computer was the
heart of the Mercury control network. In 1959, the DOD issued a
challenge to the computer industry in the form of specifications
for a machine to handle data generated by the new Ballistic
Missile Early Warning System (BMEWS). The 7090 was IBM's response.
Essentially an improvement of the 700-series machines like the one
being used as a development machine for Mercury, the 7090 adapted
the new concept of I/O channels pioneered in the 709 and was so
large that it needed up to three small 1410 computers just to
control the input and output. The DOD's needs for BMEWS closely
paralleled those of Mercury in terms of data handling and
tracking. Thus, IBM was in a good position with its
hardware.

To provide the reliability needed for
manned flights, the primary Mercury configuration included two 7090s
operating in parallel, each receiving inputs, but with just one
permitted to transmit output. They were called the Mission Operational
Computer and the Dynamic Standby Computer, names that stuck through the
Apollo program. This was NASA's first redundant computer system.
Switching from the prime computer to the Dynamic Standby was by
manual switch, so it was a human decision.[11] During John Glenn's orbital mission, the prime
computer failed for 3 minutes, proving the need for an active
standby.[12]
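
The arrangement can be made concrete with a minimal sketch, written here in Python with hypothetical names (the actual system was 7090 machine code): both computers process every input, but only the machine selected by the manual switch transmits output.

    # Illustrative sketch only; names are hypothetical, not Mercury's.
    # Both computers compute on every input, but only the one selected
    # by the (manual) switch may transmit to the displays.

    class MercuryComputer:
        def __init__(self, name):
            self.name = name
            self.last_result = None

        def process(self, radar_input):
            # Both machines run the same trajectory computation.
            self.last_result = f"trajectory solution from {radar_input}"

    def transmit_to_displays(result):
        print("DISPLAY:", result)

    prime = MercuryComputer("Mission Operational Computer")
    standby = MercuryComputer("Dynamic Standby Computer")
    selected = prime  # position of the manual switch

    def on_radar_data(radar_input):
        for machine in (prime, standby):
            machine.process(radar_input)            # both receive inputs
        transmit_to_displays(selected.last_result)  # only one transmits

    def switch_to_standby():
        # A human flight controller, judging the prime to have failed,
        # throws the switch; the standby's results go out instead.
        global selected
        selected = standby

    on_radar_data("radar pass 1")
    switch_to_standby()
    on_radar_data("radar pass 2")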

Three other computers completed the
Mercury network. One was a 709 dedicated to continuously
predicting the impact points of missiles launched from Cape
Canaveral. It provided data needed by the range safety officer to
decide whether to abort a mission during the powered flight phase
and, if aborted, information about the landing site for the
recovery forces. Another 709 was at the Bermuda tracking station
with the same responsibilities as the pair at Goddard. In case of
a communications failure or a double mainframe failure, it would
become the prime mission computer. Lastly, a Burroughs-GE guidance
computer radio-guided the Atlas missile during ascent to
orbit.[13]

Locating the computers near Washington
while placing the mission control personnel at Cape Canaveral led
to a communications problem that resulted in a unique solution. In
early digital computers, all input data went to memory by way of
the CPU. Large amounts of data that needed to be accepted in a
short time often backed up, waiting [246] for the central
processor to handle the flow. A solution is direct memory access,
which sends data directly from input devices into storage.
Transfers of large blocks of data directly to memory are conducted
through data channels, first used by IBM on its 709 and then on
the 7090. By using channels, processing could continue while I/O
occurred, increasing the overall throughput of the system.
Mercury's 7090s were four-channel systems. Normally, the
peripherals handling input and output would be connected to the
channels physically close to the machine, but the peripherals
(plotters and printers) driven by the Mercury computers would be
about 1,000 miles away in Florida. The solution was to replace
Channel F of the 7090 with an IBM 7281 I Data Communications
Channel, a device originally created for Mercury that has had
great impact on data processing.[14]

Four subchannels divided the data handled
by the 7281 device. One was an input from the Burroughs-GE
guidance computer to provide data used in calculating the
trajectory during powered flight. The second input radar data for
trajectory and orbit determination. Two output subchannels drove
the displays in Cape Canaveral's Mercury Control Center and
locally at Goddard.[15]
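
As a rough illustration of why channels raised throughput, the sketch below uses Python threads to stand in for the hardware (all names and the numbering are hypothetical, not the 7281's actual design): a separate "channel" moves incoming data into memory, sorted by subchannel, while the processor keeps computing.

    # Rough software analogy for a data channel with subchannels:
    # a separate thread moves incoming data into per-subchannel queues
    # (memory) while the "CPU" thread keeps working. Illustrative only.
    import queue
    import threading

    subchannels = {
        1: queue.Queue(),  # input from the guidance computer
        2: queue.Queue(),  # input of radar data
        3: queue.Queue(),  # output to Mercury Control Center displays
        4: queue.Queue(),  # output to local Goddard displays
    }

    def channel(transfers):
        """Runs independently of the CPU, like an I/O channel."""
        for subchannel_id, data in transfers:
            subchannels[subchannel_id].put(data)

    transfers = [(2, "radar frame A"), (1, "guidance vector"),
                 (2, "radar frame B")]
    threading.Thread(target=channel, args=(transfers,), daemon=True).start()

    # Meanwhile the "CPU" processes whatever has already arrived,
    # instead of idling through each individual transfer.
    while True:
        try:
            data = subchannels[2].get(timeout=0.5)
        except queue.Empty:
            break
        print("processing", data)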

Connecting the two ends of the system was
a land line allowing transmission at 1,000 bits per
second.[16] Although this was a phenomenal rate for its time,
now a simple microcomputer routinely transmits at 1,200 bits per
second on nondedicated public telephone lines. The distance and
newness of the equipment occasionally caused problems. Once in a
while during a countdown, data such as the liftoff indicator,
which was a single bit, would get garbled and give erroneous
signals.[17] Most times, such flags could be checked against other
sources of information, such as radar data contradicting the
lift-off message. Also, a time lag of up to 2 seconds on the displays
in the control center was common.[18] During powered flight, such delays could be
significant; thus, the need for a separate impact prediction
computer and another machine in Bermuda.

Software development for the Mercury
program was another area in which IBM advanced the state of the
art.[19] In the beginning of the computer era, operators
ran programs on computers one at a time. Each program was assigned
peripherals, loaded, run and, if errors occurred, stopped
individually. As machines grew larger and the number of users
increased, some way of making the process of loading and executing
programs more efficient was needed. The result was the concept of
"batch" processing, in which a set of several programs could be
loaded as a unit and executed in sequence. A special control
program called a "monitor" watched for errors and aborted programs
trapped in loops or that spun off into corners. To handle the many
jobs needed by manned spacecraft mission control, IBM set up a
method for programs to be interrupted and suspended while other
programs of greater priority ran, and then resumed when the
high-priority jobs [247] ended. Thus, a
number of programs could be loaded into the machine and run,
giving the illusion of simultaneous execution, even though only
one had the resources of the central processor at any one time.
This was the only way the processing of radar data, telemetry, and
spacecraft commands could be accomplished in the split seconds of
time allotted.

IBM called the control program the Mercury
Monitor, but that is a misnomer in that it superseded the
capabilities of the known monitors of the time. It was event
driven, which means that certain flight events (lift-off,
sustainer engine cutoff, retrofire) formed the basis of the
starting times of certain processes.[20] The Mercury Programming System's primary functions
included capsule position determination, retrofire time
calculation, warning ground stations of the acquisition times, and
impact prediction after retrofire. Three separate groups of
processing programs, each stored on tape until needed, did these
functions at different times: launch, orbit, and
re-entry.[21] No matter which group of processors was loaded
into the machine, the Monitor frequently checked a table listing
processes waiting for input or output. Software placed entries in
the table when the Data Communications Channel signaled that data
were ready to be transferred.[22] The Monitor then handled the requests in priority
order. Within a processor group, such as orbit, a set of different
single-function processors would be defined. Thus, the entire
mission control program was highly modular, allowing easier
maintenance and change. In fact, some modules from the Vanguard
programs could be adapted to Mercury use.
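
The mechanism just described, a table of pending work serviced in priority order as channel signals arrive, translates naturally into a short sketch. The Python below is a loose illustration with hypothetical names, priorities, and processors; the real Monitor was 7090 machine code driven by channel interrupts.

    # Loose illustration of the Mercury Monitor's dispatch idea:
    # channel signals add entries to a pending table, and the monitor
    # loop always services the highest-priority entry first.
    # Names, priorities, and processors here are all hypothetical.
    import heapq
    import itertools

    pending = []               # priority queue standing in for the table
    order = itertools.count()  # tie-breaker so entries stay comparable

    def channel_signal(priority, processor, data):
        """Called when the Data Communications Channel has data ready."""
        heapq.heappush(pending, (priority, next(order), processor, data))

    def process_command(data):
        print("uplinking command:", data)

    def process_radar(data):
        print("orbit determination using:", data)

    def process_telemetry(data):
        print("telemetry check of:", data)

    # Data arriving asynchronously, tagged with priorities (lower = urgent).
    channel_signal(3, process_telemetry, "capsule environment frame")
    channel_signal(1, process_command, "retrofire time update")
    channel_signal(2, process_radar, "Bermuda radar points")

    # The monitor loop drains the table strictly in priority order.
    while pending:
        priority, _, processor, data = heapq.heappop(pending)
        processor(data)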

NASA wanted to take over the software as
soon as possible, so 15 or so civil service employees were
assigned to the IBM group while it was still in downtown
Washington. However, the Space Task Group retained direct control
over the software development, a somewhat frustrating situation
for NASA engineers much closer to the actual project and in a
better position to make suggestions.[23] At the time, NASA saw its role as that of a
knowledgeable user and recognized it lacked the expertise to
handle some of the calculating tasks involved. James Stokes, a
NASA engineer, admitted that "we didn't know enough to specify the
requirements" for the software24. IBM was not much better off and acquired its
expertise by contracting for the services of Dr. Paul Herget, then
director of the Cincinnati Observatory, who had privately
published a book on orbit determination in
1948.[25]

The Mercury network provided continuous
height, velocity, flight path angle, retrofire time, and impact
points. During powered flight the main computer center, the Cape
impact prediction computer, and the Bermuda tracking station
computer all would give GO/NO GO recommendations to the flight
director. After engine shutdown, the system needed to give GO/NO
GO data within 10 seconds, so that a safe recovery could be
effected if orbit had not been reached. During [248] the orbital
cruise, the astronaut could be given updated retrofire times each
time he came in contact with a ground station.[26]

As the Mercury program wound down during
1962 and NASA began to accelerate preparations for Gemini and
Apollo, the Agency decided to place both the computers and flight
controllers for manned spaceflight mission control in a combined
center in Houston. Goddard staff proceeded under the assumption
that the new control center would not be ready in time for the
first Gemini flights, which turned out to be correct. Gemini I,
II, and III used Goddard as the prime computer center, with the
new system in Houston acting in an active backup role for flight
three. Beginning with flight four, the second manned mission,
Houston took over as prime, with Goddard acting as the backup
throughout the Gemini program.[27]

For IBM and NASA, the development of the
Mercury control center and the network was highly profitable.
IBM's Mercury Monitor and Data Communications Channel were the
first of their types.[28] Future multitasking and priority interrupt
operating systems and control programs owed their origins to the
Monitor. Large central computers with widely scattered terminals,
such as airline reservation systems, have their basis in the
distant communications between Washington and a launch site in
Florida. For both organizations, the experience gained by staff
engineers and managers directly contributed to the success of
Gemini and Apollo.

Second System: The Gemini-Apollo RTCC

Before the first Mercury orbital flight
was off the ground, NASA engineers working on mission control
tried to influence the design of the new center in Houston. Bill
Tindall, who worked on ground control for NASA from the beginning,
realized that locating the Space Task Group management at Langley
Research Center, the computers and programmers at Goddard, and the
flight controllers at Cape Canaveral created serious communication
and efficiency problems. In January 1962, he began a memo campaign
to consolidate all components at one site, obviously the new
Manned Spacecraft Center.[29] On February 28, just 8 days after John Glenn's
flight, Tindall made his strongest case in a detailed essay in
which he noted that IBM was the only company capable of creating
real-time software. He wanted the Ground Systems Project Office,
then in charge of oversight of the RTCC development, to allow
representatives from the Flight Operations Division to assist in
mission programming.[30] As the eventual users of the system, it made sense
to include them.

[249]

FIGURE 8-1. IBM 7094s in the Gemini Real Time Complex. (IBM photo)

In April, the Western Development
Laboratories of Ford's subsidiary Philco Corporation began a study
of the requirements for the new mission control center. One aspect
of the study was to take numeric data and give it pictorial
content, making the jobs of the flight controllers less hectic but
necessitating much more sophisticated computer
equipment.[31] As Philco worked through the summer, NASA
Administrator James Webb announced on July 20 that there would be
an expanded replacement for Mercury Control. A "request for
proposal" was prepared, including concepts developed by Philco and
documented by them in their final facilities design released on
September 7.

Philco's design was broad in scope,
covering physical facilities, information flow, displays,
reliability studies, computers, and even software standards.
Philco specified that modularity in program development was a
must, as it would ease maintenance and allow the use of "lower
caliber" people to code subprograms, leaving the real stars to do
the executive software.[32] This organizational rule became standard for large
program projects. Another specification required that the
probability of successful real-time computer support for a
336-hour mission be 0.9995. Also, due to rendezvous plans for
Gemini and the dual-spacecraft Apollo lunar missions, the center
had to control two spacecraft at one time. To meet the reliability
and processing goals, Philco examined existing computer systems
from [250] IBM, UNIVAC, and Control Data Corporation, as well
as its own Philco 211 and 212 computers, to determine what type
and how many would be needed. The calculations resulted in three
possible configurations: five IBM 7094s (the immediate successor
to the 7090, essentially a faster machine with a better operating
system, IBSYS); nine UNIVAC 1107s, IBM 7090s, or Philco 211s; or
four Philco 212s or CDC 3600s.[33] No matter which group would be chosen, it was
obvious that the complexity of the Gemini-Apollo Center would be
much higher than its two-computer predecessor. To help keep the
system as inexpensive and simple as possible, NASA specified to
potential bidders that off-the-shelf hardware was
essential.

IBM moved quickly to respond to NASA's
call for proposals, delivering in September a 2-inch thick,
three-ring binder full of hardware and software bids, including a
detailed list of personnel they would commit to the project,
complete with employment histories. Although the company knew it
was the leading candidate (Tindall's endorsement could hardly have
escaped notice), it carefully matched the specifications, such as
clearly stating that modularization and unit testing would be the
norm in software development. One area in which IBM differed from
Philco's calculations was the number of machines needed. Perhaps
to keep the total bid low, IBM proposed a group of three 7094
computers. By splitting the software into a Mission Computer
Program and a Simulation Computer Program, one machine could run
the Mission Program as prime, another run it as the dynamic
backup, and the third run the simulation software to test the
other two, thus fulfilling requirements for redundancy and
preflight training and testing. This forced IBM to explain its way
around the 0.9995 reliability requirement. Three machines yielded a
reliability of 0.9712; slightly more than four were needed to achieve
the specification (hence Philco's suggested number of five). IBM
made a case that the reliability figures were misleading and that
during so-called "mission-critical" phases the reliability of
three machines would exceed 0.9995.[34]
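
The chapter does not spell out the reliability model behind these figures, but the flavor of the arithmetic can be shown with a simple k-of-n availability model in Python. The per-machine probability below is invented for illustration; it is not the actual 7094 figure, and the real analyses were surely more elaborate.

    # Hedged illustration of redundancy arithmetic, not NASA's model.
    # Assume each machine is independently "up" for the whole mission
    # with probability p; the complex works if at least k of n are up.
    from math import comb

    def k_of_n_reliability(p, n, k):
        """Probability that at least k of n independent machines are up."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    p = 0.90  # hypothetical single-machine reliability over 336 hours
    for n in (3, 4, 5):
        print(f"n={n}:",
              f"at least 1 up: {k_of_n_reliability(p, n, 1):.6f},",
              f"at least 2 up: {k_of_n_reliability(p, n, 2):.6f}")
    # Each added machine pushes the figures rapidly toward 1, which is
    # the kind of margin Philco's five-machine estimate was buying.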

Eighteen companies bid on the RTCC,
including such powerful competitors as RCA, Lockheed, North
American Aviation, Computer Sciences Corporation, Hughes, TRW, and
ITT. NASA assigned Christopher Kraft, the eventual chief user, to
chair the source board that studied the responses to the request
for proposal. Tindall served also, with James Stroup, John P.
Mayer, and Arthur Garrison, all of the Manned Spacecraft Center.
They awarded the original contract NAS 9-996, covering the Gemini
program, to IBM on October 15. Worth $36 million, it was to run
until the end of August 1965. Extended to December 1966, the total
cost came to $46 million.[35]

With 6 weeks of preparation already done
before the contract award, IBM's core of engineers was ready for
business in Houston by October 28. J. E. Hamlin started as project
manager and interim [251] head of systems
engineering. He had 12 years of IBM experience, first as a
hardware engineer, later as a group leader for SAGE software, and
then manager for the Mercury system implementation. He had barely
started work at JPL's Deep Space Instrumentation Facility when the
RTCC contract came up. In his first report in January 1963, he was
able to announce the arrival of the first 7094 to be used for
software development. The computer and, later, two others were
installed in an interim facility on the Gulf Freeway. Each started
with 32K words of memory and 98K words of auxiliary core storage,
with a 1401 as a front end for input and
output.[36] On the negative side, Hamlin's early projection of
a peak staff of 161 had leaped to 228 by the time of the first
report. Eventually, 608 IBM people worked simultaneously on the
project, with 400 of them on software development. The magnitude
of the task was greatly underestimated both by IBM, which made the
bid, and NASA, which accepted it.

Hardware needs grew along with the staff.
The original three machines moved from the interim center to
Building 30 at the Manned Spacecraft Center. Two more were added,
fulfilling Philco's prophecy. The size and rating of the
machines were also increased, to model 7094-IIs with 65,000 words of
main core storage and 524,000 words of additional core as a fast
auxiliary memory.[37] In the new configuration, one machine was the
Mission Operational Computer, the second, the Dynamic Standby
Computer, and the third, the Simulation Operations Computer as
before, with the two new ones used as the Ground System Simulation
Computer and a standby for future software development. The Ground
System simulator acted like the tracking network and other
ground-based parts of mission control to test software.

IBM's original proposal projected
completion of the new system within 18 months. As time passed and
problems occurred, the plan changed to begin with support of the
Gemini VI mission. But slips in Gemini and steady progress on the
software enabled the use of the Center for passive parallel
computations during the Gemini II unmanned flight on December 9,
1964, just under 26 months after the contract award. On Gemini
III, the Houston control center did its final test as an active
backup. The results were so promising that from Gemini IV on,
mission control shifted from the Cape to Houston.

Gemini Ground Software Development

NASA's requirements for the Gemini mission
control software resulted in one of the largest computer programs
in history. In addition to all the needs of the Mercury system,
Gemini's proposed rendezvous and orbit change operations caused a
near-exponential [252] increase in the
complexity of the trajectory and orbit determination software.
Placing a computer on board the spacecraft made it necessary to
parallel its computations as a backup and also necessary to devise
a way to use the ground computer system to update the Gemini
flight computer. Also, by the time the Gemini program matured, all
data on the tracking network were in digital form, and thus
computable, so the amount of data that passed through the ground
system increased further.[38]

IBM reacted to the increased complexity in
several ways. Besides adding more manpower, the company enforced a
strict set of software development standards. These standards were
so successful that IBM adopted them companywide at a time when the
key commercial software systems that would carry the mainframe
line of computers into the 1970s were under
construction.[39] IBM approached the more difficult areas by
acquiring the services of specialist consultants and by sponsoring a
group of 10 scientists pursuing solutions to problems in orbital
mechanics. The group included Paul Herget and some men from IBM's
Cambridge, Massachusetts, "think tank."[40]

Key to the flight system was the Mission
Computer Program. It centered on a control program called the
Executive, which took over the functions of the Mercury Monitor.
Under the Executive, three main subprograms operated in sequence.
NETCHECK performed automatic tests of equipment and data flow
throughout the entire Manned Spaceflight Network, certifying it
ready for the launch of the spacecraft. It succeeded the CADFISS
(Computation and Data Flow Integrated Subsystem) program used in
Mercury.[41] ANALYZER did postflight data reduction. However,
the Mission Operations Program System remained the heart of the
software, responsible for all mission operations, such as
trajectory calculations, telemetry, spacecraft environment, backup
of the on-board computer, and rendezvous calculations. It divided
into a number of modules: Agena launch, Gemini launch, orbit,
trajectory determination, mission planning, telemetry, digital
commands, and re-entry, with several subprograms within each
section.[42] Each subprogram was highly sophisticated and very
powerful. The re-entry program, for example, could calculate
retrofire times 22 orbits in advance.[43]
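
The modular organization just described, a control program selecting among phase-specific modules, can be suggested with a short, purely illustrative Python sketch; the module names come from the text, but the dispatch structure and the trivial function bodies are hypothetical.

    # Purely illustrative sketch of phase-based modularity; the module
    # names are from the text, the dispatch structure is hypothetical.

    def agena_launch(): print("Agena launch processing")
    def gemini_launch(): print("Gemini launch processing")
    def orbit(): print("orbit processing")
    def trajectory_determination(): print("trajectory determination")
    def mission_planning(): print("mission planning")
    def telemetry(): print("telemetry processing")
    def digital_commands(): print("digital command processing")
    def reentry(): print("re-entry processing")

    # An Executive-style control program need only know the table;
    # individual modules can be maintained and replaced independently.
    mission_modules = {
        "Agena launch": agena_launch,
        "Gemini launch": gemini_launch,
        "orbit": orbit,
        "trajectory determination": trajectory_determination,
        "mission planning": mission_planning,
        "telemetry": telemetry,
        "digital commands": digital_commands,
        "re-entry": reentry,
    }

    for phase in ("Gemini launch", "orbit", "re-entry"):
        mission_modules[phase]()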

IBM found it impossible to complete this
complicated system with the tools used in the Mercury program. All
of the Mercury control software was in assembly language. Aside
from the assembler, software tools were minimal, reflecting the
state of the art circa 1960. Partly inspired by the difficulties
of developing large systems such as Mercury and SAGE and partly
to help commercial customers creating new software to match the
size and capabilities of the new line of mainframe computers, IBM
provided a much better set of tools with its 7094 series machines
than with earlier models. A fairly robust operating system, IBSYS,
could be used with the 7094, and a [253] modification of
it gave the Gemini software developers a decent editor and
compilation tools for high-level languages. Called the Compiler
Operating System, it included a combination FORTRAN/Mercury
compiler called GAC (for Gemini-Apollo Compiler), making it
possible to do some programming in FORTRAN. The Mercury compiler
contained all the functions of SOS, the Share Operating System,
which was IBM's standard system of the late 1950s and the
predecessor to IBSYS.[44]

Besides using better tools, the Gemini
programmers tried to keep the architecture simple and changeable.
Using process control tables was an important design decision, as
they could be changed to fit different mission requirements with
some ease and without disturbing software in place. Their use
continued throughout the Apollo and Shuttle
programs.[45] The Executive was a further refinement of the
real-time control program first approached in Mercury. A
relatively spare 13,000 words in size, the Executive provided
priority-based multiprogramming. It could transfer needed data to
supervisory routines which, in turn, started
processes.[46] At the lowest level, contention between cyclic
processes and demand processes characterized the
RTCC.[47] Its obvious success helped form NASA's ideas of
what a good real-time operating system should be, which later
influenced the nature of the operating system on board the
Shuttle. NASA personnel were close to the Gemini-Apollo ground
system development, sometimes defining test cases and duplicating
programs to check whether requirements had been
met.[48]
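
The contention between cyclic and demand processes can be sketched in a few lines of Python. This is a deliberate simplification with invented names, periods, and priorities, not the RTCC's actual scheduling: cyclic work recurs on a fixed period, demand work arrives unpredictably, and the Executive arbitrates among ready processes by priority.

    # Simplified illustration of cyclic vs. demand process contention;
    # names, periods, and priorities are invented, not the RTCC's.
    import heapq

    ready = []  # (priority, time, name); lower priority value wins

    def schedule(time, priority, name):
        heapq.heappush(ready, (priority, time, name))

    # Cyclic processes recur on a fixed period (e.g., display refresh).
    for t in range(0, 10, 2):
        schedule(t, priority=2, name=f"display refresh at t={t}")

    # Demand processes arrive unpredictably (e.g., a command request).
    schedule(3, priority=1, name="uplink command request at t=3")
    schedule(7, priority=1, name="telemetry alarm at t=7")

    # The Executive always runs the highest-priority ready process,
    # so demand work is served ahead of waiting cyclic work.
    while ready:
        priority, time, name = heapq.heappop(ready)
        print(f"run (priority {priority}): {name}")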

Even with better tools and a more powerful
computer, the processing needs of the mission control software
quickly exceeded the capacity of the 7094. When preparing its
proposal, IBM had recognized that the usual 32K memory of the
machine would be insufficient. Therefore, it suggested the use of
look-ahead buffering, which meant the next set of programs needed
during a mission would be loaded over the ones going out of
use.[49] The commercial practice of using tape storage for
waiting programs became impossible due to the size and speed
demands of the Gemini software. Thus, IBM added large core storage
(LCS) banks to the original machines. These banks, even though not
directly addressable, provided a higher speed secondary memory.
Tapes would be loaded to the large core and then transferred to
primary storage as needed.[50] An IBM engineer credited work in the use of LCS
and paging memory as being influential in the development of IBM's
version of virtual memory, the main software technological advance
of its fourth generation 370 series machines of the early
1970s.[51] As the Gemini program continued, NASA grew more
concerned about the ability of the 7094s to adequately support
Apollo, considering the expected greater complexity of the
navigation and systems problems. Kraft expressed concern that the
"real time" in the RTCC needed enhancement52. As the large core filled, loading [254] from tape for
certain programs became common practice. Once, when President
Lyndon B. Johnson was visiting the control center, the NASA
official leading the tour wanted to show the president a fancy
display. Not fully conversant with the software, he chose one that
ran off tape, so the entire party stood uncomfortably, minutes
seeming like hours, while the machine dutifully found the program
and put up the display.[53] NASA wanted a change.
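
A minimal sketch of the look-ahead staging idea, in Python with invented names and timings: programs for the next mission phase are copied from slow storage into the faster staging area before they are needed, so the phase switch itself is quick.

    # Minimal sketch of look-ahead buffering through a staged memory
    # hierarchy (tape -> large core storage -> primary memory).
    # All names and timings are invented for illustration.
    import time

    tape = {"launch": "launch programs", "orbit": "orbit programs",
            "re-entry": "re-entry programs"}
    large_core = {}   # fast secondary memory (the LCS banks)
    primary = {}      # directly addressable main store

    def load_from_tape(phase):
        time.sleep(0.2)              # tape is slow; do this ahead of need
        large_core[phase] = tape[phase]

    def activate(phase):
        # The LCS-to-primary transfer is fast, so the switch is quick
        # as long as the look-ahead loaded the programs in time.
        primary.clear()
        primary[phase] = large_core.pop(phase)
        print("now running:", primary[phase])

    load_from_tape("launch")
    activate("launch")
    load_from_tape("orbit")   # look-ahead: staged while launch work runs
    activate("orbit")         # the switch itself is immediate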

It was about this time that IBM announced
its System 360 series, a compatible line of several computers of
different sizes using a new multiprocessing operating system that
owed some of its characteristics to the company's NASA
experiences. NASA thought the upper level machines of the new
product, specifically the 360/75, would have sufficient power to
replace the 7094s for Apollo, although the LCS would have to be
continued due to the sheer size of the software. IBM's
announcement, as is usual with the company, preceded the shipping
dates of the machines by some months. It did not take long for
NASA to realize this and become impatient. Control Data
Corporation (CDC) released its 6600 line of computers in 1965 and
was actually shipping to customers as IBM failed to deliver.
Robert Seamans of NASA Headquarters suggested that the Manned
Spacecraft Center buy 6600s and let IBM retain the software
contract.[54] CDC's machine was actually faster and more
powerful than the 360. Later, CDC sued IBM, claiming its premature
360 announcement sought to hold the market and that claims made
for the 360 were not realized when the product actually came out.
IBM settled out of court with major concessions totaling nearly
$100 million, having rushed delivery of the first 360 to Houston in time
to stave off the movement to other vendors. NASA announced the
conversion to the 360 in a news release dated August 3,
1966.

Transition to Apollo

Although the four remaining 7094 computers
continued to support flight operations through the first three
Apollo (unmanned) missions, IBM used the first replacement 360 to
begin software development for the Apollo lunar flights. As in
Gemini, two spacecraft, the command module (CM) and the lunar
excursion module (LEM), needed support, with five computers each
contributing to the overall system. Again, LCS provided added
memory. Unfortunately, not all the software could be moved
directly from one machine to the other due to the change in
operating systems. The new operating system for the series,
OS/360, had the multitasking capability developed during Mercury
days but operated primarily in batch mode. Many programs could be
entered, either by cards or through remote entry from terminals,
and run together, but not in real time. The priority-interrupt
[255]
provisions on the standard operating system were not sophisticated
enough to handle the sorts of processing Apollo needed. Beginning
in 1965, IBM modified the operating system into RTOS/360, the
real-time version.[55] Extensive use of modularization helped in the
transition. Separately compiled FORTRAN subprograms
could be moved to the 360 with relative ease, but the
assembler-based code had to be modified. This work continued for
nearly as long as it took to get the original system operating,
even though the architecture remained essentially intact.

One problem would not go away: memory.
Each 360 had 1 million bytes of main memory, about four times the
size of 7094 main store. A further 4 million bytes of LCS was
added to each machine.[56] Even with some of the NETCHECK functions
transferred to the new twin 360s in the Goddard Real-Time System
(GRTS) and with seldom-used programs such as the radiation dosage
calculator and ground telescope pointing program permanently
located off-line, memory use rose to match the additional space.
Simply meeting the requirements for ascent filled the main
store.[57] At this time, NASA's Lynwood Dunseith, who had
worked on the ground software since Mercury, realized that the
worry over memory was causing programmers to develop
idiosyncratic, "tricky" code in an effort to save a few
words.[58] Dunseith knew the danger of that attitude, since
it made the programs even more complex than their inherent
complexity warranted. During the period he managed the software
development, he tried to reduce the dependence on such expedients.
It helped him that the 360s made it possible to develop
significant parts of the software in FORTRAN.[59] Although FORTRAN is not as easily readable as some
other procedural languages, it far exceeds 360 assembler in
understandability.
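
As a hypothetical illustration of the kind of memory-saving trick Dunseith worried about, compare packing several status flags into one word against spelling them out. The packed form saves storage but hides meaning; the example is Python, though the actual code would have been 360 assembler or FORTRAN.

    # Hypothetical contrast between "tricky" space-saving code and
    # readable code; not taken from the actual Apollo ground software.

    # Tricky: three status flags packed into the bits of a single word.
    status = 0b101  # what do the bits mean? The reader must just know.
    liftoff = bool(status & 0b100)
    engine_cutoff = bool(status & 0b010)
    retrofire_armed = bool(status & 0b001)

    # Readable: one clearly named value per flag, at a cost in space.
    liftoff = True
    engine_cutoff = False
    retrofire_armed = True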

As the Apollo system moved into the
operations phase the use of the Dynamic Standby Computer waned.
During the first manned flight, Apollo 7, the Mission Control
Center used a single computer for just under 181 hours of a
284-hour support period, which included countdown and postflight
operations.[60] During Apollo 10, a dual spacecraft flight with
LEM operations near the moon, the plan was to use the standby for
5 hours before a maneuver. Therefore, on only six occasions in an
8-day flight would there be two-computer support. To assist an
off-line standby in coming to the rescue of a failed primary,
operators made checkpoint tapes of current data every 1.5 hours. A
failure of the Mission Operations Computer occurred at 12:58 Zulu
on May 20, 1969. By 13:01, the standby had been brought up, using
a checkpoint tape made at 12:00.[61] No significant problems resulted, which is
actually a good summary of mission control operations throughout
the Apollo era, Skylab, and the Apollo-Soyuz Test Project.
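
The checkpoint-and-restore procedure is easy to sketch. The Python below is an illustration with invented state and helpers, not the Apollo implementation; only the 1.5-hour interval comes from the text.

    # Illustration of periodic checkpointing with standby restore;
    # the state, class, and helpers are invented for the example.
    import copy

    CHECKPOINT_INTERVAL = 1.5  # hours, as in the Apollo procedure

    class MissionComputer:
        def __init__(self):
            self.state = {"elapsed_hours": 0.0, "orbit_solution": None}

        def run_for(self, hours):
            self.state["elapsed_hours"] += hours
            elapsed = self.state["elapsed_hours"]
            self.state["orbit_solution"] = f"solution at {elapsed:.2f} h"

    primary = MissionComputer()
    checkpoint_tape = None

    for _ in range(4):                    # normal operations
        primary.run_for(CHECKPOINT_INTERVAL)
        checkpoint_tape = copy.deepcopy(primary.state)  # write checkpoint

    # The primary fails; the standby is loaded from the last checkpoint
    # and recomputes forward, losing at most one interval's work.
    standby = MissionComputer()
    standby.state = copy.deepcopy(checkpoint_tape)
    standby.run_for(0.05)
    print("standby resumed at", standby.state["orbit_solution"])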

[256]

Figure 8-2. A display and control panel in Mission Control for the Shuttle program. (NASA photo S-80-6315)

Reducing Mission Control: Conversion to the Shuttle

[257] During planning
for the Space Transportation System, with frequent launches and
multiple missions aloft expected, NASA studied ways to make the
spacecraft more autonomous and thus reduce the functions of
mission control. IBM again won the ground support contract, this
time over primary competitor Computer Sciences
Corporation.[62] Beginning in June 1974 and continuing into the
1980s, IBM worked on a new software system and mission-specific
changes.[63] Five System 370/168 mainframe computers make up
the Shuttle Data Processing Complex, the nominal successor to
the RTCC. Each has 8 million bytes of primary storage and, being
a virtual memory machine, does not need auxiliary storage of the LCS
type. Disk is used instead. Three computers are involved during
operations: one is the Mission machine, one the Dynamic
Standby Computer, and a third the Payload Operations Control
Computer. Now, in the late 1980s, these computers are being
replaced by IBM 3083 series machines, marking Mission Control's
fourth generation.

By this time, quite experienced and fairly
knowledgeable about what would be needed, NASA and IBM approached
the ideal of thorough design before coding
began.[64] Reflecting the structure of the on-board software,
the requirements documents proceeded through different levels of
complexity. For the first time in ground software development, a
quality assurance group from outside the development organization
watched over software production.[65]

The efficiency of the software developers
increased with the conversion from batch processing to interactive
processing. During Mercury, Gemini, and Apollo, programmers tested
new software in batch. With the main IBM Federal Systems Division
office nearly a mile from the actual computers housed in Building
30, it was necessary for a courier to pick up card decks, deliver
them to the Computing Center, and later return the results. In
this manner, an average of only 1.2 runs per programmer per
working day was possible. During 1974-1976, NASA commissioned a
study of batch versus interactive programming, in which
programmers using terminals could prepare jobs and run them from
the IBM building. Using IBM's Time-Sharing Option (TSO) system,
interactive processing clearly won out over batch in terms of
effectiveness. NASA accordingly ordered all Shuttle ground
software to be done under the time-sharing
system.[66]

Regardless of the intentions of the
Shuttle managers to shrink the ground operations software, the
ground support functions provided by the Data Processing Complex
have not been reduced. Some parts of the original tasks are
handled more completely on-board, but the continued addition of
new equipment and concepts has increased the size of the software. It
supports over 40 digital displays and 5,500 event [258] lights. The
total size of the system is 600,000 lines, roughly 26% larger than
Gemini and rivaling Apollo.[67] Shuttle missions are approaching a complexity
that a single computer can no longer support.[68] In addition, heavy between-flight change traffic
delayed the transition to the operations era. As late as 1983, 8%
of the total code changed each mission, keeping 185 programmers
busy. New and more powerful computers can always be added, but the
process of changing software must be automated or the expense of
labor-intensive maintenance will continue to the end of the
Shuttle program.

