Computers in Spaceflight: The NASA Experience

- Chapter Nine -
- Making New Reality: Computers in Simulations and Image Processing -
 
 
Crew-training simulators
 
 
[270] NASA's requirements for flight simulators far exceeded the state of the art when the first astronaut crews reported for duty in 1959. Feeling obligated to prepare the astronauts for every possible contingency, NASA required hundreds of training hours in high-fidelity simulators. Each crewman in the Mercury, Gemini, and Apollo programs spent one third or more of his total training time in simulators; lunar landing crews used simulators more than half the time1.
 
Simulators must provide the astronaut trainee with as close an approximation of spaceflight as is possible on earth, without losing sight of the need for extensive practice of procedures for responding to failures as well as nominal events. Requirements for realism increase the complexity of the simulation. For example, when an astronaut fires thrusters, the simulator must activate readouts and lights showing the thrusters firing, the fuel supply decreasing, and the velocity changing, and must also show movement in the scene outside the cabin window. In a moving-base [271] simulator, such as one in which a spacecraft cabin is suspended on hydraulically moved pylons to enable it to tilt, physical motion must take place as well. Causing all these things to happen, and coordinating them so that they happen simultaneously, is the difficult task of the simulator designer2.
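The coordination problem can be pictured as a single update cycle that recomputes every coupled output from one shared state. The sketch below is purely illustrative (the class and rate constants are invented, not drawn from any NASA simulator):

```python
# Hypothetical sketch of one simulator update cycle: a thruster firing
# must be reflected simultaneously in several coupled subsystems.

class CabinState:
    def __init__(self):
        self.fuel_kg = 100.0
        self.velocity_mps = 0.0
        self.attitude_deg = 0.0

def tick(state, thruster_on, dt):
    """Advance the simulation one frame; every coupled output is
    recomputed from the same state so the displays never disagree."""
    if thruster_on and state.fuel_kg > 0.0:
        state.fuel_kg -= 0.05 * dt           # propellant consumed
        state.velocity_mps += 0.3 * dt       # delta-v from the burn
        state.attitude_deg += 0.1 * dt       # slow induced rotation
    return {
        "thruster_light": thruster_on,             # event light on the panel
        "fuel_gauge": state.fuel_kg,               # readout update
        "velocity_tape": state.velocity_mps,       # readout update
        "window_scene_angle": state.attitude_deg,  # drives the scene generator
    }

state = CabinState()
outputs = tick(state, thruster_on=True, dt=0.02)
print(outputs)
```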
 
A manned spaceflight program always had more than one type of simulator. Usually there was a pair of full-function simulators, one fixed-base and one moving-base, used for procedures training and extended simulations. Often, part-task trainers were needed for the more difficult mission phases as well. NASA built a simulator for the last 200 feet of the lunar module's landing. One part-task trainer exists to train astronauts in using the Shuttle on-board computer system and its software.
 
Mission simulators today are so dependent on computers that proper design requires thinking of a simulator as a large data processing complex that incidentally drives displays and performs other functions in a crew trainer3. For the Shuttle Mission Simulator, several dozen mainframe, mini-, and on-board computers interconnect to create window scenes, change displays, move indicators, and light event lights. This level of computer involvement resulted from a steady evolution since the beginnings of manned spaceflight.
 
 
Project Mercury Mission Simulators
 
 
When the time came to design the Mercury flight simulators, experience with aircraft simulators and with those built for the X-15 rocket plane was all that was available. There is one critical difference between the training needs of aircraft test pilots and those of astronauts. Although flying experimental aircraft is always dangerous, such aircraft are rarely taken to their projected limits on the first flight4. Even the X-15 had a long series of buildup missions, first with a smaller engine, later incrementally increasing speed, then altitude, until a series of full-out flights sent the plane to the edge of space. In rocket flight the spacecraft is pushed to the outer limits of stress and endurance from the instant of ignition. Its crews must be fully prepared for all contingencies before the first flight and remain prepared for every flight afterwards.
 
The primary simulator for the first manned spacecraft was the Mercury Procedures Simulator (MPS), of which two existed: one at Langley Space Flight Center and the other at the Mission Control Center at Cape Canaveral. Analog computers calculated the equations of motion for these simulators, providing signals for the cockpit displays5. In addition to this primary trainer, a centrifuge at the U.S. Naval Air Development Center in Johnsville, Pennsylvania, served as [272] a moving-base simulator. A Mercury capsule mock-up mounted at the end of the centrifuge arm provided ascent and entry training6. Additionally, Langley built a free-attitude trainer that simulated the attitude control capabilities of the spacecraft, as well as two part-task trainers for retrofire and entry practice.
 
Analog computers commonly supported simulation in the 1950s and early 1960s. Having the advantage of great speed, the electronic analog computer fit well into the then-analog world of the aircraft cockpit and its displays. By 1961, though, it became obvious that the simulation of a complete orbital mission would be impossible using only analog techniques7. The types and number of inputs and calculations so stretched the capabilities of such machines that when NASA defined requirements for Gemini simulators, digital computers dominated the design.
 
 
Computers in the Gemini Mission Simulators
 
 
Training crews for the more complicated Gemini spacecraft and its proportionately more complicated missions required the use of digital computers in the simulators. Aside from the tasks done during Mercury, such as ascent, attitude control, and entry, the Gemini project added rendezvous and controlled entries utilizing the spacecraft's greater maneuvering capabilities. At the Manned Spacecraft Center, NASA installed simulators to provide training for these maneuvers, including a moving-base simulator for formation flying and docking and a second moving-base simulator for launch, aborts, and entry. Besides these, two copies of the primary Gemini Mission Simulator, which had the same purpose as the Mercury Procedures Simulator, and the Johnsville centrifuge completed the list of Gemini trainers. One of the Mission Simulators was at Cape Canaveral; the other at Houston.
 
Gemini Mission Simulators used between 1963 and 1966 operated on a mix of analog and digital data and thus represent a transition between the nearly all-analog Mercury equipment and the nearly all-digital Apollo and later equipment. Three DDP-224 digital computers dominated the data processing tasks in the Mission Simulator. Built by Computer Control Corporation, which was later absorbed by Honeywell Corporation, the three computers provided the simulator with display signals, a functional simulation of the activities of the on-board computer, and signals to control the scene generators8.
 
Functional simulation of various components was made easier by the use of digital computers. In a functional simulation, the component is not physically present in the simulator; its activities and outputs are created by software within the computer. Thus, in the Gemini Simulator, the on-board computer was not installed, but the [273] algorithms used in its programs were resident in the DDP computers and, when executed, activated computer displays such as the incremental velocity indicator just as on the real spacecraft.
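As a minimal sketch of the idea, assuming a hypothetical instrument model (the actual DDP-224 routines are not reproduced here), a functional simulation replaces the hardware with software that produces the same outputs for the displays:

```python
# Hypothetical functional simulation of an incremental velocity
# indicator: no instrument hardware is installed; software computes
# what the real component would display and drives the panel readout.

class IncrementalVelocityIndicator:
    def __init__(self):
        self.displayed_fps = 0.0   # feet per second shown to the crew

    def on_accelerometer_pulse(self, delta_v_fps):
        # The real instrument accumulates velocity increments; the
        # model does the same arithmetic and updates the readout.
        self.displayed_fps += delta_v_fps
        return self.displayed_fps

ivi = IncrementalVelocityIndicator()
for pulse in (0.1, 0.1, 0.05):          # simulated thrust increments
    reading = ivi.on_accelerometer_pulse(pulse)
print(f"IVI readout: {reading:.2f} ft/s")
```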
 
Scene depiction in the Gemini era still depended on the use of television cameras and fake "spacescapes," as in aircraft simulators. Models or large photographs of the earth from space provided scenes that were picked up by a television camera on a moving mount. Signals from the computers moved the camera, thus changing the scene visible from the spacecraft "windows," actually CRTs. A planetarium-type projection was also used on one of the moving-base simulators at Johnson Space Center to project stars, the horizon, and target vehicles.
 
Gemini simulations often included the Mission Control Center and worldwide tracking network. No commercially available computer could keep up with the data flowing to and from the network during these integrated simulations, so NASA asked the General Precision group of the Link Division of Singer Corporation to construct a special-purpose computer as an interface9. Singer held the contract for the simulators under the direction of prime contractor McDonnell-Douglas, which supplied cabin and instrumentation mock-ups. Fully functional simulators came on line at the Cape and Houston during 1964.
 
Moving-base simulation came into its own during the Gemini program. The docking simulator was housed in a large rectangular enclosure that permitted great freedom of motion in training crews for station keeping and docking. The dynamic crew procedures simulator, which replicated launch, abort, rendezvous, tethered operations (with the Agena upper stage), and entry maneuvers and procedures, suggested the feeling of acceleration at lift-off by tilting the spacecraft, at a rate equal to the g buildup during launch, from about a 45-degree angle to nearly horizontal to the floor. This resulted in a push on the astronaut's back that increased from 0.707g to 1g. Engine cutoff and weightless flight could be suggested by returning the spacecraft to its original position, giving a feeling of maximum comfort to the crew10. Negative g could be simulated by tilting the nose down, causing the astronauts to feel their weight on their shoulder harnesses.
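The 0.707g figure follows from simple trigonometry: with the astronaut on his back, the component of gravity pressing him into the couch is the cosine of the spacecraft's tilt above horizontal. A short check of the numbers:

```python
import math

def back_force_g(tilt_above_horizontal_deg):
    """Component of gravity pressing the astronaut into the couch,
    in g, for a given tilt of the spacecraft axis above horizontal."""
    return math.cos(math.radians(tilt_above_horizontal_deg))

for angle in (45, 30, 15, 0):
    print(f"{angle:2d} deg above horizontal -> {back_force_g(angle):.3f} g")
# 45 deg gives 0.707 g; fully horizontal gives 1.000 g, matching the
# buildup described in the text.
```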
 
Designing and using the Gemini simulators gave NASA considerable experience in producing high-fidelity simulations. Actual flight experience from Mercury went into improving the Gemini simulators, and Gemini rendezvous and maneuvering experience helped make the Apollo simulations better. NASA adopted some of the Gemini equipment for Apollo: the use of Honeywell's DDP-224 computers continued, while moving-base simulators were adapted to Apollo use by changing the spacecraft mock-up and modifying existing techniques11. Still, Apollo program requirements demanded a further increase in the amount of computer power.
 
 

[274]

Figure 9-1A. The Apollo Command Module Mission Simulator. (NASA photo 108-KSC-67PC-178)

Figure 9-1B. An artist's conception of the Apollo Lunar Mission Simulator.

 
 
 
[275] Computers in the Apollo Mission Simulators
 
 
No fewer than 15 simulators trained crews during the Apollo Program. Three were the primary Command Module Simulators, one at Houston and a pair at the Cape. Two were the primary Lunar Module Simulators, one at each site. At Houston, a Command Module Procedures Simulator trained crews just to rendezvous with the command module, and a Lunar Module Procedures Simulator served for lunar module rendezvous and landing training. Gemini's Dynamic Crew Procedures Simulator served the same role for Apollo. Additional moving-base simulators at the Manned Spacecraft Center handled lunar module formation flying and docking, and a centrifuge there eliminated trips to Johnsville. Langley Space Flight Center pioneered research into the final 200 feet of lunar landing by suspending five-sixths of a simulator's weight to give astronauts practice in controlling the lander in the gravity of the moon12. Another lunar landing simulator used a jet engine to support five-sixths of its weight and permit free-flight landing training; that simulator required a simulator of its own to keep the crews from crashing it. Finally, a pair of partial-gravity simulators gave the astronauts the chance to walk in space suits while having five-sixths of their weight supported. Later in the program, Marshall Space Flight Center built a simulator for the lunar rover vehicle.
 
Among this plethora of simulators, the Command Module Simulators and Lunar Module Simulators nonetheless accounted for 80% of the 29,967 hours of Apollo training time13. These simulators and their associated computer systems were crucial to the success of the program. The Apollo 13 emergency in April 1970, when there was an explosion in the service module on the way to the moon, demonstrated the high fidelity and flexibility of the simulators: all lunar module engine burns, separations, and maneuvers could be tested and ad hoc procedures developed as the crippled mission progressed.
 
In contrast to the procedures simulators, all of which were driven by a single mainframe computer, the Mission Simulators used networks of several computers14. Honeywell won a $4.2 million contract on July 21, 1966, to supply DDP-224 computers for the complexes15. Singer-Link was again the contractor for the simulators. Singer allocated three computers to the Command Module Mission Simulator and two to the Lunar Module Simulator. The sets of computers could communicate among themselves through 8K words of common memory, where information needed throughout the simulation could be stored16. Later, a third and a fourth computer were added to the Lunar and Command Module Simulators, respectively. These [276] computers simulated the on-board computers. By the Apollo 10 flight, a fifth computer, simulating the launch vehicle, completed the Command Module Simulator computer complex17. The two types of simulators and the Mission Control Center could do integrated simulations, thus requiring up to 10 digital computers to work on one large problem simultaneously18.
 
Software became as important to the simulated world of Apollo as it was to the real one. Software development for the Apollo Mission Simulators required the efforts of 175 programmers at the peak, compared with 200 hardware personnel19. Over 350,000 words of programs and data eventually ran in the two simulators. Using digital computers, trainers could return the crews to a given point in a simulation and try again, simply by recording the status of the computers and data on magnetic tape and then reloading memory to match the state of the software at the desired time. This sort of flexibility made the training task much easier.
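In outline, this is a checkpoint-and-restore scheme. The sketch below is a loose modern analogy (a Python dictionary and a file stand in for simulator memory and magnetic tape):

```python
# Hypothetical sketch of checkpoint-and-restore: record the full
# simulation state, then reload it to repeat a training run from the
# same point. The Apollo simulators wrote memory to magnetic tape;
# pickle and a file stand in for that here.

import pickle

def checkpoint(state, path):
    with open(path, "wb") as f:
        pickle.dump(state, f)            # "write memory to tape"

def restore(path):
    with open(path, "rb") as f:
        return pickle.load(f)            # "reload memory from tape"

sim_state = {"mission_time_s": 512.0, "fuel_kg": 7431.0, "phase": "ascent"}
checkpoint(sim_state, "run1.ckpt")
sim_state["mission_time_s"] = 900.0      # the simulation continues...
sim_state = restore("run1.ckpt")         # ...then the crew tries again
assert sim_state["mission_time_s"] == 512.0
```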
 
Early in the development of the Apollo simulators, a problem arose that would have had critical consequences if left unsolved. The importance of the on-board computer to the guidance and navigation of a moon-bound spacecraft was obvious. Crews interacted with the computer thousands of times in a typical mission; its keyboards contained the most used switches in the spacecraft. Initially, the Apollo Guidance Computer (AGC) in both the command module and the lunar module was simulated functionally, just like the rest of the spacecraft hardware20. This meant that the major components of the Apollo modules existed as software in a DDP-224 rather than in their physical form in the simulator.
 
Even so, functionally simulating the on-board computer soon proved to be nearly impossible. Mathematical models and algorithms for specific Apollo missions had to be sent to the simulator programmers from the Instrumentation Laboratory at MIT. Although Singer contracted for over 20 experienced IBM programmers, the development of functional simulations lagged21. The programmers had to take the logic and create software for the DDP-224s that executed just like the software on the AGC. Essentially, they recoded programs already being coded for the real computer, but in a different machine language. Warren J. North of the Computational Analysis Division at the Manned Spacecraft Center studied the process of creating the new software and found that it took about 4 months to write a functional simulation. Since crews needed the software for training at least 6 months before the mission, and some buffer had to be allowed for last-minute glitches and their solutions, software designs for the AGC, developed at MIT, had to be available a full year before a flight, a very difficult schedule to meet at the time22. As a result of this study and the continued concern of the Apollo Spacecraft Project Office, W. B. Goeckler of the Systems Engineering Division of the program [277] asked James L. Raney of Computational Analysis to do a feasibility study of using a DDP-224 to simulate the AGC23. Goeckler thought it might be possible to make the Honeywell computer think it was the MIT computer and execute the MIT code, thus eliminating the need to rewrite the programs and solving the time problem.
 
When Raney joined Apollo in February 1966, he faced a rather interesting question: Could a floating-point, two's complement, 24-bit computer with accumulators and index registers run programs written for a fixed-point, one's complement, 16-bit machine that buried its registers in memory? Hardly likely, thought just about everybody except Raney.
 
Instead of a functional simulation, a computer running another computer's code uses interpretive simulation techniques: it takes a single instruction from the other machine's program, executes it using as many instructions as necessary from its own repertoire, and then goes on to the next. Since the AGC had a unique interrupt structure and arithmetic capabilities that were limited compared with the DDP-224's, many Apollo instructions took multiple Honeywell instructions to bridge the differences.
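In skeleton form, an interpretive simulator is a fetch-decode-dispatch loop. The following sketch is hypothetical (the opcode layout is modeled loosely on the AGC's 3-bit operation code and 12-bit address field; the handlers and sample program are invented):

```python
# Hypothetical skeleton of an interpretive simulator: fetch one guest
# (AGC-style) instruction, carry it out with as many host operations
# as needed, then move on to the next.

def make_handlers(state):
    def op_add(memory, addr, next_pc):   # guest ADD: acc += memory[addr]
        state["acc"] += memory[addr]
        return next_pc
    def op_halt(memory, addr, next_pc):  # guest HALT: stop the loop
        return None
    return {0: op_add, 7: op_halt}

def run(memory, handlers):
    pc = 0
    while pc is not None:
        word = memory[pc]
        opcode = (word >> 12) & 0o7      # top 3 bits: operation code
        addr = word & 0o7777             # low 12 bits: operand address
        pc = handlers[opcode](memory, addr, pc + 1)

state = {"acc": 0}
#         ADD 4     ADD 5     HALT      (cells 3-5 hold data)
memory = [0o00004, 0o00005, 0o70000, 0, 2, 3]
run(memory, make_handlers(state))
print(state["acc"])   # 5: two guest ADDs executed via host routines
```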
 
Raney suggested both hardware and software modifications to the DDP-224. He specified a switch to disable the machine's floating-point capability, and instructions were added to enable more efficient table searching and other operations that the AGC did well. To handle the different word sizes, Raney let the right-most 14 bits of the DDP word be the value of the corresponding AGC word; the left-most bit was always set to zero to indicate an Apollo word, and the intervening bits matched the sign bit of the original word. Words that could not be translated (i.e., executed one for one) had to be executed by interpretive subroutines written for the purpose and stored in the lower part of the Honeywell memory. Raney figured that since the DDP had a 10-to-1 advantage in execution speed over the AGC, several instructions could be used to do one Apollo instruction without slowing down the program. He used the index registers in the Honeywell DDP to act as the Fixed Bank Register, which kept track of which core rope memory module the AGC was currently using, as well as the address of the next instruction. Finally, to store the AGC code, the flight program was put in the upper half of the 64K words of core, with the interpreter the AGC used to execute its own instructions placed in an area of lower core. The contents of the AGC's 2K erasable memory and the 8K of common core addressable by all the simulator computers were also in lower core, along with Raney's interpretive subroutines24.
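Raney's word mapping can be made concrete. The sketch below assumes only what the text states: 14 value bits on the right, sign-filled intervening bits, and a zero left-most bit; the exact bit positions are illustrative:

```python
# Sketch of the word mapping described above: the right-most 14 bits
# of the 24-bit DDP word hold the AGC value, the intervening bits
# repeat the AGC sign bit, and bit 23 is zero to mark the word as an
# Apollo word. (Bit positions here are illustrative.)

def agc_to_ddp(agc_word):
    value = agc_word & 0x3FFF            # 14 value bits
    sign = (agc_word >> 14) & 1          # AGC one's-complement sign bit
    fill = (0x1FF if sign else 0) << 14  # bits 14-22 copy the sign
    return fill | value                  # bit 23 left as 0

print(oct(agc_to_ddp(0o00005)))  # +5 passes through unchanged
print(oct(agc_to_ddp(0o77772)))  # -5 in one's complement becomes
                                 # 0o37777772: sign-filled, bit 23 clear
```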
 
Despite Raney's careful evaluation of the situation and his proposed solution, many Apollo project personnel opposed it, simply feeling it was unworkable25. In desperation, NASA approved the attempt at an interpretive simulator and bought the modified computers. In the end, [278] the simulation within a simulation was spectacularly successful. Even though Raney and his team took care to time the subroutines so that they matched execution of the actual Apollo code, the simulated computer was faster than the real article. Following the Apollo 9 earth-orbiting mission that tested the command module and lunar module rendezvous techniques, pilot Dave Scott complained that he had up to 12 seconds less time to react when the computer signaled for a maneuver to begin. This was adjusted for later flights.
 
Developing the interpretively simulated AGC had several impacts on the program. MIT could use the simulator as a field test of its code before flight: since MIT sent the programs to Houston and the Cape on tape rather than core rope, errors discovered could be corrected and the corrections then tested in a "real" situation. Crews could react to the way the software worked with them. Also, the simulator cost just $4.6 million, compared with an estimated $18 million for functionally simulating the programs.
 
The Apollo Mission Simulators were the last of their type: the analog environment of the spacecraft, which had dictated hybrid and functional simulations, changed to a digital environment that lent itself to fully digital simulation for the Shuttle program. Evolution to full digital simulation, including digital imaging of window scenes, meant even more dependence on digital computers. Making the Shuttle a more autonomous and thus more complex spacecraft contributed to a massive increase in the size of the computer systems needed to support simulations.
 
 
Full Digital Reality: Computers in the Shuttle Mission Simulators
 
 
The difficulty of producing a fully digital simulation of the Shuttle may be appreciated from the fact that when NASA issued the first request for proposals for the Mission Simulators, there was no response26. Singer, which by then had converted Precision Link into the Simulator Products Division, eventually responded with a plan for a detailed analysis of the simulation problems of the Shuttle. NASA had already decided that the extreme cost of developing Shuttle simulators would be moderated by acquiring fewer of them27. Shuttle program director Robert F. Thompson formed a committee in 1970 to monitor development of the simulators and to involve the projected users, the Flight Crew Operations Division and the Flight Operations Division, in its design28. Singer considered the requirements and suggested a large complex of mainframe computers working through limited-task minicomputers to drive the simulator.
 
All Shuttle simulators are located at the Johnson Space Center. The fixed-base simulator replicates the four crew stations on the flight....
 
 

 


[279]

Figure 9-2. The fixed base Shuttle Simulator (upper center) with some of its electronics. (NASA photo S-81-27526)

 

....deck of the orbiter. It has window views through both aft windows and the overhead windows. The simulator is hosted by four Sperry Corporation UNIVAC 1100/40 mainframe computers, while 15 Perkin-Elmer minicomputers (mostly 8/32s) provide digital images for the windows, interface with the on-board computers, and perform other functions, acting as fancy channel directors for the mainframes29. A motion-base simulator recreates the two forward crew stations, all forward window views, and the heads-up display used in landing. Also hosted by four 1100s, it needs only 11 minicomputers because of its lesser digital image requirements. The fixed-base simulator not only has to display proper images of the earth and the cargo bay but must also image the remote manipulator arm and any payloads, thus requiring the power of five of the 8/32s. Supplementing the two primary Mission Simulators is the Shuttle Procedures Simulator. Also called the "Spare Parts Simulator," it was often cannibalized to keep the more critical Mission Simulators running30. In the early 1980s it was scrapped, and a Guidance and Navigation Simulator was built out of its remaining parts. It is used for some part-task training.

 
Singer quickly decided that the Shuttle's on-board computers could not be interpretively simulated as the AGC had been31. IBM's AP-101 machines used on the spacecraft were roughly as fast as the [280] UNIVAC computers, eliminating the speed advantage the DDP machines had enjoyed over the AGC32. Functional simulation of five computers working in concert was also out of the question. Therefore, each simulator had five actual flight computers, just as in the real spacecraft, and NASA bought two more as spares. During the course of the program, however, the computers began failing. With training schedules calling for simulation runs of 16 hours a day plus maintenance and reloading, several computers reached 30,000 hours of operation, far beyond the operational life of the flight version. Roughly 12 or 13 are actually available at any one time, with the two primary simulators always kept at a full complement of five and the Spare Parts Simulator using the rest33. The mass memory unit (MMU) of the on-board computer system, the magnetic tape drive that stores the software, is functionally simulated: it proved impossible to keep the actual mass memories running long enough to be cost-effective. Designed for only a few minutes of use on each flight, they fell apart under the demands of the simulators. A disk drive controlled by an IBM Series/1 processor replaced the MMU, with delays built in to make it load as slowly as a tape would.
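The built-in delays amount to pacing a fast device at a slow device's data rate. A minimal sketch of the idea (the Series/1 implementation itself is not documented here; block sizes and timings are invented):

```python
# Hypothetical sketch of pacing a fast disk so it delivers data no
# faster than the tape unit it replaced, preserving the loading
# delays the crews would see in flight.

import time

def throttled_read(read_block, n_blocks, tape_block_seconds):
    """read_block(i) returns block i from the (fast) disk; the sleep
    stretches delivery to the simulated tape drive's block time."""
    for i in range(n_blocks):
        start = time.monotonic()
        block = read_block(i)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, tape_block_seconds - elapsed))
        yield block

# deliver 3 blocks no faster than one block per 0.1 s, as a tape would
for block in throttled_read(lambda i: bytes(64), 3, 0.1):
    pass
```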
 
Software for the Shuttle Mission Simulators is based on a 20-millisecond cycle controlled by a special real-time clock that sends a signal to all participating computer systems34. This is about the only way the large number of computers can be kept in step. The operating system for the UNIVAC machines is a commercial version that is no longer supported by Sperry, so NASA has had to specifically contract for maintenance on the system to avoid having to change the rest of the software to match a new one35. Singer wrote the real-time operating system used on the Perkin-Elmer machines36. Despite the large number of programmers on Singer's Shuttle Simulator payroll (200+ of 611 people), it subcontracts with Perkin-Elmer for some software. This removes the developers from NASA managers by another layer of management and has resulted in unsatisfactory products37. In 1980, NASA's Robert Ernull, who had years of experience in the on-board software division, was named head of the simulator division to help clear up problems with the complex simulators. He tried to reduce the throughput required of the computers to 70% of total capability to allow room for changes. This did not help what he considered a second major problem: lack of memory. Memories were so full that any modification caused a crisis38.
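A minimal sketch of such a fixed-cycle scheme (in the real simulator the 20-millisecond tick was a hardware clock signal distributed to every machine; here a single loop stands in for it):

```python
# Hypothetical sketch of a 20-millisecond master cycle: one clock
# paces every participating subsystem so they stay in step.

import time

CYCLE_S = 0.020   # the 20-ms frame described in the text

def run_frames(subsystems, n_frames):
    next_tick = time.monotonic()
    for frame in range(n_frames):
        for update in subsystems:     # each subsystem runs once per frame
            update(frame)
        next_tick += CYCLE_S          # schedule against the clock so
        time.sleep(max(0.0, next_tick - time.monotonic()))  # drift
                                      # does not accumulate

run_frames([lambda f: None, lambda f: None], n_frames=5)
```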
 
Aside from the more traditional Mission Simulators, NASA is beginning to use microcomputers to replace the expensive part-task trainers of the past. A system called Regency provides a programmable touch screen with a 64 by 64 grid of touch-sensitive spots. Detailed graphics of switches and indicators, as well as component schematics, can be displayed, so that trainees communicate with the teaching software by touching the screen in the appropriate place. The teaching software is based on [281] techniques developed for the PLATO system at the University of Illinois. Increased use of microcomputers and other small computers for more generalized training will come as the space program enters the Space Station era. Simulating large spacecraft in full will be financially impossible, but simulation of critical crew stations, using software and graphics for flexibility, will be possible. Given the present direction, it appears that some sort of generic trainer with its characteristics controlled by software will be the mainstay of the training program, replacing the large computer complexes of the past and present.
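Mapping a touch to a displayed control on such a grid is a simple quantization. A hypothetical sketch (screen sizes, cell assignments, and control names are invented, not Regency's actual software):

```python
# Hypothetical touch handling on a 64 x 64 spot grid: quantize the
# touch position to a cell, then look up which displayed switch or
# indicator occupies that cell.

GRID = 64

def touch_to_cell(x, y, screen_w, screen_h):
    col = min(GRID - 1, int(x * GRID / screen_w))
    row = min(GRID - 1, int(y * GRID / screen_h))
    return row, col

# cells occupied by a drawn switch (assumed layout for illustration)
controls = {(10, 12): "CABIN FAN", (10, 13): "CABIN FAN"}
print(controls.get(touch_to_cell(100, 84, 512, 512), "no control here"))
```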
 
 

Figure 9-3. One of the instructional screens of the Regency system used in training.

 
 
Engineering Simulators


[282] While flight simulators are the glory of the simulation business, engineering simulations help make spaceflight possible. Many times highly innovative systems proved themselves in extremely accurate simulations. One example is the control moment gyro system used for attitude control of Skylab. A large simulator constructed at the Marshall Space Flight Center gave engineers valuable data about the behavior and feasibility of the system, which was understood by few aside from its inventor. Also at Marshall was a simulation of the Shuttle's main engines; these first computer-controlled rocket motors run much hotter and closer to destruction than any predecessors, and software for the engine controllers can be tested and certified in the simulator. At Johnson Space Center and the Rockwell plant in Downey, California, are full-scale engineering simulators of the entire Shuttle orbiter. Early in the program, engineers led by Kenneth Mansfield at Johnson used these simulators to work out preliminary concepts, flight techniques, and procedures, using functional simulations (no flight hardware). After the installation of actual hardware, changes to individual hardware and software components could be checked in those simulators for integration with the remainder of the spacecraft. Thus, engineering simulators help engineers with requirements analysis, prototyping, verification of concepts, and integration testing.
 
Simulation of components involved in rocket flight began in the late 1930s with the German development group at Peenemunde, where attitude control systems were simulated. In 1939, a one-axis mechanical simulator of the A-4 rocket's motion about its center of gravity provided valuable control data without the expenditure of test vehicles39. That device led conceptually to a more robust electronic analog simulation of the control system, designed by Helmut Hoelzer and built under his direction by Otto Hirschler. Included in that simulator was an analog device to correct for the vehicle's lateral drift in flight. Completed in 1941, the simulator was the most advanced analog computer built up to that time40.
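What a one-axis motion simulator computes can be sketched as rigid-body rotation about the center of gravity under a control torque, integrated step by step. The numbers below are purely illustrative, not A-4 data:

```python
# Hypothetical sketch of a one-axis attitude simulation: rigid-body
# rotation about the center of gravity under a simple control torque,
# integrated step by step. (All figures are illustrative only.)

def simulate_pitch(inertia, gain, damping, dt=0.01, t_end=10.0):
    theta, omega = 0.1, 0.0          # initial pitch error (rad) and rate
    t = 0.0
    while t < t_end:
        torque = -gain * theta - damping * omega   # simple control law
        omega += (torque / inertia) * dt           # angular acceleration
        theta += omega * dt
        t += dt
    return theta

print(f"residual pitch error: {simulate_pitch(100.0, 50.0, 40.0):.5f} rad")
```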
 
Following World War II, the Peenemunde group brought the concepts of simulation to the United States. Hoelzer became head of the Computation Laboratory at the Army Ballistic Missile Agency research site in Huntsville, Alabama. When NASA absorbed the Agency's Huntsville facilities in 1959, Hoelzer continued his work and gathered a powerful set of digital and analog computation devices at the Marshall Space Flight Center. So much simulation work needed to be done that Hoelzer developed a simulation system consisting of a set of general-purpose digital, analog, and hybrid computers that several projects could use. Usually consisting of a large analog device and supporting digital minicomputers, the hybrids modeled booster [283] flight characteristics and tasks such as payload loading, space telescope pointing, attitude control problems, circuit design, and mission support41. One system, the SMK-23, modeled moving vehicles such as the lunar rover, providing television window views inside a closed control cockpit42. Besides this central facility, Marshall Space Flight Center also developed two large stand-alone simulators for special complex problems: the Skylab Attitude and Pointing Control Simulator and the Hardware Simulation Laboratory used to model the space Shuttle main engines.
 
 
Skylab Simulators
 
 
Prior to Skylab, the primary method of attitude control in a spacecraft was the use of reaction control system jets burning liquid fuels. With a mission profile of up to a year's worth of occupancy and experimentation, Skylab could hardly carry enough fuel to maneuver its bulk for that length of time. The alternative was a highly innovative and complex control moment gyro (CMG) system. A redundant digital computer system provided commands to the system in orbit. Studying the operation of the complete system, including the computer and its attendant software, required the construction of a complete laboratory dedicated to the task.
 
Three rooms of the Astrionics Laboratory at Marshall were set aside for the simulator. In the Black Room sat the hardware that simulated the motion of the space station. The Green Room held the control devices and some of the computing equipment, with the remainder in the adjoining computer room. The primary computer for the simulator was a hybrid consisting of an XDS Sigma V digital computer and Comcor Ci5000 and Ci550 analog computers. These drove the simulation of the orbital workshop and interfaced with the Apollo Telescope Mount Digital Computer (ATMDC), which flew on the actual spacecraft. A SEL 840 digital computer sent digital commands to the on-board computer43.
 
Originally, use of the simulator concentrated on mission planning and on hardware and software verification tasks, and engineers expected to operate it less than half the working hours of a normal week. However, because of the severe hardware failures on the actual mission, the simulator went to 24-hour-a-day, 7-day-a-week operation. First, the micrometeoroid shield and solar panels were damaged during ascent, which meant that the workshop had to be oriented in ways not set out in the requirements. For nearly 2 weeks, while Marshall prepared tools and techniques for making repairs with the first crew aloft, the simulator tested attitude control maneuvers that would keep the workshop from excessive internal heating. Later failures, especially the loss of a CMG, were successfully modeled and solutions devised. As in the Apollo 13 flight, ground simulation of actual flight damage led to safe alternatives.
 
 
Space Shuttle Main Engine Simulator
 
 
[284] The main engine of the space Shuttle is another complicated device that needs its own simulator. The Hardware Simulation Laboratory is the primary site for verifying the design of the main engines, testing the engine controller software, preparing for hardware changes such as new controllers, and modeling failures such as the faulty valves and sensors that caused engine shutdowns on the pad and in flight during the Shuttle program. Begun in the early 1970s, the engine simulator became operational by 1975. At the heart of the first version of the Laboratory were two Ci5000 analog computers and a SEL 840 MP digital computer. The engine, actuators, and sensors are simulated with this hybrid system; actual engine control computers are mounted in the simulator44.

 


Figure 9-4. A collage depicting the Hardware Simulation Laboratory at the Marshall Space Flight Center used for testing the Shuttle Main Engine Controllers. (NASA photo 331594)

 
 
[285] Since Marshall was responsible for all the booster components of the Shuttle, it developed other devices that modeled those components in the largest of the active engineering simulators, the Shuttle Avionics Integration Laboratory.
 
 
SAIL: Fully Operational Shuttle Skeleton
 
 
The Shuttle Avionics Integration Laboratory, or SAIL, one of the largest engineering simulators ever built, sits in a big bay at the Johnson Space Center. A fully functioning skeleton of the Shuttle orbiter, it contains all the avionics components used on the real orbiter: nearly 1,750 black boxes weighing 6,000 pounds45. In fact, they are placed in exactly the same positions as in the actual spacecraft, so that components can be certified and any changes made to the avionics can be tested. Also, software for a particular flight can be run to check for errors. Through the first six flights of the Shuttle program, the SAIL accounted for 241 errors found in the primary software and 196 errors in the backup software. For the first flight, SAIL operated for 644 shifts, and it has since averaged 80 shifts in support of each mission. Just short of 350 contractor and NASA personnel manned the Lab in its operational phase.
 
Planning and construction of the SAIL began in 1968, when the Shuttle Engineering Simulator first began operations. This simulator, still functioning after many modifications 15 years later, replicates a cockpit. Scene generators for one forward window and both rear and overhead windows, as well as four SEL minicomputers and a Control Data Corporation Cyber 74 mainframe, drive the simulator. Preliminary work on this simulator gave experience that contributed to the SAIL, which started in 197246.
 
Until January of 1983, the SAIL itself consisted of a guidance and navigation test station; the Shuttle Test Station, which is the skeletal orbiter; the Marshall Mated Element System, which simulates the propulsion system; a ground standard interface unit, which sends commands to and acquires data from the SAIL for display; and a subset of the Launch Processing System. Since the avionics system is the only real hardware in the orbiter mock-up, the orbital maneuvering engines, reaction control system, main propulsion, and other non-avionics boxes must be simulated by computer software or analog devices. To preserve exact signal timing, these simulators must in some cases be located farther from the spacecraft skeleton than the real equipment. The forward reaction control jet simulation boxes, for example, are over 10 meters from the spacecraft nose. Since Marshall contributed 55 racks of electronics, and Kennedy Space Center sent the Launch Processing System subset, each center can use the SAIL to verify software written for equipment under their....
 
 

 


[286]

Figure 9-5. Astronaut John O. Creighton in the cockpit of the Shuttle Avionics Integration Laboratory. (NASA photo S-79-39162)

 
 
 

....development control, such as the engine controllers from Marshall and the interfaces and selected test software from Kennedy47.
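The timing constraint just described reflects signal propagation: electrical signals travel through cable at roughly 5 nanoseconds per meter, so the placement of a simulation box sets the delays the avionics sees. A rough, purely illustrative check (the figures are generic cable properties, not SAIL specifications):

```python
# Rough illustration of why box placement matters: propagation in
# typical cable is about 5 ns per meter (roughly 0.66c), so cable
# runs reproduce the delays the avionics would see in the orbiter.

NS_PER_METER = 5.0   # assumed propagation delay for typical coax

def cable_delay_ns(length_m):
    return length_m * NS_PER_METER

print(f"10 m run: {cable_delay_ns(10):.0f} ns one way")
```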

SAIL operators can monitor tests from display control modules connected to the interface unit. The consoles have color monitors and individual processors used for fault detection. Aside from validating engineering changes and software, the SAIL is used for validating tests to be carried out later on the spacecraft while it is being prepared for flight48.

 
An extensive hybrid computing center drove the SAIL and its attendant simulators during its first decade. A pair of EAI 8800 analog computers simulated the landing gear, runway, and braking; a pair of 7800s represented the aerosurfaces and rate gyros. These analog computers were replaced with a pair of Gould SEL 32/8780 digital minicomputers in 1983. Other SELs provide a digital autopilot simulation, equations of motion, the radar altimeter, and other non-avionics functions49. Separate computers generate the window scenes. These are so much better than those in the Shuttle Mission Simulator, especially in regard to the Remote Manipulator System, that crews prefer to use the Shuttle Engineering Simulator in the SAIL for training when a mission requiring use of the arm is coming up50.
 
[287] Since the Shuttle was the first manned spacecraft to fly without unmanned development flights, the SAIL's importance cannot be minimized. By essentially replicating the entire spacecraft and its operations exactly as the spacecraft currently exists, the SAIL gives NASA and the astronaut crews confidence in the hardware and software for each mission. In its role, SAIL is the ultimate engineering simulator.
 
 

Figure 9-6. Image processing makes possible scenes from alien worlds such as this panorama of the Martian surface. (JPL P-17982)
 
