The Voyager pictures of the Saturnian satellites are digital images; they are transmitted from the spacecraft as a stream of numbers, each representing a shade of gray of a specific point in the picture. When these numbers are placed in their proper location in an array of rows and columns, and each number replaced by a tone of the correct gray shade, an image is formed that is visible to the human observer (fig. C-1). These operations are performed with computers and computer-driven film-writing devices. The technique is called "digital image processing." (See Moik, 1980.) Although the methods were originally developed by space scientists for processing lunar pictures, they are now used in medicine for enhancing X-ray photographs, by resource scientists for enhancing aerial and spacecraft views of Earth, and by a variety of other specialists.
Two levels of digital processing are commonly used for planetary mapping. Level 1 is intended to remove all image artifacts, including noise and shading. Level 2 processing changes the geometric shape of the image to match an appropriate map projection. The level 1 and 2 images are carefully preserved on magnetic tape without contrast enhancement. Contrast enhancements are then applied to film images as needed, without modifying the digital tapes. Each Voyager picture contains 800 rows of 800 picture elements, called "pixels" (fig. C-2). Each pixel is assigned a density number (DN) by the spacecraft imaging system, according to the brightness of an image projected on the vidicon tube by the Voyager camera. (See app. A.) The Voyager imaging system is capable of discriminating and transmitting 256 shades of gray. The DNs in a Voyager picture thus range from 0 (black) to 255 (white). The camera can be commanded to take pictures at shutter speeds ranging from 1/10th to 1/200th of a second (table A-1).
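The row-and-column assembly described above can be sketched in a few lines of Python. The 4 × 4 frame and the DN stream below are toy stand-ins for a real 800 × 800 Voyager frame, not mission software.

```python
def assemble_frame(dn_stream, width, height):
    """Place a flat stream of DN values (0-255) into rows and columns,
    one row at a time, as the ground processing does for each frame."""
    if len(dn_stream) != width * height:
        raise ValueError("stream length must equal width * height")
    return [dn_stream[r * width:(r + 1) * width] for r in range(height)]

# Toy 4 x 4 frame; a real Voyager frame is 800 x 800.
stream = [10, 20, 30, 40,
          50, 60, 70, 80,
          90, 100, 110, 120,
          130, 140, 150, 160]
frame = assemble_frame(stream, 4, 4)   # frame[2][1] is the DN 100
```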
A variety of factors degrade any spacecraft television picture. Electronic fluctuations within the spacecraft cause a variation in the sensitivity of the system and make the DN that should represent black, or no signal, larger than 0. Spurious DN values are often injected into an image by fluctuations in electrical currents or magnetic fields in or near the camera. Segments of a string of DN values may be lost during transmission. Although lens distortions are virtually negligible in the Voyager cameras, geometric distortions are introduced into the pictures by the electronic recording and transmitting systems. All of these problems can be corrected, or significantly reduced, by digital image processing techniques. For example, pictures of empty space will contain many DN values other than 0 in a pattern that does not vary significantly from one frame to the next (fig. C-3). Because it is known that all these values should equal 0, subtracting DN values in black-sky pictures from the DN values in the same rows and columns on pictures of a planet or satellite results in a picture that has been corrected for radiometric distortions.
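The black-sky subtraction just described can be sketched as follows. The tiny 2 × 2 frames and the clamp at 0 are illustrative assumptions, not the actual mission pipeline.

```python
def radiometric_correct(image, dark_frame):
    """Subtract a black-sky (dark) frame from an image DN-for-DN,
    clamping at 0 so no corrected DN goes negative."""
    return [[max(dn - dark, 0) for dn, dark in zip(img_row, dark_row)]
            for img_row, dark_row in zip(image, dark_frame)]

image = [[12, 40], [8, 200]]   # toy 2 x 2 frame of a target
sky   = [[10, 10], [10, 10]]   # black-sky frame: these should all be 0
corrected = radiometric_correct(image, sky)   # [[2, 30], [0, 190]]
```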
Spurious DN values (bit errors and dropped lines) can be detected by computer programs that examine the rate of change in DN values that are adjacent to each other in an image. Abrupt changes do not usually occur in images of natural surfaces, but are likely to result from data anomalies. They are corrected by replacing the DN values of pixels that deviate significantly from their surroundings with the average DN value of neighboring pixels.
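A minimal despiking pass along these lines might look like the sketch below. The 4-connected neighbourhood, the threshold, and the use of the neighbourhood median (rather than the mean, which a large spike would drag toward itself) are all illustrative choices, not the Voyager programs themselves.

```python
import statistics

def despike(image, threshold=50):
    """Replace any pixel that deviates sharply from the median of its
    4-connected neighbours with that median (a simple stand-in for the
    bit-error and dropped-line repair described in the text)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(h):
        for c in range(w):
            nbrs = [image[rr][cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < h and 0 <= cc < w]
            med = statistics.median(nbrs)
            if abs(image[r][c] - med) > threshold:
                out[r][c] = round(med)
    return out

frame = [[20, 20, 20],
         [20, 255, 20],   # a single bit-error spike
         [20, 20, 20]]
cleaned = despike(frame)   # the 255 becomes 20; the rest are untouched
```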
The purpose of level 1 processing is to restore the pictures to the quality that would have been produced by a "perfect" camera, transmission, and film-recording system. Figures C-4 and C-5 illustrate this processing phase. Figure C-4 is a "raw," or uncorrected, Voyager picture of Dione. Figure C-5 is the same picture after radiometric correction, reseau and blemish removal, and contrast enhancement. This atlas is intended to show the preliminary cartographic products of the Voyager mission, and hence contains only level 1 images.
Final map compilations are being done with level 2, or geometrically processed, images. This processing entails removal of camera distortions and transformation of the images to appropriate map projections. Distortion corrections are based on the known positions of calibration dots (reseau marks) in an image. These marks are etched in the photosensitive surface on the vidicon faceplate so that they appear in every picture. The positions of the reseau marks were measured on Earth before the spacecraft was launched. When the pictures were received on Earth, the reseau marks were located in each image and their positions recorded. The reseau pattern is clearly visible in figures C-3 and C-4. Although the marks themselves are removed during the level 1 phase by processing similar to that used for bit errors and dropped lines, their recorded positions can still be used to control geometric transformations. In correcting distortions, each image is stretched like a rubber sheet in such a way as to restore the reseau marks to their original, correct locations.
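As a one-dimensional sketch of the idea, a linear "stretch" along one image axis can be recovered from matched reseau positions by least squares. The real correction is a two-dimensional rubber-sheet warp; the nominal and measured coordinates below are invented for illustration.

```python
def fit_axis(measured, nominal):
    """Least-squares fit of nominal = a * measured + b from pairs of
    reseau-mark coordinates along one image axis."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(nominal) / n
    var = sum((x - mx) ** 2 for x in measured)
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, nominal))
    a = cov / var
    b = my - a * mx
    return a, b

# Suppose the camera electronics stretched columns by 2 percent and
# shifted them 3 pixels; fitting recovers the correcting transformation.
nominal  = [100.0, 300.0, 500.0, 700.0]        # pre-launch positions
measured = [x * 1.02 + 3.0 for x in nominal]   # positions in the image
a, b = fit_axis(measured, nominal)
# a * m + b maps each measured mark back onto its nominal position
```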
Each geometric correction of a digital image results in a slight loss in resolution. For this reason the correction of camera distortions is done simultaneously with transformation to a map projection, so that the actual geometric correction is performed only once.
Transformation to map projections is based on orientation matrices derived by analytical photogrammetry (Davies and Katayama, 1983a,b). This process involves a complex mathematical analysis of the positions of images of selected features (control points) on several spacecraft television pictures. The final results of this calculation include precise latitudes and longitudes for the control points and a set of linear equations that precisely define the orientation of the spacecraft camera with respect to the satellite at the time each picture was taken. These equations are used to change the shape of mapping pictures so that they will fit specified projections. Figure C-6 is a geometrically corrected, map-projected version of the Dione image of figure C-5.
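For instance, an orthographic projection (one common choice for satellite maps) maps a control point's latitude and longitude to map coordinates as below. The projection centre and the radius, roughly Dione's, are assumed for illustration and are not the atlas's actual projection constants.

```python
import math

def orthographic(lat_deg, lon_deg, lat0_deg=0.0, lon0_deg=0.0,
                 radius_km=560.0):
    """Orthographic projection of (lat, lon) about a chosen centre
    (lat0, lon0).  radius_km of 560 is roughly Dione's radius; all
    parameters here are illustrative."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    x = radius_km * math.cos(lat) * math.sin(lon - lon0)
    y = radius_km * (math.cos(lat0) * math.sin(lat)
                     - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0))
    return x, y

x, y = orthographic(45.0, 10.0)   # a sample control point
```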
A digital television image, with its 256 shades of gray, frequently contains far more information than can be discriminated by the human eye or shown on film. Before it is printed on film (with or without geometric correction), a decision must be made as to which aspect of the image to emphasize. If it contains both very dark and very light areas, its contrast must be modified in the computer so that detail in both light and dark areas will be visible.
Two kinds of contrast manipulation are commonly used either singly or in combination on digital images. The first is a simple contrast change ("stretch") applied to the whole image. This is analogous to the photographic processing operation in which photographic papers of different contrast grades are used.
Contrast-stretch parameters for a given image are selected on the basis of a histogram of DN values in the image (fig. C-7). This histogram is a graph that shows the number of pixels of each DN value in the image. The lowest DN recorded for a significant number of pixels in the original image is set to 0 for the stretched image. The highest DN recorded for a significant number of pixels in the original image is set to 255 DN for the stretched image. Intermediate DNs are reset in proportion to the newly defined high and low values. Figures C-4 and C-5 illustrate the effect of contrast stretching.
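A linear stretch of this kind reduces, in code, to a single remapping of the DN scale. The cutoff values below would in practice be read off the histogram; here they are simply assumed.

```python
def linear_stretch(image, low, high):
    """Remap DNs so that `low` -> 0 and `high` -> 255, clamping values
    outside that range; intermediate DNs are rescaled in proportion."""
    scale = 255.0 / (high - low)
    return [[min(255, max(0, round((dn - low) * scale))) for dn in row]
            for row in image]

frame = [[30, 40], [90, 100]]                # a toy low-contrast frame
stretched = linear_stretch(frame, 30, 100)   # [[0, 36], [219, 255]]
```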
The second method of contrast enhancement, called high-pass filtration, changes local contrasts throughout the picture in such a way that small details, bright or dark, are adjusted to the same average gray tone. The effect of this computer technique is analogous to that of electronic dodging devices available on some photographic printers and enlargers, or the photographic technique of "unsharp masking." The small details are emphasized while broader tonal variations in the picture are subdued (figs. C-5 and C-8).
The following procedure is used to make high-pass filtered versions of digital images (fig. C-9):
(1) The filter size is selected on the basis of the size of image features to be emphasized. This size is defined in terms of pixel dimensions: a filter might be 3 pixels wide by 3 pixels high, or 51 pixels wide by 31 pixels high. The filter dimensions are odd numbers so that there is always a pixel in the exact center of the filter.
(2) The high-pass filter computer program computes the average DN value of all pixels within the filter, beginning at its first location in the upper left corner of the picture. This average is then subtracted from the actual DN of the pixel at the center of the filter box. Because there is about a 50-percent chance that this new filtered DN will have a negative value, a DN value representing a medium gray (127) is then added to it. The filter is then moved one pixel to the right, and the process is repeated. When filtered values for the first row of pixels have been computed, the filter box is moved down one row and filtered values for the second row of pixels are computed. The process is repeated until new values have been computed for each pixel in the image. The filters are affected by frame edges and abrupt contrast changes, so special program steps must be devised to deal with these problems. Several high-pass filter algorithms are in use, but they produce similar results.
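The steps above can be sketched as below, assuming the conventional pixel-minus-local-mean form with a mid-grey bias. Edge pixels, which the real programs treat with special steps, are simply left unchanged in this sketch.

```python
def high_pass(image, size=3, bias=127):
    """Box high-pass filter: each interior pixel becomes its DN minus
    the mean of the size x size box centred on it, plus a mid-grey
    bias.  `size` must be odd so the box has a centre pixel."""
    h, w = len(image), len(image[0])
    half = size // 2
    out = [row[:] for row in image]      # edge pixels left as-is
    for r in range(half, h - half):
        for c in range(half, w - half):
            box = [image[rr][cc]
                   for rr in range(r - half, r + half + 1)
                   for cc in range(c - half, c + half + 1)]
            mean = sum(box) / len(box)
            out[r][c] = round(image[r][c] - mean + bias)
    return out

frame = [[100, 100, 100],
         [100, 150, 100],   # a small bright detail
         [100, 100, 100]]
filtered = high_pass(frame)   # centre -> 171; edges unchanged
```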
Filter sizes for the Voyager pictures in this atlas were selected primarily on the basis of image scale so that landforms would be uniformly emphasized. In other words, rather than using, for example, a 101-X-101-pixel filter for all pictures, a 21-X-21-km filter might be used. To achieve this uniformity, a picture taken when Voyager was 22500 km from a satellite might be treated with a 101-X-101-pixel high-pass filter, whereas a picture taken from five times farther away (112000 km) might have a 21-X-21-pixel filter applied. No single filter size, however, was appropriate for the full range of available image resolutions. It was, therefore, necessary to group the images into high, medium, and low resolution and select the best filter for each group by trial and error.
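The range-dependent choice of filter size amounts to simple proportional scaling. This helper, including its rounding up to the nearest odd size, is an illustrative reconstruction rather than the atlas's actual procedure.

```python
def scale_filter(base_size, base_range_km, range_km):
    """Scale an odd filter dimension inversely with spacecraft range,
    so the filter spans roughly constant ground distance."""
    size = max(3, round(base_size * base_range_km / range_km))
    return size if size % 2 == 1 else size + 1

# A 101-pixel filter chosen at 22 500 km shrinks to about 21 pixels at
# roughly five times the range:
near = scale_filter(101, 22500, 22500)    # 101
far  = scale_filter(101, 22500, 112000)   # 21
```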