SP-474 Voyager 1 and 2 Atlas of Six Saturnian Satellites

 

APPENDIX C

Image Processing

 

[153] The Voyager pictures of the Saturnian satellites are digital images; they are transmitted from the spacecraft as a stream of numbers, each representing a shade of gray of a specific point in the picture. When these numbers are placed in their proper location in an array of rows and columns, and each number replaced by a tone of the correct gray shade, an image is formed that is visible to the human observer (fig. C-1). These operations are performed with computers and computer-driven film-writing devices. The technique is called "digital image processing." (See Moik, 1980.) Although the methods were originally developed by space scientists for processing lunar pictures, they are now used in medicine for enhancing X-ray photographs, by resource scientists for enhancing aerial and spacecraft views of Earth, and by a variety of other specialists.

Two levels of digital processing are commonly used for planetary mapping. Level 1 is intended to remove all image artifacts, including noise and shading. Level 2 processing changes the geometric shape of the image to match an appropriate map projection. The level 1 and 2 images are carefully preserved on magnetic tape without contrast enhancement. Contrast enhancements are then applied to film images as needed, without modifying the digital tapes. Each Voyager picture contains 800 rows of 800 picture elements, called "pixels" (fig. C-2). Each pixel is assigned a density number (DN) by the spacecraft imaging system, according to the brightness of an image projected on the vidicon tube by the Voyager camera. (See app. A.) The Voyager imaging system is capable of discriminating and transmitting 256 shades of gray. The DNs in a Voyager picture thus range from 0 (black) to 255 (white). The camera can be commanded to take pictures at shutter speeds ranging from 1/10th to 1/200th of a second (table A-1).
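The raster arithmetic described above can be sketched in a few lines. Python and NumPy are used here as a modern stand-in for the mission software, and the array names are invented for illustration:

```python
import numpy as np

# A Voyager frame is an 800 x 800 grid of picture elements ("pixels"),
# each holding a density number (DN) from 0 (black) to 255 (white).
ROWS, COLS = 800, 800

frame = np.zeros((ROWS, COLS), dtype=np.uint8)  # an all-black raster

# Placing a DN in its proper row and column of the raster:
frame[100, 200] = 127   # mid-gray at row 100, column 200

print(frame.shape)      # (800, 800)
```

An 8-bit integer type is the natural choice because it holds exactly the 256 gray shades the imaging system can transmit.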



Figure C-1. The principle of the digital image. A stream of numbers (density numbers (DNs)) representing shades of gray is received from the spacecraft by tracking stations on Earth. Each DN is placed in its correct location in an array of rows and columns ("raster") by a computer and is then replaced by its correct gray shade and printed on film by a computer-driven film-writing device. (Illustration by Susan L. Davis)



Figure C-2. Spacecraft digital imaging system. Picture elements, or pixels, are visible in the enlarged inset. (Illustration by Patricia M. Bridges.)

 

[154] A variety of factors degrade any spacecraft television picture. Electronic fluctuations within the spacecraft cause a variation in the sensitivity of the system and make the DN that should represent black, or no signal, larger than 0. Spurious DN values are often injected into an image by fluctuations in electrical currents or magnetic fields in or near the camera. Segments of a string of DN values may be lost during transmission. Although lens distortions are virtually negligible in the Voyager cameras, geometric distortions are introduced into the pictures by the electronic recording and transmitting systems. All of these problems can be corrected, or significantly reduced, by digital image processing techniques. For example, pictures of empty space will contain many DN values other than 0 in a pattern that does not vary significantly from one frame to the next (fig. C-3). Because it is known that all these values should equal 0, subtracting DN values in black-sky pictures from the DN values in the same rows and columns on pictures of a planet or satellite results in a picture that has been corrected for radiometric distortions.
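A minimal sketch of the black-sky subtraction just described, again in modern Python/NumPy rather than the original mission software; the function name and the tiny sample DN arrays are invented:

```python
import numpy as np

# Subtract the black-sky calibration DNs, row by row and column by
# column, from the science frame; differences below 0 are clipped,
# since a DN cannot be darker than black.
def subtract_black_sky(raw, black_sky):
    corrected = raw.astype(np.int16) - black_sky.astype(np.int16)
    return np.clip(corrected, 0, 255).astype(np.uint8)

raw = np.array([[12, 40], [9, 250]], dtype=np.uint8)        # science DNs
black_sky = np.array([[10, 10], [12, 10]], dtype=np.uint8)  # calibration DNs

print(subtract_black_sky(raw, black_sky))
# [[  2  30]
#  [  0 240]]
```

The widening to a signed type before subtracting avoids the wraparound that unsigned 8-bit arithmetic would otherwise produce.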

Spurious DN values (bit errors and dropped lines) can be detected by computer programs that examine the rate of change in DN values that are adjacent to each other in an image. Abrupt changes do not usually occur in images of natural surfaces, but are likely to result from data anomalies. They are corrected by replacing the DN values of pixels that deviate significantly from their surroundings with the average DN value of neighboring pixels.
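The despiking idea can be sketched as follows; the 3 x 3 window and the threshold value are illustrative choices of mine, not the mission's actual parameters:

```python
import numpy as np

# A pixel whose DN differs from the average of its eight neighbors by
# more than `threshold` is treated as a data anomaly and replaced by
# that neighborhood average.
def despike(image, threshold=50):
    img = image.astype(float)
    out = image.copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = img[r - 1:r + 2, c - 1:c + 2]
            neighbor_mean = (window.sum() - img[r, c]) / 8.0
            if abs(img[r, c] - neighbor_mean) > threshold:
                out[r, c] = round(neighbor_mean)
    return out

frame = np.full((5, 5), 100, dtype=np.uint8)
frame[2, 2] = 255                 # a single injected bit error
cleaned = despike(frame)
print(int(cleaned[2, 2]))         # 100
```

Pixels on smooth natural surfaces pass the test unchanged; only values that deviate abruptly from their surroundings, as bit errors do, are replaced.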

 



Figure C-3. A picture of black, featureless space used for in-flight calibration of Voyager cameras. Nonblack tones (exaggerated in this illustration for emphasis) are artifacts of the spacecraft imaging system. The DNs in this image are subtracted from raw images to produce radiometrically corrected pictures. Picno 0998J1-025.


 

[155] The purpose of level 1 processing is to restore the pictures to the quality that would have been produced by a "perfect" camera transmission and film recording system. Figures C-4 and C-5 illustrate this processing phase. Figure C-4 is a "raw," or uncorrected, Voyager picture of Dione. Figure C-5 is the same picture after radiometric correction, reseau and blemish removal, and contrast enhancement. This atlas is intended to show the preliminary cartographic products of the Voyager mission, and hence contains only level 1 images.

Final map compilations are being done with level 2, or geometrically processed, images. This processing entails removal of camera distortions and transformation of the images to appropriate map projections. Distortion corrections are based on the known positions of calibration dots (reseau marks) in an image. These marks are etched in the photosensitive surface on the vidicon faceplate so that they appear in every picture. The positions of the reseau marks were measured on Earth before the spacecraft was launched. When the pictures were received on Earth, the reseau marks were located in each image and their positions recorded. The reseau pattern is clearly visible in figures C-3 and C-4. Although the marks themselves are removed during the level 1 phase by processing similar to that used for bit errors and dropped lines, their recorded positions can still be used to control geometric transformations. In correcting distortions, each image is stretched like a rubber sheet in such a way as to restore the reseau marks to their original, correct locations.
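A much-simplified sketch of the rubber-sheet idea, in modern Python/NumPy: fit a single transformation, by least squares, that carries the measured (distorted) reseau positions back to their nominal prelaunch positions. The real correction varies locally across the frame; a global affine model and the coordinates below are my own illustrative stand-ins:

```python
import numpy as np

def fit_affine(measured, nominal):
    """Return a 3 x 2 matrix M such that [x, y, 1] @ M approximates nominal."""
    n = measured.shape[0]
    A = np.hstack([measured, np.ones((n, 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, nominal, rcond=None)
    return M

# Four reseau marks, slightly sheared by (hypothetical) camera distortion:
measured = np.array([[0.0, 0.0], [10.0, 0.5], [0.5, 10.0], [10.5, 10.5]])
nominal  = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

M = fit_affine(measured, nominal)
restored = np.hstack([measured, np.ones((4, 1))]) @ M
print(np.round(restored, 3))   # matches `nominal`
```

Once the transformation is known, it is applied to every pixel coordinate, stretching the image so that the reseau marks land at their correct locations.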

Each geometric correction of a digital image results in a slight loss in resolution. For this reason the correction of camera distortions is done....

 


Figure C-4. An untreated Voyager picture of Dione. Picno 0272S1+000.



[156]

Figure C-5. Voyager picture of Dione with bit errors, dropped lines, and reseau marks removed and contrast enhanced (level 1 processing). Picno 0272S1+000.


 

....simultaneously with transformation to a map projection, so that the actual geometric correction is performed only once.

Transformation to map projections is based on orientation matrices derived by analytical photogrammetry (Davies and Katayama, 1983a,b). This process involves a complex mathematical analysis of the positions of images of selected features (control points) on several spacecraft television pictures. The final results of this calculation include precise latitudes and longitudes for the control points and a set of linear equations that precisely define the orientation of the spacecraft camera with respect to the satellite at the time each picture was taken. These equations are used to change the shape of mapping pictures so that they will fit specified projections. Figure C-6 is a geometrically corrected, map-projected version of the Dione image of figure C-5.
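The final step, projecting latitude and longitude to map coordinates, uses standard cartographic formulas. Figure C-6 is a Mercator projection, for which the textbook equations are shown below; this is generic map-projection math, not the Voyager mapping code itself:

```python
import math

# Standard Mercator equations: longitude maps linearly to x, and
# y = R * ln(tan(pi/4 + lat/2)), so the equator lands on y = 0.
def mercator(lat_deg, lon_deg, radius=1.0):
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * lon
    y = radius * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

print(round(mercator(0.0, 45.0)[0], 4))   # 0.7854 (45 degrees in radians)
```

In practice the calculation runs in the other direction: for each pixel of the output map, the corresponding latitude and longitude are inverted through the camera orientation equations to find which pixel of the level 1 image to sample.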

A digital television image, with its 256 shades of gray, frequently contains far more information than can be discriminated by the human eye or shown on film. Before it is printed on film (with or without geometric correction), a decision must be made as to which aspect of the image to emphasize. If it contains both very dark and very light areas, its contrast must be modified in the computer so that detail in both light and dark areas will be visible.

 


[157]


Figure C-6. Geometrically corrected Mercator projection of figure C-5 (level 2 processing). High-pass filter and contrast stretch have been applied to this image.

 

Two kinds of contrast manipulation are commonly used either singly or in combination on digital images. The first is a simple contrast change ("stretch") applied to the whole image. This is analogous to the photographic processing operation in which photographic papers of different contrast grades are used.

Contrast-stretch parameters for a given image are selected on the basis of a histogram of DN values in the image (fig. C-7). This histogram is a graph that shows the number of pixels of each DN value in the image. The lowest DN recorded for a significant number of pixels in the original image is set to 0 for the stretched image. The highest DN recorded for a significant number of pixels in the original image is set to 255 DN for the stretched image. Intermediate DNs are reset in proportion to the newly defined high and low values. Figures C-4 and C-5 illustrate the effect of contrast stretching.
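The stretch described above amounts to a linear rescaling between the chosen low and high DNs. A minimal sketch in modern Python/NumPy (the function name and the sample DNs, chosen to echo the 37-DN range of figure C-7, are mine):

```python
import numpy as np

# Map `low` to 0 and `high` to 255; intermediate DNs are rescaled in
# proportion, and values outside the range are clipped.
def contrast_stretch(image, low, high):
    img = image.astype(float)
    stretched = (img - low) * 255.0 / (high - low)
    return np.clip(stretched, 0, 255).astype(np.uint8)

dns = np.array([[10, 20], [30, 47]], dtype=np.uint8)

# The histogram used to pick `low` and `high`: pixel counts per DN value.
hist = np.bincount(dns.ravel(), minlength=256)

print(contrast_stretch(dns, low=10, high=47))
# [[  0  68]
#  [137 255]]
```

The clipping step matters: any pixels darker than `low` or brighter than `high` saturate to pure black or white in the stretched image.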

The second method of contrast enhancement, called high-pass filtration, changes local contrasts throughout the picture in such a way that small....

 


[158]

Figure C-7. Histograms of DNs in picno 0272S1+000. (a) The calibration image of figure C-3. (b) The raw, untreated version. Note that all DN values, including the DNs in (a) representing black sky, are clustered in a 37-DN range. This is the histogram for the image of figure C-4. (c) The contrast-stretched version of the level 1 image, in which the black-sky frame has been subtracted and the DNs in the raw image with values of 37 or greater have been converted to 255. Intermediate values are modified in proportion. This is the histogram for the image of figure C-5.

 

[159] .....details, bright or dark, are adjusted to the same average gray tone. The effect of this computer technique is analogous to that of electronic dodging devices available on some photographic printers and enlargers, or the photographic technique of "unsharp masking." The small details are emphasized while broader tonal variations in the picture are subdued (figs. C-5 and C-8).

The following procedure is used to make high-pass-filtered versions of digital images (fig. C-9):

(1) The filter size is selected on the basis of the size of image features to be emphasized. This size is defined in terms of pixel dimensions: a filter might be 3 pixels wide by 3 pixels high, or 51 pixels wide by 31 pixels high. The filter dimensions are odd numbers so that there is always a pixel in the exact center of the filter.

(2) The high-pass filter computer program computes the average DN value of all pixels within the filter, beginning at the filter's first location in the upper left corner of the picture. This average is then subtracted from the actual DN of the pixel at the center of the filter box. Because there is about a 50-percent chance that this new filtered....

 



Figure C-8. High-pass-filtered version of figure C-5. A 101-x-101-pixel filter (300 x 300 km on Dione) was used on this image, thus subduing tonal variations covering areas larger than 300 x 300 km. Note also that midtones are uniform across the image, even near the terminator where the unfiltered image of figure C-5 is much darker.


[160]

(a)

56  37  32  10  15  48  40
53  44  38  18  15  30  32
60  55  43  10  14  28  32
75  62  60  61   8  16  25
70  65  54  48  32  18  20
65  71  66  55  46  38  41
51  58  59  54  49  41  43

(b)

 x   x   x   x   x   x   x
 x  46  32  22  21  29   x
 x  54  43  30  22  23   x
 x  60  38  23  26  21   x
 x  65  60  48  36  27   x
 x  62  59  51  42  36   x
 x   x   x   x   x   x   x

(c)

 x    x    x    x    x    x   x
 x  125  133  123  121  128   x
 x  128  127  107  119  132   x
 x  129  149  165  109  122   x
 x  127  121  127  123  118   x
 x  136  134  131  131  129   x
 x    x    x    x    x    x   x

Figure C-9. The arithmetic of high-pass filtration. (a) An array of DNs representing a hypothetical raw image seven pixels wide and seven pixels long. (b) 3-X-3-pixel low-pass filter image of (a). The average DN value of the 3 X 3 block of pixels in the upper left corner of the image was calculated, and the result used as the value of the pixel in the center of the 3-X-3-pixel block. The block was then moved one pixel to the right and the process repeated until the entire image, except for the edges, was filled with the average values for the blocks. Because the pixels at the edge of the image are not at the center of any blocks, they are either lost or treated separately. The effect of this process is to defocus the image. The larger the filter, the greater the defocussing effect. (c) The 3-X-3-pixel high-pass filter image of (a). Each value in the low-pass filter picture (b) is subtracted from its corresponding value in the raw image (a) and added to 127, so that negative values do not result and so that the midtone of the image will have an average value of 127, in the middle of the 255-DN tonal range.

 

.....DN will have a negative value, a DN value representing a medium gray (127) is then added to it. The filter is then moved one pixel to the right, and the process is repeated. When filtered values for the first row of pixels have been computed, the filter box is moved down one row and filtered values for the second row of pixels are computed. The process is repeated until new values have been computed for each pixel in the image. The filters are affected by frame edges and abrupt contrast changes, so special program steps must be devised to deal with these problems. Several high-pass filter algorithms are in use, but they produce similar results.
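The box-filter arithmetic of figure C-9 can be sketched directly; this is a modern Python/NumPy illustration of the procedure above, not the mission code, and edge pixels (which never fall at the center of a filter box) are simply left at mid-gray here rather than treated separately:

```python
import numpy as np

# For each interior pixel: compute the low-pass (box-average) value,
# subtract it from the raw DN at the box center, and add mid-gray (127)
# so that negative results do not occur in typical images.
def high_pass(image, size=3):
    img = image.astype(float)
    half = size // 2
    rows, cols = img.shape
    out = np.full((rows, cols), 127.0)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            box = img[r - half:r + half + 1, c - half:c + half + 1]
            out[r, c] = img[r, c] - box.mean() + 127
    return out

# A uniform surface carries no small-scale detail, so every interior
# pixel comes out at the mid-gray value of 127.
flat = np.full((5, 5), 40)
print(high_pass(flat)[1:-1, 1:-1])
```

Applying this routine to array (a) of figure C-9 reproduces the figure's high-pass values to within its rounding of the box averages.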

Filter sizes for the Voyager pictures in this atlas were selected primarily on the basis of image scale so that landforms would be uniformly emphasized. In other words, rather than using, for example, a 101-X-101-pixel filter for all pictures, a 21-X-21-km filter might be used. To achieve this uniformity, a picture taken when Voyager was 22500 km from a satellite might be treated with a 101-X-101-pixel high-pass filter, whereas a picture taken from five times farther away (112000 km) might have a 21-X-21-pixel filter applied. No single filter size, however, was appropriate for the full range of available image resolutions. It was, therefore, necessary to group the images as to high, medium, and low resolution and select the best filter for each group by trial and error.
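The scaling in the example above follows from the fact that a pixel's footprint on the surface grows linearly with range, so the pixel count of a constant-footprint filter shrinks in proportion. A small sketch using the numbers quoted above (the helper function and its odd-rounding rule are my own):

```python
# Scale a reference filter size (in pixels) to a new spacecraft range,
# keeping the filter's footprint on the surface roughly constant.
def filter_pixels(reference_pixels, reference_range_km, range_km):
    size = round(reference_pixels * reference_range_km / range_km)
    # Filter dimensions must be odd so a pixel sits at the exact center.
    return size if size % 2 == 1 else size + 1

print(filter_pixels(101, 22500, 22500))    # 101
print(filter_pixels(101, 22500, 112000))   # 21
```

At five times the range each pixel covers five times the distance on the ground, so roughly one fifth as many pixels (101/5, rounded to the nearest odd size, or 21) span the same area.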

