Taking Pluto’s Portrait

Even the mighty Hubble Space Telescope has to strain to see this tiny, distant planet.

Images of Pluto taken with Hubble’s Faint Object Camera in June and July 1994 were enhanced and processed to make a global map of the planet at different longitudes. The tile pattern is an artifact of the processing. (Alan Stern (Southwest Research Institute), Marc Buie (Lowell Observatory), NASA and ESA)


Other teams had already used the Hubble to observe Pluto, including a German group that photographed the planet after the telescope was repaired in 1993. But Stern hoped—somewhat audaciously—to extract enough information to map its surface. By taking photographs at four different longitudes as Pluto slowly turned on its axis over the course of two six-day rotations, he would gain almost full global coverage. The proposal also called for recording images in two types of electromagnetic radiation: visible light, at a wavelength of 410 nanometers, and ultraviolet light, at a wavelength of 280 nanometers. The shorter-wavelength, higher-frequency ultraviolet light reflected by Pluto would provide an image with finer resolution and more information about the surface. (To understand the relationship between wavelength and resolution, think of a pair of calipers: A pair with a finer scale of markings can measure more detail than a pair with coarser markings.) Measuring in the UV was a clever way to get twice the resolution from the telescope’s Faint Object Camera (FOC), which would better differentiate variations in Pluto’s icy surface. And by comparing the UV and visible-light maps, Stern and Buie would have a powerful tool for modeling the composition of the surface.
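To put rough numbers behind the calipers analogy: a telescope’s sharpest possible view scales with wavelength divided by mirror diameter (the Rayleigh criterion, a standard optics rule rather than anything quoted in the proposal). A quick sketch, assuming Hubble’s 2.4-meter primary mirror:

```python
# Back-of-the-envelope sketch: diffraction-limited angular resolution,
# theta ~ 1.22 * wavelength / aperture (Rayleigh criterion).
RAD_TO_ARCSEC = 206265            # arcseconds per radian
HUBBLE_APERTURE_M = 2.4           # Hubble's primary mirror diameter, meters

def diffraction_limit_arcsec(wavelength_nm, aperture_m=HUBBLE_APERTURE_M):
    """Smallest resolvable angle, in arcseconds, at a given wavelength."""
    return 1.22 * (wavelength_nm * 1e-9) / aperture_m * RAD_TO_ARCSEC

for nm in (410, 280):
    print(f"{nm} nm -> ~{diffraction_limit_arcsec(nm):.3f} arcsec")
# 410 nm -> ~0.043 arcsec; 280 nm -> ~0.029 arcsec: the UV images resolve finer detail.
```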

It was this proposal that the STScI accepted back in 1988, shortly before the telescope was launched. Because of the subsequent problem with the Hubble’s mirror, though, all observations requiring high resolution—the Pluto pictures among them—had to be put off. Worse yet, after the telescope was repaired, anyone who wanted to use it had to enter into a whole new competition for viewing time. To improve his chances of being accepted, Stern dropped his original plan to use both the telescope’s Wide Field Planetary Camera and the FOC, settling for just the FOC. His proposal was selected again, and in the summer of 1994, Observation #5330, “High Resolution Mapping of Pluto’s Albedo Distribution,” finally came off as planned.

The telescope took the pictures on four days in late June and early July—a set of ultraviolet and visible observations on each day, three exposures for each observation, for a total of 24 pictures. STScI then did a standard computerized “pipeline processing” (which includes factoring out the handful of known dead spots in Hubble’s field of view), placed the data in the permanent archive, and shipped off copies to Stern. One of the tapes got lost in the mail (after its images had traveled across the solar system!), so the STScI had to send another copy.
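The “dead spot” correction is the kind of step that is easy to picture in code. A minimal sketch of the idea—not the actual STScI pipeline, which is far more elaborate—is to flag the known bad detector pixels and fill each one in from its neighbors:

```python
# Illustrative only -- not the STScI pipeline. Replace each known dead pixel
# with the median of the good pixels immediately around it.
import numpy as np

def patch_dead_pixels(image, bad_pixel_mask):
    """Return a copy of `image` with flagged pixels filled from their neighbors."""
    patched = image.copy()
    for r, c in zip(*np.where(bad_pixel_mask)):
        r0, c0 = max(r - 1, 0), max(c - 1, 0)
        window = image[r0:r + 2, c0:c + 2]
        good = window[~bad_pixel_mask[r0:r + 2, c0:c + 2]]
        if good.size:
            patched[r, c] = np.median(good)
    return patched
```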

Once the image files were loaded onto computers at Southwest Research, the work could begin in earnest. The raw data looked promising—it was obvious that some squares in the checkerboard-like images were bright and some were dark. But it was far too early to conclude that these were real features.

Stern and Brian Flynn, a postdoctoral scientist working with him at Southwest Research, first ran a few simple reality checks: making sure the same features showed up in different exposures taken on the same day, and checking whether a spot that appeared on one day had moved on the next day’s image, when the planet had rotated a quarter-turn. If it had moved as expected, Stern and Flynn could be more confident that they were seeing something real.
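One way to picture the first kind of check (our own illustration; the article doesn’t describe the team’s actual code): if the brightest spot falls on the same pixel in each of a day’s repeated exposures, it is much less likely to be a noise artifact.

```python
# Hypothetical sanity check, for illustration: does the bright spot stay put
# across repeated exposures taken on the same day?
import numpy as np

def brightest_pixel(image):
    """Row and column of the brightest pixel in an image."""
    return np.unravel_index(np.argmax(image), image.shape)

def spot_is_repeatable(exposures, tolerance_px=1):
    """True if the bright spot lands within `tolerance_px` of the same place in every exposure."""
    positions = np.array([brightest_pixel(img) for img in exposures])
    return bool(np.all(np.abs(positions - positions[0]) <= tolerance_px))
```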

Still, the best anyone could hope for at this resolution was to see gross provinces of light and dark, which was precisely the point of the experiment. As Buie would explain two years later at the NASA press conference, “You can’t do geology in these images,” meaning you could forget about distinguishing mountain ranges from smooth plains.

The albedo variations did tell you something, though. The light areas are thought to be regions where fresh nitrogen “snow” has fallen out of the planet’s thin atmosphere. The darker areas are what passes for bare ground on Pluto—methane ice darkened by the effects of the scant sunlight that reaches the planet. Even though these pictures ultimately revealed Pluto to be the most “contrasty” object in the solar system (with the exception of Earth), the variation in brightness only amounts to the difference between clean Colorado snow and dirty Boston snow.

Before nailing down where these provinces were on a map, though, there was still a lot of hard image processing ahead. Stern compares it to “twiddling knobs”—adjusting the picture on a TV set, but with a dozen or more variables to tune exactly right. With each step lurked the prospect of making a mistake. Stern still remembers with a “sinking feeling” the time he published a result that turned out to be dead wrong. He had made an observation using a brand-new telescope, and the processing software the observatory sent along with his data had a bug in it. That experience taught him that you can’t be too cautious. “Take it from a guy who’s been wrong,” he says. “You have to be wrong once or twice to appreciate that.”

For an object larger than Pluto, the data processing would have been fairly straightforward, and it wouldn’t have mattered much if a few pixels here and there were out of alignment. But in this case, a few pixels were all they had, and the computer processing was everything. “The data have so much subtlety to them,” says Stern, “that you really have to get in there and have the bits almost talking to you to really be sure of what you’re seeing.”
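When the whole target spans only a handful of pixels, even fractional-pixel misalignment between frames matters. Here is a sketch of what sub-pixel registration can look like with generic off-the-shelf tools (our illustration, not anything the team describes using):

```python
# Illustration only: estimate a sub-pixel offset between two frames and
# shift the second frame into alignment with the first.
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def align_to_reference(reference, moving, upsample_factor=20):
    """Shift `moving` so it lines up with `reference`, to sub-pixel precision."""
    shift, _, _ = phase_cross_correlation(reference, moving,
                                          upsample_factor=upsample_factor)
    return ndimage.shift(moving, shift)
```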

The first task was to sharpen each of the images as much as possible, using a computerized process known as deconvolution. But it didn’t go very well. The technique involves applying a mathematical formula characterizing a particular instrument’s known degree of blurring (every telescope has some) to reconstruct what a perfect image would have looked like. Deconvolution, in effect, corrals the blurred light from an object back into a nice, tight circle.
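The article doesn’t name the exact algorithm, but Richardson-Lucy deconvolution is one standard way the idea is put into practice: blur a scene with the instrument’s known point-spread function, then iteratively reconstruct the unblurred view. A toy sketch:

```python
# Toy example of deconvolution with the Richardson-Lucy algorithm
# (one standard method; the article doesn't say which the team used).
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# A small Gaussian stands in for the telescope's known blurring
# (its point-spread function).
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

scene = np.zeros((64, 64))
scene[28:36, 20:44] = 1.0                     # a bright "province" on a dark disk
blurred = fftconvolve(scene, psf, mode="same")

# Corral the smeared light back toward its original footprint.
restored = richardson_lucy(blurred, psf)
```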
