The Astronaut Question | Space | Air & Space Magazine
When John Glenn (here looking through a training device) became the first American to orbit Earth, a yaw thruster caused attitude control problems, so he flew the last leg manually. Half a century later, spaceflight still requires both automation and human skill. (NASA JSC)

The Astronaut Question

How long will humans remain better than robots at exploration?


For 50 years, the heart of the American space program has been manned flight. The end of the space shuttle program was a historic moment, one that triggered a wave of commentaries and interviews on the “man in space” theme. Former astronauts like Neil Armstrong and NASA veterans like Christopher Kraft Jr. complained that by shelving the shuttle before a NASA-operated manned spacecraft was ready to service the International Space Station, America had fallen down on the job.


“Man in Space”? How about “Robot in Space”? In spaceflight, robots are on the rise. When it comes to post-shuttle missions carrying humans to orbit, machines do most of the flying, and increasingly, all of it. Presently, the only spacecraft that ISS-bound astronauts can travel in are the Russians’ Soyuz capsules, and those flights are entirely controlled by the digital Kurs system, which employs radar signals for homing. Kurs also guides the current Russian Progress cargo carriers. When Kurs works as planned, which is most of the time, the Soyuz and Progress vehicles are able to zoom all the way from launch at Baikonur, Kazakhstan, to docking with the ISS without any human intervention. Soyuz passengers have only to open the hatch and float out. In case of malfunction, they can use manual emergency control.

The European Space Agency’s Automated Transfer Vehicle, which after three missions also has proven able to fly from Earth to the station docking port without human intervention, uses an automation approach different from the Russians’, relying on relative-GPS signals and videometers. Japan’s H-II Transfer Vehicle and the newly tested Dragon spacecraft by Space Exploration Technologies (SpaceX) are “near-dockers”—they automatically arrive close enough for the ISS’s Canadarm2 to reach out and grab them for berthing.

Besides SpaceX, NASA is funding three other U.S. companies—Boeing, Sierra Nevada, and Blue Origin—for the second stage of the Commercial Crew Development program, which is ultimately supposed to produce an astronaut-carrying craft. Former astronaut Linda Godwin has said the post-shuttle fleet is grouping into “rental cars” and “taxis.” A “rental car” would be a privately built spacecraft that NASA leases from the owner, with agency astronauts taking the controls. A “space taxi” would carry NASA mission specialists or other astro-passengers, with the craft flown by the owner-operator’s crew.

When comparing spacecraft-driving to car-driving, one more analogy is needed: the automated, driverless car. During the 2010 VisLab Intercontinental Autonomous Challenge, four autonomous electric vehicles drove themselves from Italy to China. After more than a quarter-million miles on the road, Google’s Self-Driving Car now has a license to roam Nevada, albeit with an engineer behind the wheel, who, says Google, hardly ever needs to take control.

The next wave of manned orbital craft now being built promises to be equally automated in flight. Like the H-II, the Dragon spacecraft from SpaceX will park within arm’s length of the station. On a routine, no-glitches mission in which Boeing’s CST-100 flies to the ISS, the astronauts will leave all the driving to robots, which will use a navigation system evolved from the one Boeing built for Orbital Express, an unmanned satellite-rendezvous mission flown for the Defense Advanced Research Projects Agency. In that 2007 experiment, one satellite intelligently chased down another, latched on, and exchanged fuel and components, all without human control.

At launch and during ascent, the CST-100’s computer starts with mission plans and an updated knowledge of its own position, obtained via the inertial guidance system. As it closes to within about 20 miles of the ISS’s estimated position, it triggers a bank of visual and infrared cameras, plus a laser-ranging device called LIDAR. Says Michael Burghardt, who leads a Boeing engineering team developing manned spacecraft, “Inertial guidance tells where the CST is, and sensors tell where the station is.” As the range decreases, the cameras provide a stream of increasingly detailed position data, which the CST’s computer compares to a stored three-dimensional digital model of the station.
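The idea Burghardt describes, an inertial dead-reckoning estimate cross-checked against camera and LIDAR measurements, can be sketched as a simple weighted blend. This is a toy illustration only: all names, numbers, and weights below are invented, and real rendezvous software uses far more sophisticated filtering.

```python
# Toy sketch of blending inertial and optical estimates of the
# chaser-to-station position (hypothetical names and values; the real
# CST-100 flight software is vastly more elaborate).

def fuse(inertial_rel_pos, sensed_rel_pos, sensor_weight=0.3):
    """Weighted blend of two relative-position estimates, in kilometers."""
    return [(1 - sensor_weight) * i + sensor_weight * s
            for i, s in zip(inertial_rel_pos, sensed_rel_pos)]

# Inertial dead reckoning drifts over time; LIDAR measures range directly.
inertial = [20.0, 0.5, -0.2]   # km, estimated from gyros and accelerometers
lidar = [19.6, 0.4, -0.1]      # km, from laser ranging and cameras

estimate = fuse(inertial, lidar)
print(estimate)  # the drifted inertial guess, nudged toward the sensor data
```

In a real filter the sensor weight would itself vary with range and measurement quality, which is why the cameras matter more and more as the two craft close.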

All this hands-off flying may seem a startling departure from the pilot-focused days of NASA, but in its later years, the standard shuttle flight profile leaned on machines. The pilot and commander on the flight deck operated thrusters and other controls only during two brief intervals: One was the last half-mile before docking with the ISS, and the other was the last 10 miles of the flight back to Earth. And at those times their actions were guided by sensors and computers, in the manner of an airline pilot who steers her 737 to a landing under the guidance of localizer or GPS instructions. That’s barring mishaps and malfunctions, which we’ll get to.

Why are robotic spacecraft on the rise? The short answer: Compared with the early days of spacefaring, missions to orbit today are highly routinized affairs, and the predictability of the job favors robot brains. In the case of a computer glitch, there are many ways to save the situation and even, if need be, undertake an orbital rescue.

The long answer is more interesting, because it suggests that a new kind of robot-human partnership might shape up when we have the means to bootstrap ourselves out of low Earth orbit—when the missions will change from transporting the known to exploring the unknown.

So look at the “Who takes the wheel?” question along three dimensions, as first set out by Scott Murray of NASA’s Johnson Space Center in Houston. They are Authority, Automation, and Autonomy.

Authority. Without the ability to exert control—using thrusters, gyroscopes, or futuristic gadgets—a spacecraft is little more than a hypersonic cannonball. If external influences like gravity, initial velocity, and air friction govern the flight, the other two considerations, automation and autonomy, are irrelevant. Sputnik was a spacecraft devoid of authority; it had no technology to change its trajectory. Mercury capsules gave the pilot the authority to change how the craft was pointed, as well as the option to trigger the retro-rockets needed for reentry—a good thing, since that part of the automation had problems.

But who, or what, employs the authority? That brings us to the second consideration, Automation: how much control humans exercise from moment to moment, and how much they hand off to machines. During the last minutes of each lunar-landing approach, Apollo astronauts used a good deal of manual control, but if some problem had required that the approach be abandoned, they could have pushed a button to trigger a fully automated abort mode, boosting the lunar module back to rendezvous with the command module.

One part of spaceflight that is virtually always automated, and has been even from the earliest days: ascent from Earth to orbit. “When you’re riding a rocket, the thrust is so great, it’s hard to use manual control,” says Mike Burghardt. “Any small little mistake would easily send you off course.” But other aspects of the journey, such as the reentry of a winged craft or a lifting body like Sierra Nevada’s Dream Chaser, offer more choices about who drives. Exploratory missions venturing to the moon and beyond will pose many more choices, depending on the goals.

“[Design] decisions about the degree of human control—whether to automate something or not—are done on a case-by-case basis,” says John Goodman, a rendezvous and navigation engineer with United Space Alliance in Houston. The stereotype is that “engineers want to computerize everything” while “astronauts always want manual controls,” but it’s not always so. After Apollo 8, the crew suggested that the task of slowly rotating the spacecraft during the journey—necessary to prevent overheating on the sunlit side—be automated, so all three crew members could sleep at the same time. And upon his return, Apollo 11 command module pilot Michael Collins advised NASA to automate the tedious job that had required him to sit at the guidance-system keyboard and enter hundreds of digits, all error-free.

The third factor is Autonomy. How much freedom of action does the decision-maker in space have? If a spacecraft is going to a distant destination and is likely to encounter unforeseeable situations that will need a fast response, mission planners will dial up the onboard autonomy.

Planners have been giving communication-related autonomy to deep-space probes for years, so if the radio link to Earth is lost, the probe knows to lock down certain equipment in “safe” mode while firing up special routines to restore the link. Autonomy on any given spacecraft may be adjustable—called “sliding autonomy” in the trade—depending on tasks and events.
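That lost-link behavior amounts to a small decision rule running on board. The sketch below is a caricature of the idea, with all names and thresholds invented for illustration; real fault-protection software is vastly more elaborate.

```python
# Toy sketch of lost-link "safe mode" logic on a deep-space probe
# (hypothetical names and thresholds, for illustration only).

SAFE_DEADLINE_HOURS = 48  # assumed limit on silence from Earth

def next_mode(hours_since_last_uplink, current_mode="NOMINAL"):
    """Decide whether the probe should drop into safe mode."""
    if hours_since_last_uplink > SAFE_DEADLINE_HOURS:
        # Lock down nonessential equipment and run link-recovery routines.
        return "SAFE"
    return current_mode

print(next_mode(3))   # recent contact: business as usual
print(next_mode(72))  # too long without a command: go safe
```

Sliding autonomy would make the deadline itself adjustable from the ground, tightening or loosening the probe’s freedom of action as the mission phase demands.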

Questions of how much autonomy apply equally to manned craft. In 1966, the Gemini 8 capsule, having undocked from the Agena target vehicle, went into an uncommanded spin. Neil Armstrong and David Scott quickly identified the problem as a thruster malfunction. Each second saw the rotational speed increase, and the control system couldn’t stop it. Adding to the crisis, the craft was out of radio touch, and the astronauts were unable to get advice and telemetry analysis from mission control in Houston.

In true Right Stuff fashion, the crew rose to the emergency. With mere seconds before the spin would have made them black out, the astronauts shut down the malfunctioning maneuvering thrusters and used the reentry control system to stop the rotation. According to mission rules, once the reentry control system had been used, an abort was required, so the pair headed back to Earth.

Story Musgrave, veteran of six shuttle missions and the only person to have flown in all five orbiters, dismisses the idea that manned spaceflight should be an independent venture. “A manned spacecraft isn’t autonomous,” he says. “Astronauts receive their orders from mission control. Pilots are part of a system, and the system can save your ass.” That system is not just hardware; it’s also rules, tests, training, and simulator runs. When things go wrong, astronauts benefit from the bigger system on Earth to help with analyzing and fixing the problem. In a manner of speaking, engineers go into space too: shoulder to shoulder with the astronauts, embodied in the machine they built.

There is a fourth “A”: Awareness, as in “situational awareness.” It’s almost a sixth sense, the ability to detect that something is out of whack before the alarms go off. A trained, aware astronaut can pick up on problems never anticipated on the ground. Rising out of some ancient instinct for survival, it draws on our vision, smell, hearing, and touch, as well as on our training.

“Computers are really good at making a lot of decisions quickly and dealing with anticipated events and acting precisely with those events,” says Ken Bowersox, former shuttle astronaut and now head of SpaceX’s Astronaut Safety and Mission Assurance Department. “Humans act less precisely. The certainty of an absolutely correct outcome can be lower than with computers. But because humans care about the outcome, they tend to make decisions optimized for survival.”

Bowersox recalls a pertinent situation on shuttle mission STS-61, the first time astronauts serviced the Hubble Space Telescope. As the airlock was depressurizing in preparation for a spacewalk, the crew noticed that the Hubble’s solar panels were shaking. “The standard rule is that if something unexpected just happened after you took an action and you’re not sure why, you undo what you just did,” says Bowersox. “So we shut the [depressurization] valve and the panel quit moving. We hadn’t anticipated this: It was a no-momentum, no-thrust valve, but it was discharging under a blanket and acted like a gas jet on the solar array. A computer wouldn’t have known the array was moving at all.” These kinds of glitches, unregistered by the computer at the scene, have wrecked or stunted robotic missions. One such mission was Japan’s Hayabusa: The mothership released its asteroid-hopping mini-lander at the wrong moment, and the little craft drifted off into space.

“Humans, even without complete programming, can deal with the unexpected,” Bowersox says. “Humans are cheap and fast to program, handling things that suddenly come up, where there’s no software.”

The human ability to be situation-aware can grow into something even more remarkable if there’s a machine alongside that is able to sift mountains of data, picking out trends and anomalies and highlighting things that need human attention. Erik Bailey, a technology development task manager at the Jet Propulsion Laboratory in Pasadena, California, sees a role for machine-advised lunar landings. The goal is an astronaut-friendly, reliable, fast-acting set of remote-sensing aids to make lunar and planetary touchdowns safer. NASA has developed such a system, called Autonomous Landing and Hazard Avoidance Technology. The difference between ALHAT and the Instrument Landing System for airplane pilots is that ILS guides a pilot to the threshold of a well-known strip of paved land, marked with beacons. Not so with ALHAT, which has mere moments to scan a looming terrain that lacks any beacons but could be fraught with hazards. Using laser ranging and imaging, ALHAT will circle patches of moonscape that are level and free of boulders and craters; astronauts can select from these or decide to proceed on their own judgment. Finally, using a Doppler laser, ALHAT will help the craft land right side up and vertically, regardless of dust clouds.

“The point is to see a 30-centimeter hazard from one kilometer away [a one-foot hazard from 3,280 feet away],” says Bailey. “The purpose is to give humans the information and confidence to land safely. But it doesn’t take humans out of the loop…. The idea of ALHAT is to identify safe landing circles. Then the astronauts can choose among those, based on priorities like fuel use.”
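The hazard-screening step Bailey describes can be pictured as a pass over an elevation map, flagging any patch whose local relief exceeds the 30-centimeter threshold. The grid, names, and numbers below are invented for illustration; ALHAT’s actual terrain processing is far more sophisticated.

```python
# Rough sketch of ALHAT-style hazard screening (hypothetical names and
# data). Given a coarse elevation map from laser ranging, keep only the
# cells whose neighborhood is smoother than the hazard threshold.

HAZARD_M = 0.30  # the 30-centimeter relief threshold cited in the article

def safe_cells(elevation_grid):
    """Return (row, col) of cells whose 3x3 neighborhood relief < HAZARD_M."""
    rows, cols = len(elevation_grid), len(elevation_grid[0])
    safe = []
    for r in range(rows):
        for c in range(cols):
            neighbors = [elevation_grid[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))]
            if max(neighbors) - min(neighbors) < HAZARD_M:
                safe.append((r, c))
    return safe

# A 4x4 patch of moonscape, in meters: nearly flat except a boulder at (2, 2).
patch = [[0.00, 0.05, 0.02, 0.01],
         [0.03, 0.02, 0.04, 0.03],
         [0.01, 0.02, 0.60, 0.02],
         [0.00, 0.01, 0.02, 0.01]]
print(safe_cells(patch))  # the smooth cells, away from the boulder
```

The surviving cells are the “safe landing circles” offered to the crew, who pick among them based on priorities like remaining fuel.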

Though the six Apollo lunar modules all landed safely without an ALHAT on board, the information provided to the astronauts about the landing sites had gaps in it, and poor visibility from rocket-raised dust made it tough to see obstacles and to stop dangerous drift to the side or rear. After the Apollo 15 landing, the astronauts emerged to find that their lander straddled a crater rim. This canted the lander off horizontal and damaged the descent engine’s exhaust bell. Serious damage or a major lean would have left the pair stranded on the moon, without any hope of rescue.

Are astronauts comfortable with the level of automation provided by ALHAT? “As with Apollo, astronauts can take manual control, or choose to defer to the automated system,” Bailey says. “The astronauts say that they’re in support of this technology.”

“You can automate the piloting really well, such as for landing,” says Ed Gibson, who in 1973-1974 flew on the third mission to Skylab. “That you can do. The real question is the on-site judgment, to sense the situation and make rational judgments. Man’s unique ability is to assimilate data and make decisions, not to be an expensive replacement for robots.” Gibson recalled tedious jobs on Skylab: “We ran all the camera systems. We had a long checklist—every two seconds, push a button. But the best photography we did was with hand-held cameras out the windows…. There’s nothing worse than wasting men on doing robotics stuff.” As Gibson sees it, striking the proper balance between jobs for astronauts and jobs for robots depends on the mission. “If we think humans have a place on Mars and there’s a need to terraform it for colonies to grow, then getting men there is necessary. But when we just have scientific objectives, then we should use unmanned methods.”

Still, some fear that if only for budgetary reasons, humanity is going to be left at the spaceport, watching robots take to the skies and wistfully remembering Chesley Bonestell’s stunningly detailed, long-ago paintings of people hopping gaily across the moon and Mars.

That’s not what the robot-wranglers want. The day I visited Jet Propulsion Laboratory was also the day the last shuttle launched. Work on unmanned probes, rovers, and other automated gear came to a quick halt so staff could watch Atlantis lift off. The Von Kármán Auditorium rang with their cheers.

“Replacing scientists or geologists—that’s not what we’re about,” says JPL Mars mission engineer Ben Bornstein. “Absolutely, we need them. We need humans for exploration. I know for myself and my colleagues, we all feel the same way. We’re all very enthusiastic about space exploration.”

 Or as JPL’s Erik Bailey phrases it, “We need to start projecting ourselves off this rock.” 

James R. Chiles has been writing about history and technology since 1979. He blogs at Disaster-Wise.blogspot.com.

About James R. Chiles

James R. Chiles contributes frequently to Air & Space/Smithsonian. His book on the social history of helicopters and “helicoptrians” is The God Machine: From Boomerangs to Black Hawks.
