The Astronaut Question

How long will humans remain better than robots at exploration?

When John Glenn (here looking through a training device) became the first American to orbit Earth, a yaw thruster caused attitude control problems, so he flew the last leg manually. Half a century later, spaceflight still requires both automation and human skill. (NASA JSC)
Air & Space Magazine

For 50 years, the heart of the American space program has been manned flight. The end of the space shuttle program was a historic moment, and it triggered a wave of commentaries and interviews on the "man in space" theme. Former astronauts like Neil Armstrong and NASA veterans like Christopher Kraft Jr. complained that by shelving the shuttle before a NASA-operated manned spacecraft was on hand to service the International Space Station, America had fallen down on the job.


“Man in Space”? How about “Robot in Space”? In spaceflight, robots are on the rise. When it comes to post-shuttle missions carrying humans to orbit, machines do most of the flying, and increasingly, all of it. At present, the only spacecraft that ISS-bound astronauts can travel in are the Russians’ Soyuz capsules, and those flights are entirely controlled by the digital Kurs system, which employs radar signals for homing. Kurs also guides the current Russian Progress cargo carriers. When Kurs works as planned, which is most of the time, the Soyuz and Progress vehicles can fly all the way from launch at Baikonur, Kazakhstan, to docking with the ISS without any human intervention. Soyuz passengers have only to open the hatch and float out. In case of malfunction, they can take manual emergency control.

The European Space Agency’s Automated Transfer Vehicle, which after three missions also has proven able to fly from Earth to the station docking port without human intervention, uses an automation approach different from the Russians’, relying on relative-GPS signals and videometers. Japan’s H-II Transfer Vehicle and the newly tested Dragon spacecraft by Space Exploration Technologies (SpaceX) are “near-dockers”: they automatically arrive close enough for the ISS’s Canadarm2 to reach out and grab them for berthing.

Besides SpaceX, NASA is funding three U.S. companies (Boeing, Sierra Nevada, and Blue Origin) in the second stage of the Commercial Crew Development program, which is ultimately supposed to produce an astronaut-carrying craft. Former astronaut Linda Godwin has said the post-shuttle fleet is grouping into “rental cars” and “taxis.” A “rental car” would be a privately built spacecraft that NASA leases from the owner, with agency astronauts taking the controls. A space taxi would carry NASA mission specialists or other astro-passengers, with the craft flown by the owner-operator’s crew members.

When comparing spacecraft-driving to car-driving, one more analogy is needed: the automated, driverless car. During the 2010 VisLab Intercontinental Autonomous Challenge, four autonomous electric vehicles drove themselves from Italy to China. After more than a quarter-million miles on the road, Google’s Self-Driving Car now has a license to roam Nevada, albeit with an engineer behind the wheel, who, says Google, hardly ever needs to take control.

The next wave of manned orbital craft now being built promises to be equally automated in flight. Like the H-II, the Dragon spacecraft from SpaceX will park within arm’s length of the station. On a routine, no-glitches mission in which Boeing’s CST-100 flies to the ISS, the astronauts will leave all the driving to robots, which will use a navigation system evolved from Orbital Express, Boeing’s unmanned satellite-rendezvous mission for the Defense Advanced Research Projects Agency. In that 2007 experiment, one satellite autonomously chased down another, latched on, and exchanged fuel and components, all without human control.

At launch and during ascent, the CST-100’s computer starts with mission plans and an updated knowledge of its own position, obtained from the inertial guidance system. As the craft closes to within about 20 miles of the ISS’s estimated position, it triggers a bank of visual and infrared cameras, plus a laser-ranging device called LIDAR. Says Michael Burghardt, who leads a Boeing engineering team developing manned spacecraft, “Inertial guidance tells where the CST is, and sensors tell where the station is.” As the range decreases, the cameras provide a stream of increasingly detailed data about the two objects’ relative positions, as the CST’s computer compares the camera images to a stored three-dimensional digital model of the station.
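The blending Burghardt describes, an inertial prediction corrected by camera and LIDAR measurements, can be illustrated with a simple one-dimensional Kalman-style filter on range to the station. This is a sketch only: the function name, noise values, and closing rate below are invented for illustration, and a real rendezvous system estimates a full relative-motion state, not a single range number.

```python
import random

def kalman_range(ranges_lidar, r0, closing_rate, dt, q=0.5, lidar_var=25.0):
    """Track range to the station with a 1-D Kalman filter.

    Illustrative only: q is process noise, lidar_var the sensor variance (m^2).
    """
    r_est, p = r0, 100.0              # initial range estimate and its variance
    estimates = []
    for z in ranges_lidar:
        # Predict: inertial guidance says we closed by closing_rate * dt
        r_est -= closing_rate * dt
        p += q
        # Update: blend in the LIDAR measurement, weighted by the Kalman gain
        k = p / (p + lidar_var)
        r_est += k * (z - r_est)
        p *= (1 - k)
        estimates.append(r_est)
    return estimates

# Simulated approach: true range falls from 2000 m at 2 m/s; LIDAR adds noise
random.seed(1)
true_r = [2000 - 2.0 * t for t in range(100)]
meas = [r + random.gauss(0, 5) for r in true_r]
est = kalman_range(meas, r0=2000.0, closing_rate=2.0, dt=1.0)
err = sum(abs(e, ) if False else abs(e - r) for e, r in zip(est, true_r)) / len(est)
print(f"mean filter error: {err:.1f} m")
```

Run against the simulated approach, the filtered estimate tracks the true range more closely than the raw LIDAR readings do, which is the whole point of fusing the two sources.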

All this hands-off flying may seem a startling departure from the pilot-focused days of NASA, but in its later years, the standard shuttle flight profile leaned on machines. The pilot and commander on the flight deck operated thrusters and other controls only during two brief intervals: the last half-mile of docking with the ISS, and the last 10 miles of the flight back to Earth. Even then their actions were guided by sensors and computers, in the manner of an airline pilot who steers her 737 to a landing while following the instructions of a localizer or GPS approach. That’s barring mishaps and malfunctions, which we’ll get to.

Why are robotic spacecraft on the rise? The short answer: compared with the early days of spacefaring, missions to orbit today are highly routinized affairs, and the predictability of the job favors robot brains. If a computer glitches, there are many ways to save the situation and even, if need be, mount an orbital rescue.

The long answer is more interesting, because it suggests that a new kind of robot-human partnership might shape up when we have the means to bootstrap ourselves out of low Earth orbit—when the missions will change from transporting the known to exploring the unknown.
