Q

When the job demands ingenuity, NASA engineers whip up gadgets worthy of James Bond.

Air & Space Magazine

(Continued from page 3)

Vranish's wrench incorporates something known as a 3-D sprag, which permits the wrench to travel in only one direction through a wedging action. "A 2-D sprag is basically a roller which locks in one direction and slips in the other," he explains. "A 3-D sprag is like a disc with wedges and contacts on the surface of those wedges. It locks up better, is more compact, and can withstand more force. It is a fundamentally new mechanical component."
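
For a feel for the one-way behavior Vranish describes, here is a minimal, purely illustrative sketch of a component that transmits torque in one direction and slips in the other. It is a toy model of the locking behavior only; the function name and numbers are invented and say nothing about the actual wedge geometry of the 3-D sprag.

```python
# Toy model of one-way locking: torque applied in the drive direction is
# transmitted; torque in the reverse direction simply slips. (Illustrative only;
# the real 3-D sprag achieves this mechanically, through wedging.)

def transmitted_torque(applied_torque: float) -> float:
    """Pass torque through in the drive (positive) direction; slip otherwise."""
    return applied_torque if applied_torque > 0 else 0.0

print(transmitted_torque(15.0))   # 15.0 -> the bolt turns
print(transmitted_torque(-15.0))  #  0.0 -> the wrench swings back freely
```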

This technology, which also works better than conventional ratchet wrenches in tight spots on Earth, makes ratchetless wrenches possible. NASA is negotiating with several well-known companies that want to market the wrench for industrial and consumer applications. So this is one gadget that James Bond will be able to pick up at Home Depot.


Robonaut

Robots are slowly beginning to look the way we expect them to: that is, like us. Robonaut, being developed by engineers at NASA's Johnson Space Center in Houston, is heading seriously in that direction, and for some very good reasons. "All the robotic devices we've flown on the space shuttle so far have been very large-scale manipulator systems and require specialized [fixtures and attachments] to be utilized," says Chris Culbert, head of the robotic technologies branch at JSC. "But most of NASA's vehicles are designed around humans for maintenance. So we set out to design a human-form robot." This, he says, saves NASA time and money, since engineers can eliminate robot-specific attachments and astronauts can be assisted by robots with a greater range of access and activity.

The current prototype has two arms, two hands, a torso, and a head. It is controlled by a human wearing a virtual reality hood-and-glove system, though its designers hope to eventually give it more autonomy. Their biggest challenge to date, however, has been equaling the engineering of the human hand and simply getting the robot to do what a human wearing a spacesuit glove can do. "That was our first real breakthrough," Culbert says. "We were shrinking existing technology to create a human-sized hand that has all the same movement and strength."
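
The article does not spell out how the hood-and-glove rig drives the robot, but a generic master-slave teleoperation step gives a sense of the idea: measure the operator's joint angles, then clamp and forward them as commands to the robot hand. Everything below, from the joint names to the 90-degree limit, is an illustrative assumption, not Robonaut's actual control software.

```python
# A generic glove-to-robot teleoperation step (illustrative assumptions throughout).

FINGER_JOINTS = ["thumb", "index", "middle", "ring", "pinky"]
JOINT_LIMIT_DEG = 90.0  # assumed safe range for the robot's finger joints

def glove_to_robot(glove_angles_deg: dict) -> dict:
    """Map measured glove joint angles to clamped robot joint commands."""
    return {
        joint: max(0.0, min(JOINT_LIMIT_DEG, glove_angles_deg.get(joint, 0.0)))
        for joint in FINGER_JOINTS
    }

# One cycle of the loop: read the operator's glove, command the hand.
print(glove_to_robot({"index": 45.0, "thumb": 120.0}))
# {'thumb': 90.0, 'index': 45.0, 'middle': 0.0, 'ring': 0.0, 'pinky': 0.0}
```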

Robonaut will most likely be placed into service aboard the ISS or the shuttle, where its pogo-like leg can be attached anywhere on the vehicle (rather than at a single point, as with robotic arms) to conduct repairs, install equipment, and assist with experiments while being controlled by an astronaut inside. For planetary exploration, Robonaut can be mounted on a wheeled rover, like a centaur. Looking even farther into the future, Culbert expects to one day complete the humanoid form. "We're facing some interesting balance and strength challenges," he says. "We as humans have a slew of systems that allow us to lean over and pick something up, but Robonaut will need very advanced software and control systems to do that. Maybe someday, though."


Object Recognition Processor

Like the Johnson Space Center engineers who modeled Robonaut after the human form, computer scientists at the Jet Propulsion Laboratory turned to the human model for their latest effort, though they looked more inward than out. Their Three-Dimensional Artificial Neural Network (3DANN) processor models the neural networks of the human brain to allow machines to identify objects practically as well as people do. "This little camera system can very quickly zoom onto specific features of objects to recognize them," says JPL computer scientist Anil Thakoor. "With the human eye, for example, you will not really recognize a car by its actual measurements, but your brain will recognize the concept of a car. That is what this cube is good at: recognizing objects by looking at their inherent qualities."
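
To make Thakoor's point concrete, here is a tiny feed-forward network that scores a feature vector against two classes. It is a classroom-scale illustration of the general technique (recognition from learned features rather than exact measurements), not the 3DANN architecture; the feature names and the random stand-in weights are invented.

```python
# Illustrative only: a minimal feed-forward classifier in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Pretend feature vector extracted from an image: [aspect ratio, edge density, symmetry]
features = np.array([0.45, 0.8, 0.7])

# One hidden layer with random stand-in weights; a trained network would have
# learned these from labeled examples of "car" vs. "not car".
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

hidden = np.tanh(W1 @ features + b1)           # nonlinear combination of features
scores = W2 @ hidden + b2                      # one score per class
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over {car, not car}

print(dict(zip(["car", "not car"], probs.round(3))))
```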

The sugar-cube-size processor can execute one trillion operations per second while consuming only 8 watts of power. This performance is several orders of magnitude greater than the capabilities of state-of-the-art desktop computers, which deliver about one billion operations per second while consuming 200 watts of power.
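
A quick back-of-the-envelope check, using only the figures quoted above: the cube delivers about a thousand times the raw throughput and roughly 25,000 times the throughput per watt.

```python
# Back-of-the-envelope arithmetic on the quoted figures (illustrative only).
dann_ops, dann_watts = 1e12, 8         # 3DANN: one trillion ops/s at 8 W
desktop_ops, desktop_watts = 1e9, 200  # desktop of the day: one billion ops/s at 200 W

print(dann_ops / desktop_ops)                                   # 1000.0x raw throughput
print((dann_ops / dann_watts) / (desktop_ops / desktop_watts))  # 25000.0x ops per watt
```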

This means that weight- and power-sensitive spacecraft will be able to navigate visually and identify landing sites and obstacles on their own, without wasting time (and money) going back and forth with NASA controllers on Earth. Planetary robots and rovers, likewise, will be able to select scientifically interesting features to study on their own. Back home, the camera has already proven itself able to identify a cruise missile in various orientations, scales, and lighting conditions, and amid a high level of background clutter.
