The Quadcopter’s Teacher

Jonathan Aitken wants robots to navigate—and cooperate—on their own.

Jonathan Aitken with one of his students.

Jonathan Aitken’s six-member team at the universities of Sheffield, Liverpool, and Surrey has developed intelligent flying robots, called quadcopters, that can learn about their surroundings using a forward-facing camera. By overlaying different frames from the camera and selecting key reference points within the scene, the robot builds up a 3D map of the world around it. Other sensors pick up barometric and ultrasonic information, which gives the robot additional clues about its environment. All the data is fed into autopilot software, allowing the robot to navigate safely and fly to specific items in the room. Aitken spoke with Executive Editor Paul Hoversten in July.
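The "key reference point" step Aitken describes is, at its core, a matter of finding the same distinctive image features in successive camera frames. The team's own software isn't published here, but a minimal sketch of that step might look like the following, assuming OpenCV is available; the function name and parameters are illustrative, not the project's actual code.

```python
# Minimal sketch of the "key reference point" step: detect and match
# distinctive features between two consecutive camera frames.
# Illustrative only -- not the team's actual pipeline.
import cv2

def match_reference_points(frame_a, frame_b, max_matches=100):
    """Find corresponding feature points between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=500)           # corner-like keypoint detector
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

    # Return pixel coordinates of the strongest correspondences.
    pts_a = [kp_a[m.queryIdx].pt for m in matches[:max_matches]]
    pts_b = [kp_b[m.trainIdx].pt for m in matches[:max_matches]]
    return pts_a, pts_b
```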

Air & Space: What reference points do these quadcopters use in determining how to safely navigate their flight?

Aitken: They use points of interest in the rooms. This can be anything and everything within the space, nothing specific, just distinctive features in the room. Often these are corners of objects or changes in surface. By tracking these points, and knowing how the quadcopter has moved itself, these points can be localized in space. We identify individual objects that we then place within this [computerized] map, giving us the direct location of these specific objects. Once we have this information, both about the room and the locations of objects, we can perform closer inspection of these items.
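Localizing a tracked point in space from two views, given how the camera has moved between them, is a standard triangulation problem. As a hedged illustration of that idea (again assuming OpenCV, with invented names such as localize_points and poses supplied from elsewhere, e.g. the quadcopter's own motion estimate), the step could be sketched like this:

```python
# Illustrative sketch (not the project's code): given matched image points
# from two frames and the camera poses at which they were taken, recover
# the 3D position of each reference point by triangulation.
import numpy as np
import cv2

def localize_points(K, pose_a, pose_b, pts_a, pts_b):
    """K: 3x3 camera intrinsics; pose_a/pose_b: 3x4 [R|t] camera poses;
    pts_a/pts_b: matched pixel coordinates, shape (N, 2).
    Returns an Nx3 array of points in the world frame."""
    P_a = K @ pose_a                       # projection matrix for frame A
    P_b = K @ pose_b                       # projection matrix for frame B

    # OpenCV expects 2xN arrays of image points.
    pts_a = np.asarray(pts_a, dtype=float).T
    pts_b = np.asarray(pts_b, dtype=float).T

    homog = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)   # 4xN homogeneous
    return (homog[:3] / homog[3]).T        # convert to Nx3 Euclidean points
```

Repeating this over many frames, with the quadcopter's motion known, gradually fills out the 3D map of the room that Aitken describes.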

How long does it take for the robot to become familiar with its airspace?

That is very dependent on how good a representation you would wish to achieve. The longer the time available, the better the map [the robot can develop]. In the case of the video, this was produced in a single flight of between four and five minutes.

In early trials, what were some of the missteps that needed to be fixed? Were there, for example, collisions?

Of course! Whenever we develop anything, we have to learn from our mistakes, and whenever something is implemented on real hardware there are always hurdles to overcome that are different from simulation. So we've seen the platform become unstable when it has received control commands outside the bounds of the room, but because we have a human operator we can enact a safe mission abort and land without too much drama. I think our experience shows the importance of developing on real hardware and robots where possible, and of treating real hardware as the ultimate end platform, because there are different concerns when operating equipment in the real world compared with simulations. For example, delays in processing can build up and mean that decisions are taken too late.

But now they’ve figured out how to fly past each other without colliding.

The learning process the robots use is similar to when two people meet in the street and need to get round each other. Each will move to its left or right until the two coordinate and avoid a collision.
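To make that pedestrian analogy concrete, here is a toy sketch of the symmetry-breaking idea, not the team's actual control code: two head-on agents each sidestep at random until their paths no longer conflict. The names and thresholds (sidestep, SAFE_GAP) are invented for illustration.

```python
# Toy illustration of the "two people in the street" idea: two head-on
# agents each nudge left or right until their lateral paths are clear.
import random

SAFE_GAP = 1.0      # assumed minimum lateral separation, in metres

def sidestep(own_offset, other_offset):
    """Return an updated lateral offset for one agent."""
    if abs(own_offset - other_offset) >= SAFE_GAP:
        return own_offset                  # already coordinated; keep course
    # Still on a collision course: nudge left or right at random,
    # which eventually breaks the symmetry, just as two pedestrians do.
    return own_offset + random.choice([-0.25, 0.25])

a, b = 0.0, 0.0                            # both start directly head-on
while abs(a - b) < SAFE_GAP:
    a, b = sidestep(a, b), sidestep(b, a)
print(f"coordinated: offsets {a:+.2f} m and {b:+.2f} m")
```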

When did this research begin?

This research is funded by the Engineering and Physical Sciences Research Council within the U.K., under a project named Reconfigurable Autonomy. We are currently in the second year of research [with funding through May 2016]. We're aiming to develop an architecture for autonomous systems that is capable of coping with failure and adapting to circumstances. This is especially important for deep space missions or those that take place in extreme or hazardous environments, specifically when we're unable to get close to the robots to enact control and take care of problems. The robots are on their own and need to be able to reconfigure themselves to complete their assigned tasks.

How much human oversight is needed during flight?

The quadcopters have a human overseeing them at present, but that person takes no action to carry out the mission. Primarily, the human is there as a failsafe to shut the system down in case of problems.

What are the applications for this technology? How could these intelligent robots help us?

This technology could be incredibly useful in a number of scenarios. Typically, these will revolve around instances where humans physically cannot be in close proximity to control the hardware, for example in deep space missions where communication lags prevent real-time control. Or consider the scenario following a natural disaster where we wish to conduct a search-and-rescue mission: we could deploy a large number of vehicles to conduct the search without needing someone to control each one. Instead, a single manager could take care of a fleet, making higher-level decisions and using their time more efficiently.

What is the next step in the research?

We wish to investigate more routes to reconfigurability on the platform, answering the question of what can go wrong and, when it does, what we can do with the architecture on the platform to ensure that it can still progress safely with its assigned mission.
