
Robots Learn Obstacle Avoidance by Thinking Like Humans

Imagine a Roomba that calculates the path of a moving obstacle and steers clear before a collision. With support from Boeing, MIT is creating a robot that does just that.
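That capability, predicting where a moving obstacle will be and rejecting any plan that gets too close, can be sketched in a few lines. The Python below is a minimal illustration rather than the team’s actual code; the constant-velocity obstacle model, the time step, and the safety radius are all assumptions made for the example.

```python
import numpy as np

def path_is_safe(waypoints, obstacle_pos, obstacle_vel, dt=0.1, safety_radius=0.5):
    """Check a planned path against a moving obstacle.

    Assumes the obstacle holds a constant velocity (an illustrative model)
    and that waypoint i is reached at time (i + 1) * dt.
    """
    for i, wp in enumerate(waypoints):
        t = (i + 1) * dt
        predicted = obstacle_pos + obstacle_vel * t  # obstacle's expected position
        if np.linalg.norm(wp - predicted) < safety_radius:
            return False  # predicted near-miss: reject this plan and replan
    return True

# Example: a straight-line plan that crosses a moving obstacle's track.
plan = [np.array([0.1 * i, 0.0]) for i in range(1, 21)]
if not path_is_safe(plan, obstacle_pos=np.array([2.0, -1.0]),
                    obstacle_vel=np.array([0.0, 0.5])):
    print("Replan: predicted collision with the moving obstacle")
```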

MIT researchers explain their new visualization system that can project a robot’s “thoughts.”

A group of students has recently developed applications using the visualization system. One such project is investigating the possible role of drones in fighting forest fires, both surveying and squelching fires to prevent them from spreading and wiping out vulnerable vegetation.

[su_youtube url="http://youtu.be/utM9zOYXgUY"]

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. The group has flown quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.
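“Teaching” the robots from those flights ultimately comes down to pairing each captured image with a label saying whether fire was present. A toy sketch of that bookkeeping step is below; the `record_frames` helper and the file layout are hypothetical, invented here for illustration, since the article doesn’t describe the team’s software.

```python
import json
from pathlib import Path

def record_frames(frames, out_dir="fire_dataset"):
    """Pair each aerial image with a fire/no-fire label and save the index.

    `frames` is an iterable of (image_path, fire_present) tuples; how images
    come off the quadrotor's camera is deliberately left abstract here.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    labels = [{"image": str(path), "fire": bool(flag)} for path, flag in frames]
    (out / "labels.json").write_text(json.dumps(labels, indent=2))
    return len(labels)

# Two stand-in frames captured over the projected landscape.
n = record_frames([("frames/0001.png", True), ("frames/0002.png", False)])
print(f"Recorded {n} labeled frames")
```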

Future plans also include using the system to test drone performance in package-delivery scenarios. Researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

The researchers have dubbed the system “measurable virtual reality” (MVR), a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”
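The article doesn’t say how the projection itself is driven, but one standard way to put a plan “on the ground” is a planar homography mapping floor coordinates to projector pixels, since both the floor and the projected image are planes. Below is a minimal Python sketch; the matrix `H` and its values are purely illustrative stand-ins for a real calibration.

```python
import numpy as np

# Assumed: H maps floor coordinates (meters) to projector pixels.
# A real system would calibrate this once; these values are made up.
H = np.array([[240.0,   0.0, 640.0],
              [  0.0, 240.0, 360.0],
              [  0.0,   0.0,   1.0]])

def floor_to_pixels(points_m):
    """Map 2-D floor points (N x 2, meters) to projector pixel coordinates."""
    pts = np.hstack([points_m, np.ones((len(points_m), 1))])  # homogeneous form
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]  # divide out the homogeneous scale

# The planner's waypoints on the lab floor, in meters.
plan = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.6], [1.5, 1.1]])
pixels = floor_to_pixels(plan)
# `pixels` can now be drawn as a polyline in the projector's framebuffer,
# so observers see the robot's intended route on the floor beneath it.
```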

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor and see something go wrong in its mind, we can terminate the code before it hits a wall or breaks.”
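That kind of early termination can be framed as a watchdog that checks the vehicle’s internal plan against known boundaries before anything flies. A minimal sketch follows, with the wall position, safety margin, and trajectory all illustrative assumptions:

```python
import sys
import numpy as np

WALL_X = 3.0         # x-coordinate of the lab wall, in meters (illustrative)
SAFETY_MARGIN = 0.3  # abort if the plan comes this close to the wall

def watchdog(planned_trajectory):
    """Abort before flight if any planned point would breach the margin."""
    for point in planned_trajectory:
        if point[0] > WALL_X - SAFETY_MARGIN:
            print(f"Plan reaches x={point[0]:.2f} m, too close to the wall; aborting")
            sys.exit(1)

# A faulty plan that drifts toward the wall is caught here,
# before the quadrotor ever executes it.
watchdog(np.array([[1.0, 0.0], [2.0, 0.0], [2.9, 0.0]]))
```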