A research group at Carnegie Mellon University’s Robotics Institute has developed a suite of robotic systems and planners that enable robots to explore unknown, treacherous environments more quickly and create more accurate and detailed maps. The Autonomous Exploration Research Team’s systems allow robots to explore completely autonomously, finding their way and creating a map without human intervention.
The CMU research team combined a 3D scanning lidar sensor, forward-looking camera, and inertial measurement unit sensors with an exploration algorithm to enable the robot to determine where it is now, where it has been, and where it should go next. These sensors can be attached to nearly any robotic platform. Right now, CMU’s group is using a motorized wheelchair and drones for much of its testing.
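The article doesn’t detail the exploration algorithm itself, but the idea of a robot deciding “where it should go next” is commonly illustrated with frontier-based exploration: free cells on the map that border unexplored space become candidate goals. Here is a minimal, hypothetical sketch of that idea (not the CMU team’s actual planner):

```python
# Hypothetical frontier-based goal selection on an occupancy grid.
# This is an illustrative sketch, NOT the CMU team's algorithm.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid):
    """Return (row, col) of free cells that border unknown cells."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr][nc] == UNKNOWN for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

def next_goal(grid, robot):
    """Pick the frontier closest (Manhattan distance) to the robot."""
    frontiers = find_frontiers(grid)
    if not frontiers:
        return None  # no frontiers left: the map is complete
    return min(frontiers,
               key=lambda f: abs(f[0] - robot[0]) + abs(f[1] - robot[1]))

grid = [
    [FREE, FREE,     UNKNOWN],
    [FREE, OCCUPIED, UNKNOWN],
    [FREE, FREE,     UNKNOWN],
]
print(next_goal(grid, (0, 0)))  # → (0, 1), the nearest frontier cell
```

Once all frontiers are consumed, every reachable cell has been observed, which is what lets the robot declare exploration finished without a human in the loop.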
“You can set it in any environment, like a department store or a residential building after a disaster, and off it goes,” Ji Zhang, a systems scientist at the Robotics Institute, said in a release. “It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map. You don’t even have to step into the space. Just let the robots explore and map the environment.”
The system allows robots to explore in three different modes. In the first mode, a person can control the robot’s movement and direction while autonomous systems keep it from crashing into walls, ceilings, or other objects. In mode two, a person can select a point on a map and the robot will navigate to that point. In the final mode, the robot sets off on its own and investigates the entire space to create a map.
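The three modes above amount to sliding the human along an autonomy spectrum. A hypothetical sketch of that mode dispatch (illustrative names and behavior, not the team’s actual control code):

```python
# Hypothetical sketch of the three operating modes described above;
# names and behaviors are illustrative, not the team's actual code.
from enum import Enum, auto

class Mode(Enum):
    ASSISTED_TELEOP = auto()  # human drives; autonomy only prevents collisions
    WAYPOINT = auto()         # human picks a map point; robot navigates to it
    FULL_AUTONOMY = auto()    # robot explores and maps the space on its own

def step(mode, human_cmd=None, goal=None):
    """Return a description of what the robot does this control cycle."""
    if mode is Mode.ASSISTED_TELEOP:
        return f"follow human command {human_cmd!r}, overriding it near obstacles"
    if mode is Mode.WAYPOINT:
        return f"plan and follow a collision-free path to {goal}"
    return "pick the next unexplored region and go map it"

print(step(Mode.WAYPOINT, goal=(12.0, 3.5)))
```

The design point is that all three modes share the same mapping and collision-avoidance stack; only the source of the goal changes.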
CMU’s researchers have been working on exploration systems like this one for over three years. So far, the system has explored and mapped several underground mines, a parking garage, the Cohon University Center, and several other indoor and outdoor locations on the CMU campus.
The system is more efficient than previous approaches to robotic navigation and mapping. It can create more complete maps while cutting run time roughly in half. It’s also flexible enough to work in low-light, treacherous conditions where communication is spotty, such as caves, tunnels, and abandoned structures.
The group’s most recent work, “Representation Granularity Enables Time-Efficient Autonomous Exploration in Large, Complex Worlds,” was recently published online in Science Robotics.