A geometric approach to mobile robot navigation and obstacle avoidance may be sufficient for environments such as warehouses, but it might not be enough for dynamic settings outdoors. Researchers at the University of California, Berkeley, said they have developed BADGR, “an end-to-end, learning-based mobile robot navigation system that can be trained with self-supervised, off-policy data gathered in real-world environments, without any simulation or human supervision.”
Field robots must be able to find their way through tall grass, across bumpy ground, or in areas without the lanes typical of indoor facilities or roads. The conventional approach is to use computer vision and train models based on semantic labeling.
“Most mobile robots think purely in terms of geometry; they detect where obstacles are, and plan paths around these perceived obstacles in order to reach the goal,” wrote UC Berkeley researcher Gregory Kahn in a blog post. “This purely geometric view of the world is insufficient for many navigation problems.”
However, a robot could autonomously learn about features in its environment “using raw visual perception and without human-provided labels or geometric maps,” said the study’s authors, Kahn, Pieter Abbeel, and Sergey Levine. They explored how a robot could use its experiences to develop a predictive model.
The research was supported by the U.S. Army Research Lab’s Distributed and Collaborative Intelligent Systems and Technology Collaborative Research Alliance (DCIST CRA), the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA) Assured Autonomy Program, and Berkeley DeepDrive. Kahn was supported by an NSF graduate research fellowship.
Building BADGR
The team at Berkeley AI Research Lab (BAIR) developed the Berkeley Autonomous Driving Ground Robot, or BADGR, to gather data from real-world environments and essentially teach itself to avoid obstacles. It was based on a Clearpath Jackal mobile robot and included a six-degree-of-freedom inertial measurement unit sensor, GPS, a 2D lidar sensor, and an NVIDIA Jetson TX2 processor.
Rather than retraining policies only on recently gathered data, known as on-policy data collection, the Berkeley researchers used off-policy algorithms, which can train policies on data gathered by any control policy. BADGR also used a time-correlated random-walk control policy during data collection so that the robot would not simply drive in a straight line.
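A time-correlated random walk of this kind can be sketched as a first-order autoregressive process over steering commands, where each command blends the previous one with fresh noise. The parameter names and values below are illustrative, not taken from the paper:

```python
import numpy as np

def time_correlated_steering(horizon=500, beta=0.9, steer_limit=0.5, seed=0):
    """Sample a smooth, time-correlated steering sequence.

    Each command is a blend of the previous command and fresh Gaussian
    noise, so the robot wanders through the environment instead of
    driving straight or jittering randomly. `beta` controls how strongly
    consecutive commands are correlated (illustrative parameters; the
    paper's exact policy may differ).
    """
    rng = np.random.default_rng(seed)
    steer, commands = 0.0, []
    for _ in range(horizon):
        steer = beta * steer + (1.0 - beta) * rng.normal()
        steer = float(np.clip(steer, -steer_limit, steer_limit))
        commands.append(steer)
    return commands

cmds = time_correlated_steering()
```

The correlation matters for data collection: smooth arcs carry the robot into varied terrain, producing more diverse training data than uncorrelated random commands, which tend to cancel out and leave the robot near its starting point.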
BADGR autonomously collected and labeled data, trained an image-based predictive neural network model, and used that model to plan and execute paths based on experience, said Kahn.
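The plan-and-execute step can be illustrated with a simple random-shooting planner: sample candidate action sequences, score each one with the learned predictive model, and execute the best. The `predict` function below is a toy stand-in for BADGR's image-based neural network, and the cost terms and names are assumptions for illustration only:

```python
import numpy as np

def plan_actions(predict, goal_heading, num_candidates=128, horizon=10, seed=0):
    """Random-shooting planner over steering sequences.

    `predict` maps an action sequence to per-step collision probabilities
    and headings (a stand-in for a learned predictive model; the cost
    function here is illustrative, not BADGR's exact objective).
    """
    rng = np.random.default_rng(seed)
    best_seq, best_cost = None, np.inf
    for _ in range(num_candidates):
        seq = rng.uniform(-0.5, 0.5, size=horizon)
        collision_prob, headings = predict(seq)
        # Penalize predicted collisions and deviation from the goal heading.
        cost = collision_prob.sum() + np.abs(headings - goal_heading).mean()
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

# Toy stand-in model: sharp turns raise collision risk; heading integrates steering.
def toy_predict(seq):
    return np.abs(seq), np.cumsum(seq)

best = plan_actions(toy_predict, goal_heading=0.0)
```

In a receding-horizon loop, the robot would execute only the first action of the chosen sequence, then replan from the next camera image, which is how such sampling-based planners are typically deployed.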
BAIR gets results
The researchers tested BADGR at the Berkeley Richmond Field Station Environmental site. With only 42 hours of autonomously collected data, BADGR outperformed Simultaneous Localization and Mapping (SLAM) approaches, the BAIR team said, and it did so with less data than other navigation methods require.
“We performed our evaluation in a real-world outdoor environment consisting of both urban and off-road terrain,” stated the researchers. “BADGR autonomously gathered 34 hours of data in the urban terrain and eight hours in the off-road terrain. Although the amount of data gathered may seem significant, the total dataset consisted of 720,000 off-policy data points, which is smaller than currently used datasets in computer vision and significantly smaller than the number of samples often used by deep reinforcement learning algorithms.”
For instance, a SLAM plus planner-based system failed to avoid bumpy grass, while BADGR learned to stick to concrete paths. The mobile robot also avoided collisions in off-road environments more often.
BAIR’s experiments also found that BADGR’s performance improved over time, as it picked a more direct route to a target. The system was also able to generalize its lessons to new environments.
“The key insight behind BADGR is that by autonomously learning from experience directly in the real world, BADGR can learn about navigational affordances, improve as it gathers more data, and generalize to unseen environments,” Kahn wrote.
The researchers acknowledged that the mobile robot still required human assistance, such as when it flipped over, but they noted that BADGR needed less data than other approaches. They said more work remains to be done with remote support, testing around moving objects and people, and gathering more data.
“We believe that solving these and other challenges is crucial for enabling robot learning platforms to learn and act in the real world, and that BADGR is a promising step towards this goal,” the team said.