Humans depend on their senses. Without input from the outside world, we would be unable to locate or identify objects, navigate, or do much else. Similarly, robots require advanced perception to avoid obstacles and to function in a world designed by and for humans. However, while vision is an inexpensive sensing system for robots, it is prone to errors from causes such as reflective surfaces, motion blur, and texture-less scenes.
A research duo at the College of Information and Computer Sciences at the University of Massachusetts Amherst is using the Jackal unmanned ground vehicle (UGV) from Clearpath Robotics Inc. to train competence-aware, vision-based obstacle avoidance systems that can predict such failures, enabling less expensive and safer robot deployments. Only by learning competence-aware perception algorithms can one predict their failure cases and reason about the types of failure.
In their Introspective Vision for Obstacle Avoidance (IVOA) project, Ph.D. student Sadegh Rabiee and Assistant Professor Joydeep Biswas are focusing on learning the competency of computer vision algorithms for the task of obstacle avoidance.
Giving Jackal eyes for obstacle avoidance
To prepare the Jackal UGV for the task, Rabiee and Biswas equipped the platform with a pair of stereo cameras, along with a depth camera that provides sparse ground truth. These log RGB and depth images at full frame rate during deployments of the robot, and the two streams are then processed separately for obstacle avoidance.
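As a rough illustration of this logging step, the following sketch records time-synchronized RGB and depth frames to a rosbag. The topic names, sync tolerances, and output file are assumptions for the example; the article does not describe how the team's logger is implemented.

```python
#!/usr/bin/env python
# Hypothetical sketch of the data-logging step (topic names and
# parameters are assumptions, not the team's published configuration).
import rospy
import rosbag
import message_filters
from sensor_msgs.msg import Image

bag = rosbag.Bag('ivoa_session.bag', 'w')

def log_frames(left_rgb, right_rgb, depth):
    """Write one time-synchronized frame triplet to the bag."""
    stamp = left_rgb.header.stamp
    bag.write('/stereo/left/image_raw', left_rgb, stamp)
    bag.write('/stereo/right/image_raw', right_rgb, stamp)
    bag.write('/depth/image_raw', depth, stamp)

rospy.init_node('ivoa_logger')
topics = ['/stereo/left/image_raw', '/stereo/right/image_raw',
          '/depth/image_raw']
subs = [message_filters.Subscriber(t, Image) for t in topics]
# Approximate sync tolerates small timestamp offsets between cameras.
sync = message_filters.ApproximateTimeSynchronizer(
    subs, queue_size=10, slop=0.02)
sync.registerCallback(log_frames)
rospy.on_shutdown(bag.close)
rospy.spin()
```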
When the plans generated by the two pipelines disagree, and the environment is known to be one where the depth sensor is more reliable, the point where the paths diverge is projected onto the image plane of the RGB camera, and an image patch centered at that location is extracted as an example of unreliable image conditions.
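The projection step is a standard pinhole-camera computation. The sketch below shows one plausible form of it; the intrinsics (fx, fy, cx, cy), the 60-pixel patch size, and the function name extract_failure_patch are hypothetical, as the article does not give these details.

```python
import numpy as np

def extract_failure_patch(rgb, p_cam, fx, fy, cx, cy, patch=60):
    """Project a 3D divergence point (in the camera frame) onto the
    image plane and crop the patch centered at that pixel.

    Hypothetical sketch: intrinsics and patch size are assumptions."""
    x, y, z = p_cam
    if z <= 0:              # the point must lie in front of the camera
        return None
    u = int(round(fx * x / z + cx))   # pinhole projection, column
    v = int(round(fy * y / z + cy))   # pinhole projection, row
    h, w = rgb.shape[:2]
    half = patch // 2
    if not (half <= u < w - half and half <= v < h - half):
        return None         # patch would fall outside the image
    return rgb[v - half:v + half, u - half:u + half]
```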
Using such image patches, which capture failures of the image-based algorithm, the team can train an introspection model to predict the following (a minimal model sketch follows the list):
- Will an input image yield failures?
- Which parts of the image are likely to cause such failures?
- What type of failure is expected? (false positive vs. false negative)
The distinct classes of failure (false positives and false negatives) are extracted automatically from the self-labeled training set.
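As a toy illustration of such an introspection model, the sketch below classifies an extracted image patch into one of three labels: no failure, false positive, or false negative. The architecture and layer sizes are invented for the example; the actual IVOA network is not reproduced here.

```python
import torch.nn as nn

# Toy stand-in for the introspection model; only the three-way output
# (no failure / false positive / false negative) comes from the article.
class IntrospectionNet(nn.Module):
    def __init__(self, num_classes=3):
        super(IntrospectionNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to a 32-dim descriptor
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, patch):          # patch: (N, 3, H, W) tensor
        z = self.features(patch).flatten(1)
        return self.classifier(z)      # logits over the three labels
```

Trained on the self-labeled patches described above, such a classifier can flag which regions of an incoming image are likely to mislead the vision pipeline and what kind of mistake to expect.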
Tailoring the Jackal UGV for research in long-term autonomy for mobile robots required the UMass team to customize the platform with their own sensors and payloads. To begin, they attached a stereo pair of Point Grey cameras for vision-based SLAM (simultaneous localization and mapping) and obstacle avoidance, as well as a Kinect depth sensor to provide ground truth for their vision-based obstacle avoidance system.
Next, a Velodyne VLP-16 lidar was added for research on lidar-based SLAM, as well as for ground-truth generation for vision-based SLAM.
The final pieces of the hardware puzzle were a touch screen monitor for human-robot interaction and an Intel NUC for additional computing power. The touch screen enables quick, on-the-spot debugging and lets the robot interact with humans via a graphical user interface; for example, the robot could ask nearby people for help with using an elevator.
In terms of software, the IVOA team developed their own complete ROS-based stack for autonomous navigation, including SLAM, obstacle avoidance, and planning.
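For a sense of how the pieces of such a stack connect in ROS, here is a minimal, hypothetical obstacle avoidance node. It is not the team's implementation; the topic names, speeds, and the placeholder obstacle_ahead detector are all assumptions.

```python
#!/usr/bin/env python
# Structural sketch of a vision-based obstacle avoidance node in ROS;
# every name and constant here is a placeholder, not IVOA code.
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

def obstacle_ahead(image_msg):
    """Placeholder for the stereo obstacle-detection algorithm."""
    return False  # hypothetical: the real detector would go here

class AvoidanceNode(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/stereo/left/image_raw', Image, self.on_image)

    def on_image(self, msg):
        cmd = Twist()
        if obstacle_ahead(msg):
            cmd.angular.z = 0.5   # turn in place to avoid the obstacle
        else:
            cmd.linear.x = 0.5    # path clear: drive forward
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('vision_obstacle_avoidance')
    AvoidanceNode()
    rospy.spin()
```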
Fitting Clearpath into the equation
Where does Clearpath Robotics fit in? Well, IVOA’s data collection process requires a mobile robot platform capable of operating reliably for extended periods of time in different environments and across different types of terrain.
At the same time, IVOA relies on extensive self-labeled training data, which would have been infeasible to collect manually. Using the Jackal UGV accelerated their research in three ways:
- Eliminating the need to build and maintain their own UGV, which would have required a significant amount of time and effort
- Providing a flexible, modular platform with hardware and software support to easily experiment with different sensor configurations
- Proving to be a robust, low-maintenance research platform
The IVOA team was already confident in the Jackal UGV, having used it in the past for their project “A Friction-Based Kinematic Model for Skid-Steer Wheeled Mobile Robots,” published at ICRA 2019 (IEEE International Conference on Robotics and Automation).
The researchers chose the Jackal UGV because it was a good fit for developing their algorithms and spared them from designing and building their own robot, making for a smooth experience when conducting experiments.
“Clearpath Robotics makes research in robotics easier via providing reliable robot platforms that are easy to customize,” said Rabiee. “Also, we have found the support team at Clearpath to be very responsive and helpful.”
Looking ahead
The IVOA project continues to be improved upon, and the team plans to extend its work on competence-aware perception systems to other perception problems, such as vision-based SLAM. The researchers’ ultimate goal is to deploy their autonomous robots at the scale of a university campus, navigating safely via competence-aware vision systems.
So far, the team has successfully introduced IVOA, an architecture for self-aware, stereo vision-based obstacle avoidance systems that can predict their failures while distinguishing between false positive and false negative instances. The work will also appear at IROS 2019 (IEEE/RSJ International Conference on Intelligent Robots and Systems).