Legged robots have lagged behind wheeled, tracked, and flying ones when it comes to gathering data for useful industrial insights, but that is changing. Boston Dynamics today announced Release 2.1 of its Spot quadruped robot. The Waltham, Mass.-based company said the update includes features to “make Spot immediately useful out of the box for autonomous data-collection missions.”
In May, Boston Dynamics released Spot 2.0, which included improved autonomy and mobility. In June, it announced the commercial availability of the legged robot. The company has continued to make news with potential applications of Spot, from construction to healthcare, and speculation that SoftBank Group is considering selling it to Hyundai Motor.
Spot 2.1 enhancements based on user feedback
“In Spot’s first year on the market, we’ve seen diverse teams in an array of industries put the robot to use,” said Boston Dynamics. “During this time, we worked closely with hundreds of Spot users to understand their application development workflow: how they attach sensors, analyze data, and integrate the robot into their existing systems. We identified common obstacles and mapped out an easier path to implementation.”
Spot 2.1 is intended to make it easier for users to attach sensors, collect and save data, and integrate data into their existing systems, the company said. Operations teams can send the quadruped robot to autonomously and repeatedly gather data from hazardous or remote sites, it said.
Spot 2.1 designed to ease sensor integration
With Spot 2.1’s Autowalk feature, businesses can automate image-collection workflows and aim the pan-tilt-zoom (PTZ) camera to collect detailed inspection photos, said Boston Dynamics.
To attach sensors such as off-the-shelf spherical or thermal cameras, users only need to edit an example script and install its Docker container onto the Spot CORE compute payload, according to the company. The new image sources then appear on Spot’s tablet controller, and users can trigger captures in both teleoperation and Autowalk modes.
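For developers, that workflow maps onto the Spot Python SDK (bosdyn-client), where any image service running in a container on Spot CORE shows up as just another image source. The minimal sketch below lists the available sources and requests a single capture; the robot IP address, credentials, and the choice of a built-in fisheye camera are placeholders for this example, not values from the article.

```python
# Minimal sketch with the Spot Python SDK: list image sources and grab one frame.
import bosdyn.client
from bosdyn.client.image import ImageClient

sdk = bosdyn.client.create_standard_sdk('image-capture-example')
robot = sdk.create_robot('192.168.80.3')    # placeholder robot address
robot.authenticate('user', 'password')      # placeholder credentials

image_client = robot.ensure_client(ImageClient.default_service_name)

# A custom camera served from a Spot CORE container would appear here
# alongside the built-in cameras, under the name its image service registered.
sources = image_client.list_image_sources()
print([source.name for source in sources])

# Request a capture from one source; the visual cameras return JPEG data by default.
responses = image_client.get_image_from_sources(['frontleft_fisheye_image'])
with open('capture.jpg', 'wb') as f:
    f.write(responses[0].shot.image.data)
```

The same pattern applies whether the capture is triggered manually from the tablet or scripted as part of an autonomous mission.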
“Spot can now be used to collect training images for computer vision models, to visualize data and model output live on the tablet controller, and to capture data from custom non-visual sensors like gas detectors or laser scanners,” Boston Dynamics said.
Users can add image metadata
Spot 2.1 also enables commercial customers to define their own data sources and attach custom metadata. They can associate images with the robot’s location, user-defined labels, or custom values such as GPS coordinates from an attached payload.
Examples of how context can be added to data include combining site photos from multiple missions into a single view, sorting images by asset ID, or collecting datasets for computer vision model training, said Boston Dynamics. The company said the system recognizes standard data types, such as JPEG images and JSON and CSV metadata files, to eliminate integration bottlenecks and allow developers to write their own data streams into the robot’s logs.
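As a rough illustration of what that context can look like downstream, the sketch below writes a JSON sidecar file for a captured image. The field names and values are assumptions made up for this example, not Boston Dynamics’ actual metadata schema.

```python
# Illustrative only: a JSON sidecar associating a captured JPEG with location,
# a user-defined asset label, and GPS coordinates from an attached payload.
import json

capture_metadata = {
    "image_file": "capture.jpg",
    "asset_id": "pump-station-07",                      # user-defined label
    "robot_location": {"waypoint": "wp_12", "map": "plant_floor_2"},
    "gps": {"lat": 42.376, "lon": -71.236},             # from an attached GPS payload
}

with open("capture.json", "w") as f:
    json.dump(capture_metadata, f, indent=2)
```

Keeping the metadata in plain JSON or CSV alongside standard JPEGs is what lets third-party tools sort and merge mission data without custom parsers.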
Spot 2.1 supports improved workflows
In addition, Boston Dynamics said it has streamlined the data-collection workflow in Spot 2.1. Operators can now download mission data to a tablet for easy integration into third-party tools, it said.
“Users can capture data manually or autonomously in Autowalk and download it to the tablet’s SD card for easy off-robot use. Common actions and callbacks can be configured on the tablet for quick use during operation,” said the company. “We’ve also made numerous under-the-hood improvements to Spot’s industry-leading locomotion and autonomy, further enabling operators to focus on the job and not the robot.”
Boston Dynamics added that “these new features in Release 2.1 unlock Spot’s full data-collection potential and set the stage for exciting new capabilities coming early next year: self-charging and remote operation.” Spot 2.1 is available now.
Related content: The Robot Report Podcast: FedEx’s Aaron Prather on mobile robot standards and Boston Dynamics sale rumors