Developers need large amounts of high-quality, annotated data to train autonomous vehicles. Motional Inc., the joint venture of Hyundai Motor Group and Aptiv PLC, today announced an expansion to the nuScenes dataset for teaching such vehicles how to engage with dynamic environments. The company said the additions will help self-driving cars be safer.
Motional, the name Hyundai and Aptiv announced last month for their joint venture, is an extension of their partnership to develop and commercialize SAE Level 4 autonomous vehicles. The company said its team has experience with the DARPA Grand Challenge, the first fully autonomous cross-country drive in the U.S., and the launch of the first robotaxi pilot in Singapore. Boston-based Motional has offices in Boston, Pittsburgh, Las Vegas, Santa Monica, Singapore, and Seoul.
nuScenes grows community collaboration
Created in 2018, nuScenes is the first publicly available dataset of its kind. nuScenes began as a collection of 1,000 urban street scenes in Boston and Singapore. These scenes, composed of millions of photos and data points collected from vehicle sensor suites, were then meticulously hand-annotated to inform driverless machine learning models.
More than 8,000 researchers have used nuScenes, publishing over 250 scientific papers based on the data. Since its launch, more than 10 new datasets have been made publicly available. The collection has helped create a thriving culture of information sharing around autonomous vehicle safety in the industry, claimed Motional.
“Safety transcends competition,” stated Karl Iagnemma, president and CEO of Motional and co-founder of nuTonomy. “The belief that passenger safety must take priority over any competitive advantage is at the heart of nuScenes. We’ve been delighted to see so many peers follow suit and release their own datasets, all for the betterment of the industry.”
Lidar segmentation and more annotated images added
Motional announced two nuScenes additions that it said make the datasets more robust. The first, nuScenes-lidarseg, is the application of lidar segmentation to the original 1,000 Singapore and Boston driving scenes, making it the largest publicly available dataset of its kind.
Lidar segmentation provides a more detailed picture of a vehicle’s surroundings than the original nuScenes’ bounding boxes, adding 1.4 billion annotated lidar points, said Motional. The company said the segmentation will enable researchers to study and quantify novel challenges such as lidar point-cloud segmentation and foreground extraction.
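In practice, lidar segmentation of this kind pairs each lidar return with a per-point class label, which turns a task like foreground extraction into a simple filter over the point cloud. The sketch below is a minimal illustration of that idea; the class names and IDs are assumptions for demonstration, not the actual nuScenes-lidarseg taxonomy or its devkit API.

```python
# Minimal sketch of per-point lidar segmentation labels.
# Class IDs and names are illustrative, not the real
# nuScenes-lidarseg taxonomy.

FOREGROUND_CLASSES = {1, 2}  # e.g. 1 = vehicle, 2 = pedestrian
BACKGROUND_CLASSES = {0, 3}  # e.g. 0 = road surface, 3 = vegetation

def extract_foreground(points, labels, foreground=FOREGROUND_CLASSES):
    """Keep only the lidar points whose per-point label belongs
    to a foreground class (e.g. vehicles, pedestrians)."""
    return [p for p, lab in zip(points, labels) if lab in foreground]

# Each point is an (x, y, z) lidar return; labels align index-for-index.
points = [(1.0, 2.0, 0.1), (5.0, 0.5, 0.0), (3.0, 3.0, 1.2), (8.0, 1.0, 0.0)]
labels = [1, 0, 2, 3]  # vehicle, road, pedestrian, vegetation

foreground_points = extract_foreground(points, labels)
print(foreground_points)  # -> [(1.0, 2.0, 0.1), (3.0, 3.0, 1.2)]
```

The same filter generalizes to any label set: bounding boxes answer "where is the object," while per-point labels also answer "which returns belong to it," which is what makes segmentation richer than the original annotations.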
The second dataset, nuImages, is brand-new. It includes nearly 100,000 annotated images, carefully selected to capture a wide range of unpredictable, challenging driving conditions, Motional said. The nuImages dataset was created in response to user demand, and it could help self-driving vehicles safely navigate unusual scenarios, such as jaywalking pedestrians, large intersections, and severe weather.
Motional safety history
Motional was part of the team that last year released the “Safety First for Automated Driving” white paper, an organized framework for the development, testing and validation of safe automated passenger vehicles. The white paper has been widely regarded as the new standard in safety guidelines, claimed the company.
It also claimed that it operates the world’s most-established public robotaxi fleet in Las Vegas. Since 2018, that fleet has provided more than 100,000 rides, with 98% of riders rating their experience five out of five stars, said Motional.
nuScenes is free of charge for academic use, and licensing is available for commercial purposes. Annotations for nuScenes were provided by Scale AI, which raised $100 million in funding last year. For more information about nuScenes, visit the organization page hosted by Motional.