UC Berkeley open-sources BDD100K self-driving dataset

By Steve Crowe | June 5, 2018

UC Berkeley has publicly released its BDD100K self-driving dataset, a collection of 100,000 videos that can be used to advance technologies for autonomous vehicles. The dataset is part of the university’s DeepDrive project, which investigates state-of-the-art computer vision and machine learning technologies for automotive applications.

Developers can download the BDD100K self-driving dataset here and read more about it in this academic paper. Each video in the dataset is about 40 seconds long and recorded at 720p and 30 frames per second. According to the researchers, the videos were collected from about 50,000 trips on streets throughout the United States.
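For developers who want to verify a download before building a training pipeline, a quick sanity check of one clip’s properties is straightforward. The sketch below uses OpenCV; “sample.mov” is a hypothetical placeholder, not an actual file name from the dataset.

```python
# A minimal sanity check of one downloaded clip, assuming opencv-python
# is installed; "sample.mov" is a placeholder for a real clip name.
import cv2

cap = cv2.VideoCapture("sample.mov")
fps = cap.get(cv2.CAP_PROP_FPS)                   # expect ~30
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # expect 1280 for 720p
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expect 720
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

duration = frames / fps if fps else 0.0           # expect ~40 seconds
print(f"{width}x{height} @ {fps:.0f} fps, {duration:.1f} s")
```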

The videos were shot at different times of the day and in various weather conditions. Datasets like this are vital to teaching autonomous systems how to cope with different environments and driving conditions.

The UC Berkeley team said the BDD100K database contains about one million cars, more than 300,000 street signs, 130,000 pedestrians, and much more. The videos also include GPS locations (recorded from mobile phones), IMU data, and timestamps, covering 1,100 hours of driving.

Related: MapLite enables autonomous vehicles to navigate unmapped roads

BDD100K isn’t the only self-driving dataset available, but it is the largest.

BDD100K will be especially suitable for training computer vision systems to detect and avoid pedestrians on the street, as it contains more people than other datasets. CityPersons, a dataset specialized for pedestrian detection, has only about one-quarter as many people per image as BDD100K.

BDD100K isn’t the first publicly available self-driving dataset, but it is the largest. Baidu released its ApolloScape dataset in March, but BDD100K is 800 times larger. It’s also 4,800 times bigger than Mapillary’s dataset and 8,000 times bigger than KITTI.

Annotating the BDD100K dataset

Classifying all the objects in each of these videos would be quite time-consuming for developers, so UC Berkeley has already done that work, annotating more than 100,000 images with 2D bounding boxes around objects such as traffic signs, people, bicycles, other vehicles, trains, and traffic lights.
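As a rough illustration of how those annotations might be consumed, the sketch below tallies object categories from a BDD100K-style label file. The file name and the “labels”/“category”/“box2d” fields follow commonly published BDD100K examples, but they are assumptions here; check the dataset’s own documentation for the authoritative schema.

```python
# Tally annotated object categories from a BDD100K-style label file.
# The file name and field names ("labels", "category", "box2d") are
# assumptions based on published examples, not verified here.
import json
from collections import Counter

with open("bdd100k_labels_images_train.json") as f:
    frames = json.load(f)

counts = Counter()
for frame in frames:
    for label in frame.get("labels", []):
        if "box2d" in label:          # keep only 2D bounding-box labels
            counts[label["category"]] += 1

for category, n in counts.most_common():
    print(f"{category}: {n}")
```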

BDD100K has two types of lane markings.

The annotated videos have two types of lane markings: vertical markings are colored red, while parallel markings are colored blue. The researchers also want to take road segmentation to the next level. They’ve divided the drivable area into two categories: “directly drivable area” (red areas) and “alternatively drivable area” (blue areas). We’ll let the researchers explain:

“In our dataset, the ‘directly drivable area’ defines the area that the driver is currently driving on – it is also the region where the driver has priority over other cars, or the ‘right of way.’ In contrast, the ‘alternatively drivable area’ is a lane the driver is currently not driving on, but could do so by changing lanes. Although the directly and alternatively drivable areas are visually indistinguishable, they are functionally different, and require potential algorithms to recognize blocking objects and scene context.”

The researchers continue: “In line with our understanding, on highways or city streets, where traffic is closely regulated, drivable areas are mostly within lanes and they do not overlap with the vehicles or objects on the road. However, in residential areas, the lanes are sparse. Our annotators can judge what is drivable based on the surroundings.”

The BDD100K dataset divides the drivable area into two categories: “directly drivable area” and “alternatively drivable area.”
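To make the distinction concrete, here is a hedged sketch of separating the two drivable-area classes in a BDD100K-style label file. The category name (“drivable area”) and the “areaType” attribute with values “direct” and “alternative” are assumptions drawn from common BDD100K tooling, so verify them against the dataset’s documentation.

```python
# Count "direct" vs. "alternative" drivable-area labels. Category and
# attribute names are assumptions; confirm against the dataset docs.
import json
from collections import Counter

with open("bdd100k_labels_images_train.json") as f:  # assumed file name
    frames = json.load(f)

area_counts = Counter()
for frame in frames:
    for label in frame.get("labels", []):
        if label.get("category") == "drivable area":
            area_type = label.get("attributes", {}).get("areaType", "unknown")
            area_counts[area_type] += 1

print(area_counts)  # e.g. Counter({'direct': ..., 'alternative': ...})
```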

To annotate all this data, the researchers built a semi-automatic tool that speeds up the labeling of bounding boxes, semantic segmentation, and lanes in the driving database. The tool can be accessed via a web browser.

For box annotation, for example, the team trained a Fast R-CNN object detection model on 55,000 labeled videos. The model works alongside human annotators and, the researchers said, saves 60 percent of the time required for drawing and adjusting bounding boxes.
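The team’s own model isn’t published with the article, but the workflow it describes, a detector proposing boxes for a human to confirm or correct, can be sketched with an off-the-shelf detector. The example below uses torchvision’s pretrained Faster R-CNN purely as a stand-in for the team’s model; “frame_0001.jpg” is a hypothetical file name.

```python
# Model-assisted pre-labeling sketch: a detector proposes boxes that a
# human annotator would then adjust. torchvision's pretrained Faster
# R-CNN stands in for the BDD100K team's own model.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def propose_boxes(image_path, score_threshold=0.5):
    """Return candidate boxes and labels for an annotator to review."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        (prediction,) = model([image])
    keep = prediction["scores"] >= score_threshold
    return prediction["boxes"][keep], prediction["labels"][keep]

boxes, labels = propose_boxes("frame_0001.jpg")  # hypothetical file name
print(f"{len(boxes)} candidate boxes for human review")
```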

“Our annotation system incorporates different kinds of labeling heuristics to improve productivity, and can be extended to different types of image annotation,” the researchers wrote. “With this production-ready annotation system, we are able to label a driving video dataset that is larger and more diverse than existing datasets. This dataset comes with comprehensive annotations that are necessary for a complete driving system.

“Moreover, experiments show that this new dataset is more challenging and more comprehensive than existing ones, and can serve as a good benchmark for domain adaptation due to its diversity. This will serve to help the research community with understanding how different scenarios affect existing algorithms’ performance.”

The back end (left) and front end of BDD100K’s labeling tool.

About The Author

Steve Crowe

Steve Crowe is Editor of The Robot Report and co-chair of the Robotics Summit & Expo. He joined WTWH Media in January 2018 after spending four-plus years as Managing Editor of Robotics Trends Media. He can be reached at [email protected]
