Scale and nuTonomy, two leading players in the self-driving vehicle ecosystem, open-sourced a research dataset named nuScenes last week. According to the companies, this is the largest open-source dataset for self-driving vehicles to date, containing data from LIDAR, RADAR, cameras, an IMU, and GPS.
nuTonomy, with the help of Scale’s Sensor Fusion Annotation API, compiled more than 1,000 20-second clips and 1.4 million images. nuScenes comprises 400,000 LIDAR sweeps and 1.1 million three-dimensional bounding boxes detected with a combination of RGB cameras, RADAR, and LIDAR.
This data was collected using six cameras, one LIDAR, five RADARs, GPS, and an inertial measurement unit (IMU). The companies chose driving routes in Singapore and Boston to capture challenging locations, times, and weather conditions.
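To make the structure of such a multi-sensor dataset concrete, here is a minimal sketch of how a single annotated sample (one camera frame set, one LIDAR sweep, several RADAR sweeps, plus 3D boxes) might be modeled. This is an illustrative data model only; the class and field names are hypothetical and do not reflect the official nuScenes schema or devkit.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Box3D:
    """One annotated 3D bounding box (hypothetical fields)."""
    center: Tuple[float, float, float]  # (x, y, z) in meters
    size: Tuple[float, float, float]    # (width, length, height) in meters
    yaw: float                          # heading angle in radians
    category: str                       # e.g. "vehicle.car"

@dataclass
class Sample:
    """One time-synchronized multi-sensor sample (hypothetical schema)."""
    timestamp_us: int                                        # microsecond timestamp
    camera_images: Dict[str, str] = field(default_factory=dict)  # up to 6 cameras
    lidar_sweep: str = ""                                    # path to a point cloud
    radar_sweeps: Dict[str, str] = field(default_factory=dict)   # up to 5 radars
    boxes: List[Box3D] = field(default_factory=list)         # 3D annotations

# Build one illustrative sample with a single annotated car.
sample = Sample(
    timestamp_us=1_500_000_000_000_000,
    camera_images={"CAM_FRONT": "cam_front.jpg"},
    lidar_sweep="lidar_top.pcd",
    radar_sweeps={"RADAR_FRONT": "radar_front.pcd"},
    boxes=[Box3D((10.0, 2.0, 0.5), (1.8, 4.5, 1.6), 0.1, "vehicle.car")],
)

print(len(sample.boxes))  # → 1
```

A real dataset of this kind is simply a large collection of such samples, which is why the counts in the announcement (clips, images, sweeps, boxes) are reported per sensor modality.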
This open-source dataset reportedly surpasses common datasets in both size and accuracy, including the public KITTI dataset, Baidu's ApolloScape dataset, the Udacity self-driving dataset, and even the more recent Berkeley DeepDrive dataset.
Making this huge dataset available will let users train and test autonomous-driving algorithms more accurately and quickly. Scale CEO Alexandr Wang said:
“We’re proud to provide the annotations … as the most robust open source multi-sensor self-driving dataset ever released. We believe this will be an invaluable resource for researchers developing autonomous vehicle systems, and one that will help to shape and accelerate their production for years to come.”
You can read more about nuScenes in the full coverage. To learn more, check out the nuScenes website and see the official announcement by Scale on its Twitter page.