Road/Lane Detection Evaluation 2013
This benchmark has been created in collaboration with Jannik Fritsch and Tobias Kuehnl from Honda Research Institute Europe GmbH. The road and lane estimation benchmark consists of 289 training and 290 test images. It contains three different categories of road scenes:
- uu - urban unmarked (98/100)
- um - urban marked (95/96)
- umm - urban multiple marked lanes (96/94)
- urban - combination of the three above
Ground truth has been generated by manual annotation of the images and is available for two different road terrain types: road - the road area, i.e., the composition of all lanes, and lane - the ego-lane, i.e., the lane the vehicle is currently driving in (only available for category "um"). Ground truth is provided for training images only.
- Download base kit with: left color images, calibration and training labels (0.5 GB)
- Download right color image extension (0.5 GB)
- Download grayscale image extension (0.3 GB)
- Download Velodyne laser point extension (1 GB)
- Download OXTS GPS/IMU extension (1 MB)
- Download development kit (1 MB)
- Mapping of training set to raw data sequences (1 MB)
We evaluate road and lane estimation performance in bird's-eye-view space. For the classical pixel-based evaluation we use established measures, as discussed in our ITSC 2013 publication:
- MaxF: Maximum F1-measure
- AP: Average precision, as used in the PASCAL VOC challenges
- PRE: Precision (evaluated at the MaxF working point)
- REC: Recall (evaluated at the MaxF working point)
- FPR: False Positive Rate (evaluated at the MaxF working point)
- FNR: False Negative Rate (evaluated at the MaxF working point)
- F1: F1 score
- HR: Hit rate
For the novel behavior-based evaluation, a corridor of vehicle width (2.2 m) is fitted to the lane estimation result and evaluated at three distances: 20 m, 30 m, and 40 m. We refer to our ITSC 2013 publication for more details.
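The pixel-based measures above can be illustrated with a small sketch. This is not the official devkit code; it is a minimal, assumed implementation that sweeps a confidence threshold over a per-pixel road score map, computes the confusion counts against binary ground truth, and reports the measures at the threshold that maximizes F1 (i.e., the MaxF working point):

```python
import numpy as np

def pixel_metrics(scores, gt, thresholds=None):
    """Sketch of the pixel-based evaluation: find the MaxF working
    point of a per-pixel road confidence map and report the measures
    evaluated there.

    scores: float array of per-pixel confidences in [0, 1]
    gt:     boolean array, True where the pixel is road/lane
    """
    scores = scores.ravel()
    gt = gt.ravel().astype(bool)
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)

    best = None
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & gt)       # road pixels correctly detected
        fp = np.sum(pred & ~gt)      # background labeled as road
        fn = np.sum(~pred & gt)      # road pixels missed
        tn = np.sum(~pred & ~gt)     # background correctly rejected
        pre = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        fnr = fn / (fn + tp) if fn + tp else 0.0
        if best is None or f1 > best["MaxF"]:
            best = {"MaxF": f1, "PRE": pre, "REC": rec,
                    "FPR": fpr, "FNR": fnr, "threshold": t}
    return best
```

Note that the actual benchmark evaluates in the metric bird's-eye-view space after a perspective transform, which this sketch omits; only the measure definitions are shown.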
IMPORTANT NOTE: On 09.02.2015 we improved the accuracy of the ground truth and re-calculated the results for all methods. If you downloaded the files prior to 09.02.2015, please download the devkit and the dataset with the improved ground truth for training again. Please report these new numbers for all future submissions. The last leaderboards from right before the change can be found here!
- Stereo: Method uses left and right (stereo) images
- Laser Points: Method uses point clouds from Velodyne laser scanner
- GPS: Method uses GPS information
- Additional training data: Use of additional data sources for training (see details)
Road Estimation Evaluation
UM_ROAD
UMM_ROAD
UU_ROAD
URBAN_ROAD
Lane Estimation Evaluation
Behaviour Evaluation
Related Datasets
- Multi-Lane-Detection-Dataset: Dataset for multiple lane detection.
- Road Scene Layout from a Single Image: Dataset for road area estimation.
- MIT Street Scenes: Dataset for semantic road scene understanding.
- Cambridge-driving Labeled Video Database (CamVid): Dataset for semantic road scene understanding.
- Daimler Scene Labeling Dataset: Dataset for semantic road scene understanding including stereo images.
- ROMA (ROad MArkings): Dataset for performance evaluation of road marking extraction algorithms.
Citation
When using this dataset in your research, please cite:
@inproceedings{Fritsch2013ITSC,
  author = {Jannik Fritsch and Tobias Kuehnl and Andreas Geiger},
  title = {A New Performance Measure and Evaluation Benchmark for Road Detection Algorithms},
  booktitle = {International Conference on Intelligent Transportation Systems (ITSC)},
  year = {2013}
}