Publications
Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
Christos Sakaridis,
Dengxin Dai,
and Luc Van Gool
IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2020
[PDF]
[Code]
[UIoU Challenge]
[BibTeX]
[arXiv]
Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
Christos Sakaridis,
Dengxin Dai,
and Luc Van Gool
International Conference on Computer Vision (ICCV), 2019
[PDF]
[Supplement]
[BibTeX]
[arXiv]
[UIoU Challenge]
Code
The source code for our MGCDA method is available on GitHub.
Pretrained Models
The pretrained models are provided as .mat files, generated with and compatible with MATLAB R2016b.
Dark Zurich Dataset
Dark Zurich is an image dataset containing a total of 8779 images captured at nighttime, twilight, and daytime, along with the respective GPS coordinates of the camera for each image. These GPS annotations are used to construct cross-time-of-day correspondences, i.e., to match each nighttime or twilight image to its daytime counterpart.
These attributes make Dark Zurich suitable for building models and systems that perform:
- domain adaptation (unsupervised, weakly supervised or semi-supervised), e.g. for semantic segmentation or object detection,
- image translation / style transfer to different times of day,
- robust image matching / visual localization across diverse domains, and
- other visual perception tasks that are central for autonomous vehicles and other robotic applications.
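The GPS-based cross-time-of-day matching mentioned above can be sketched as a nearest-neighbour search over camera positions. The following is an illustrative sketch under assumed data structures (plain dicts mapping image IDs to (lat, lon) pairs in degrees); the dataset ships its own precomputed correspondence files, so none of the names below come from the actual dataset tooling.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def match_to_daytime(night_gps, day_gps):
    """For each nighttime frame, return the ID of the GPS-nearest daytime frame.

    night_gps, day_gps: hypothetical dicts mapping image IDs to (lat, lon) tuples.
    """
    return {
        nid: min(day_gps, key=lambda did: haversine_m(npos, day_gps[did]))
        for nid, npos in night_gps.items()
    }
```

For real GPS tracks one would typically also constrain the search along the driving route, but a plain nearest-neighbour match conveys the idea.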
On the evaluation side, 201 nighttime images in Dark Zurich (151 test + 50 validation) come with fine pixel-level semantic annotations for the 19 evaluation classes of Cityscapes. Furthermore, our novel protocol for annotation with privileged information, which leverages the corresponding daytime images, makes it possible to assign reliable semantic labels to image regions that are originally indiscernible, i.e., beyond human recognition capability, and thus to include such invalid regions in the evaluation jointly with valid regions.
Moreover, each image is annotated with a binary mask designating its invalid regions. These invalid-mask annotations enable our novel uncertainty-aware evaluation framework for semantic segmentation, whose centerpiece is UIoU (uncertainty-aware IoU), a new performance metric that generalizes standard IoU by allowing the selective invalidation of predictions. This capability is crucial for safety-oriented systems that must handle inputs with potentially ambiguous content, as in the adverse-conditions scenario. UIoU rewards models that place higher confidence on valid regions than on invalid ones, i.e., models whose behavior is consistent with that of human annotators.
Consequently, Dark Zurich is suited for:
- the novel UIoU semantic segmentation evaluation, and
- standard IoU semantic segmentation evaluation.
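As a minimal sketch of the selective-invalidation idea behind UIoU, consider per-class IoU in which predictions whose confidence falls below a threshold tau are invalidated: an abstention on an invalid-mask pixel is treated as correct and excluded from the evaluation, while an abstention on a valid pixel still counts as missed ground truth. This is an illustrative simplification, not the official UIoU implementation (which, among other things, operates over a range of confidence thresholds); all names below are hypothetical.

```python
def iou_with_invalidation(pred, conf, gt, invalid_mask, num_classes, tau=0.0):
    """Per-class IoU with selective invalidation of low-confidence predictions.

    Simplified illustration only (NOT the official UIoU code): a prediction
    with confidence below `tau` is invalidated; an invalidated prediction on
    an invalid-mask pixel is a correct abstention and is excluded, while an
    invalidated prediction on a valid pixel still counts as missed ground
    truth. With tau = 0 this reduces to standard per-class IoU.
    """
    inter = [0] * num_classes
    union = [0] * num_classes
    for i in range(len(gt)):
        invalidated = conf[i] < tau
        if invalidated and invalid_mask[i]:
            continue  # correct abstention on an invalid region: excluded
        for c in range(num_classes):
            in_gt = gt[i] == c
            in_pred = (not invalidated) and pred[i] == c
            if in_gt and in_pred:
                inter[c] += 1
            if in_gt or in_pred:
                union[c] += 1
    return [x / u if u else float("nan") for x, u in zip(inter, union)]
```

A model that keeps its confidence low exactly on invalid regions loses nothing there under this scheme, which mirrors how UIoU rewards consistency with human annotators.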
Downloads
- Dark_Zurich_train_anon.zip (16 GB): training set with 8377 images (3041 daytime, 2920 twilight, 2416 nighttime), GPS annotations, and image-level cross-time-of-day correspondences. MD5 checksum
- Dark_Zurich_val_anon.zip (200 MB): validation set with 50 annotated nighttime images and their corresponding 50 daytime counterparts, plus GPS annotations for both. MD5 checksum
- Dark_Zurich_test_anon_withoutGt.zip (600 MB): test set with 151 nighttime images and their corresponding 151 daytime counterparts, plus GPS annotations for both. MD5 checksum
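After downloading, each archive's integrity can be verified against its published MD5 checksum. A minimal Python sketch follows; the file name and expected digest in the comment are placeholders to be taken from the listing above.

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream a (potentially multi-gigabyte) file through MD5 in 1 MiB
    chunks and return its hexadecimal digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum published next to the archive, e.g.:
#   md5_of("Dark_Zurich_train_anon.zip") == "<published MD5>"
```

Streaming in fixed-size chunks keeps memory usage constant regardless of archive size.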
The annotations of the 151 nighttime test images are withheld, so that they serve permanently as an objective benchmark for the task of semantic nighttime image segmentation. A public evaluation website on CodaLab, associated with the UIoU Dark Zurich Challenge @ Vision for All Seasons workshop at CVPR 2020, reports the performance of submitted test-set predictions in a leaderboard. The required submission format is detailed on the evaluation website.
Note: due to data protection regulations, the original RGB images cannot be made available unless the dataset user is properly registered for their usage. In any case, anonymized versions of the images, with only minimal modifications, are also provided and can be used instead. A registration form for researchers who would like to obtain access to the original images will be made available soon on this website.
Citation
If you find our work useful in your research, please cite our publications: T-PAMI paper | ICCV paper