Long-term robustness of perception across diverse environments has been a bottleneck for lifelong trustworthy autonomy in outdoor mobile robotics and autonomous driving. Although monocular depth prediction is well studied as a typical perception task, little work addresses robust depth prediction across different environments, e.g., changing illumination and seasons, owing to the lack of a diverse real-world dataset covering various scenarios and a corresponding benchmark. To this end, we introduce the SeasonDepth Prediction Challenge as the first open-source challenge focusing on depth prediction performance under different environmental conditions.
The SeasonDepth Prediction Challenge is based on our new monocular depth prediction dataset, SeasonDepth, which contains multi-traverse outdoor images captured in changing environments. To quantitatively evaluate the accuracy and robustness of monocular depth prediction across dramatically changing environments, we set up two tracks with 7 training slices spanning 12 different environmental conditions, using both the mean and the variance of performance across environments as evaluation metrics. We believe our competition, together with our dataset and benchmark, will help long-term robust perception research flourish in the community.
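To make the mean-and-variance evaluation concrete, the sketch below aggregates a per-environment error metric in the way described above. The environment labels and AbsRel values here are purely illustrative, not the official slices or results:

```python
import statistics

# Hypothetical per-environment AbsRel scores for one method
# (labels and values are illustrative, not official SeasonDepth results).
absrel_by_env = {
    "spring_sunny": 0.142, "spring_cloudy": 0.151,
    "summer_sunny": 0.138, "summer_overcast": 0.149,
    "fall_sunny": 0.145, "fall_cloudy": 0.156,
    "winter_snow": 0.171, "winter_overcast": 0.163,
    "dawn": 0.158, "dusk": 0.160, "rain": 0.175, "night": 0.182,
}

scores = list(absrel_by_env.values())
mean_absrel = statistics.mean(scores)      # average accuracy across environments
var_absrel = statistics.pvariance(scores)  # robustness: lower variance = more stable

print(f"mean={mean_absrel:.4f}, variance={var_absrel:.6f}")
```

A method is thus rewarded not only for low average error but also for consistent performance across all environments, which is exactly what the variance metric captures.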
For this IROS and ICRA 2022 Competition, we host two tracks: one for supervised learning-based methods and one for self-supervised learning-based methods. We also provide high-quality demonstrations as tutorials for several baseline algorithms. Anyone can access the leaderboard of each track, and participants can submit predicted depth maps to compete for the top spot. Unlike the ICRA 2022 Challenge, participants do not need to sign up; instead, they simply send the predicted depth maps as a .zip file to seasondepth@outlook.com, cc'ing hanjianghu@cmu.edu, to submit results for evaluation. All results from the final leaderboard of the ICRA 2022 Challenge will serve as baselines on the IROS 2022 Competition leaderboard.
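As a minimal sketch of the submission step, the helper below bundles predicted depth maps into a single .zip for email submission. The function name and the assumption that predictions are stored as PNG files are ours; the required file format and folder layout are specified by the evaluation toolkit:

```python
import zipfile
from pathlib import Path

def package_predictions(pred_dir: str, out_zip: str) -> int:
    """Bundle predicted depth maps (assumed here to be PNG files) into a
    single .zip for submission. Returns the number of files packed."""
    files = sorted(Path(pred_dir).rglob("*.png"))
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # Store paths relative to pred_dir so the archive mirrors
            # the prediction folder structure.
            zf.write(f, str(f.relative_to(pred_dir)))
    return len(files)
```

Checking the returned count against the number of test images before emailing the archive is a cheap way to catch missing predictions.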
The RGB images and depth ground truth have been released for the training and validation sets of the challenge. For the test set, only the RGB images are released; the corresponding ground truth is withheld and used to evaluate submissions. The training and validation sets contain 7 multi-environment slices of images under 12 different environments, and we reserve one additional slice as the test set. Beyond our released training and validation sets, we place no restrictions on the use of third-party public datasets or pretrained models in the competition. Each participant will be graded on the 6 metrics of the SeasonDepth benchmark over the test set. The evaluation code and instructions can be found in the evaluation toolkit, so participants can evaluate performance themselves before submitting to the challenge website. Note that the grading metrics are scale-invariant, operating on relative depth values, which makes them compatible with both supervised and self-supervised learning-based methods.
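To illustrate why scale-invariant metrics accommodate self-supervised methods, the sketch below aligns a prediction to the ground truth by median scaling before computing AbsRel. This is a common alignment scheme in depth evaluation; the official toolkit's exact implementation may differ:

```python
import numpy as np

def scale_invariant_absrel(pred: np.ndarray, gt: np.ndarray) -> float:
    """AbsRel after median scaling, so only relative depth matters.
    (Illustrative; see the evaluation toolkit for the official metrics.)"""
    mask = gt > 0                    # evaluate only where ground truth is valid
    pred, gt = pred[mask], gt[mask]
    pred = pred * np.median(gt) / np.median(pred)  # align global scale
    return float(np.mean(np.abs(pred - gt) / gt))
```

Because the prediction is rescaled before scoring, a self-supervised model that outputs depth up to an unknown global scale is scored on the same footing as a supervised one with metric depth.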