Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation

Training deep networks for semantic segmentation requires large amounts of labeled training data, which presents a major challenge in practice, as labeling segmentation masks is a highly labor-intensive process. To address this issue, we present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences. In particular, we propose three key contributions: (1) We transfer knowledge from features learned during self-supervised depth estimation to semantic segmentation, (2) we implement a strong data augmentation by blending images and labels using the geometry of the scene, and (3) we utilize the depth feature diversity as well as the level of difficulty of learning depth in a student-teacher framework to select the most useful samples to be annotated for semantic segmentation. We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains, and we achieve state-of-the-art results for semi-supervised semantic segmentation. The implementation is available at https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth.
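To make contribution (2) more concrete, here is a minimal PyTorch sketch of a depth-guided blending augmentation: pixels of one training image are pasted onto another only where the first scene is estimated to be closer to the camera, so the composite respects occlusion ordering. The function name, tensor shapes, and the exact mixing rule are illustrative assumptions rather than the paper's reference implementation; see the linked repository for the authors' code.

```python
import torch

def depth_guided_mix(img_a, img_b, label_a, label_b, depth_a, depth_b):
    """Blend two images and their segmentation labels using estimated depth.

    Pixels from scene A are pasted onto scene B only where A is closer to
    the camera than B, so near objects are never occluded by far ones.
    Names, shapes, and this exact rule are assumptions for illustration.

    img_*:   (3, H, W) float tensors
    label_*: (H, W) long tensors with class indices
    depth_*: (H, W) float tensors with estimated depth (larger = farther)
    """
    # Foreground mask: pixels where scene A is nearer than scene B.
    mask = depth_a < depth_b                          # (H, W) bool

    # Copy image and label pixels jointly so they stay aligned.
    mixed_img = torch.where(mask.unsqueeze(0), img_a, img_b)
    mixed_label = torch.where(mask, label_a, label_b)
    return mixed_img, mixed_label
```

Compared with geometry-agnostic copy-paste mixing, gating the blend on relative depth avoids implausible composites in which distant structures cover nearby objects, which is the intuition behind using scene geometry in the augmentation.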

CVPR 2021

Datasets

Cityscapes

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Semi-Supervised Semantic Segmentation | Cityscapes (100 samples labeled) | SegSDE (MTL decoder with ResNet101, ImageNet pretrained, unlabeled image sequences) | Validation mIoU | 62.09% | #4 |
| Semi-Supervised Semantic Segmentation | Cityscapes (12.5% labeled) | SegSDE (MTL decoder with ResNet101, ImageNet pretrained, unlabeled image sequences) | Validation mIoU | 68.01% | #17 |
| Semi-Supervised Semantic Segmentation | Cityscapes (25% labeled) | SegSDE (MTL decoder with ResNet101, ImageNet pretrained, unlabeled image sequences) | Validation mIoU | 69.38% | #15 |

Methods


No methods listed for this paper.