Domain Adaptation for Structured Output via Discriminative Patch Representations

Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks. However, models trained on one data domain may not generalize well to other domains without annotations for finetuning. To avoid this labor-intensive annotation process, we develop a domain adaptation method that adapts a model trained on source data to an unlabeled target domain. We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of the patch-wise output distribution through the construction of a clustered space. With such representations as guidance, we use an adversarial learning scheme to push the feature representations of target patches in the clustered space closer to the distributions of source patches. In addition, we show that our framework is complementary to existing domain adaptation techniques and achieves consistent improvements on semantic segmentation. We demonstrate extensive ablations and results on numerous benchmark datasets in various settings, such as synthetic-to-real and cross-city scenarios.
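To make the "clustered space" concrete: the modes are discovered by clustering patch-wise label statistics from the annotated source domain. Below is a minimal sketch of that mode-discovery step, assuming source label maps are available as integer class-index arrays; the patch size, cluster count, and helper names (patch_label_histograms, discover_modes) are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

NUM_CLASSES = 19   # e.g., the 19 Cityscapes evaluation classes
PATCH = 64         # patch size in pixels (assumption)
K = 50             # number of modes / clusters (assumption)

def patch_label_histograms(label_map):
    """Split a label map into non-overlapping patches and return one
    class histogram per patch, skipping void/ignore pixels."""
    H, W = label_map.shape
    hists = []
    for y in range(0, H - PATCH + 1, PATCH):
        for x in range(0, W - PATCH + 1, PATCH):
            patch = label_map[y:y + PATCH, x:x + PATCH]
            valid = patch[patch < NUM_CLASSES]      # drop void/ignore labels
            if valid.size == 0:
                continue
            hist = np.bincount(valid, minlength=NUM_CLASSES)
            hists.append(hist / valid.size)
    return np.stack(hists)

def discover_modes(label_maps):
    """Cluster per-patch class histograms; each cluster index then serves
    as a discrete 'mode' label supervising a patch-level classifier."""
    feats = np.concatenate([patch_label_histograms(m) for m in label_maps])
    return KMeans(n_clusters=K, n_init=10).fit(feats)  # .labels_ = mode per patch
```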
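The adversarial alignment can then be sketched as a standard two-player update: a patch-level discriminator tries to tell source from target mode predictions, while the segmentation network learns to fool it on target patches. This PyTorch sketch assumes the networks already emit K-way patch-mode logits; the module names (D, adversarial_step) and the discriminator architecture are placeholders, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 50  # number of patch modes, matching the clustering sketch above

# Patch-level discriminator over K-way mode predictions (source vs. target).
D = nn.Sequential(
    nn.Conv2d(K, 64, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, kernel_size=3, padding=1),
)
bce = nn.BCEWithLogitsLoss()

def adversarial_step(src_mode_logits, tgt_mode_logits, opt_G, opt_D):
    """One update: train D to separate domains, then train the segmentation
    network (through the target mode logits) to fool D."""
    src_p = F.softmax(src_mode_logits, dim=1)
    tgt_p = F.softmax(tgt_mode_logits, dim=1)

    # 1) Discriminator update: source patches -> 1, target patches -> 0.
    opt_D.zero_grad()
    d_src, d_tgt = D(src_p.detach()), D(tgt_p.detach())
    loss_D = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    loss_D.backward()
    opt_D.step()

    # 2) Generator update: push target mode predictions toward the source
    #    distribution by labeling them as "source" for the discriminator.
    opt_G.zero_grad()
    d_tgt = D(tgt_p)
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))
    loss_adv.backward()
    opt_G.step()
    return loss_D.item(), loss_adv.item()
```

In practice this adversarial term would be weighted against the supervised segmentation and patch-classification losses; the weighting here is left to the training loop rather than fixed in the sketch.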

ICCV 2019
Task                           Dataset                    Model                               Metric             Value  Global Rank
Synthetic-to-Real Translation  GTAV-to-Cityscapes Labels  Discriminative Patch                mIoU               46.5   #55
Image-to-Image Translation     SYNTHIA-to-Cityscapes      Discriminative Patch (ResNet-101)   mIoU (13 classes)  46.5   #22
