Confidence Regularized Self-Training

Recent advances in domain adaptation show that deep self-training is a powerful means for unsupervised domain adaptation. These methods typically iterate between predicting on the target domain and taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can place overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address this problem, we propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels, while CRST-MR encourages smoothness of the network output. Extensive experiments on image classification and semantic segmentation show that CRST outperforms its non-regularized counterpart and achieves state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
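
To make the alternating scheme concrete, below is a minimal PyTorch-style sketch of one self-training round in the spirit of the model-regularized (MRKLD-like) variant: confident target-domain predictions are taken as hard pseudo-labels, and retraining uses cross-entropy plus a KL-style regularizer that penalizes overconfident outputs. The function names, the confidence threshold, and the weight `alpha` are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def generate_pseudo_labels(model, target_images, threshold=0.9):
    # Step (a): predict on target-domain images and keep only confident
    # predictions as hard pseudo-labels; uncertain samples are ignored (-1).
    # The threshold value is an illustrative choice, not the paper's setting.
    with torch.no_grad():
        probs = F.softmax(model(target_images), dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = -1
    return labels

def crst_mr_loss(logits, pseudo_labels, alpha=0.1):
    # Step (b): self-training cross-entropy on confident pseudo-labels plus a
    # KL-style model regularizer that pulls the output distribution toward
    # uniform, discouraging overconfident (peaked) predictions.
    ce = F.cross_entropy(logits, pseudo_labels, ignore_index=-1)
    log_probs = F.log_softmax(logits, dim=1)
    # KL(uniform || p) up to a constant: -(1/C) * sum_c log p_c per sample.
    reg_per_sample = -log_probs.mean(dim=1)
    mask = (pseudo_labels != -1).float()
    reg = (reg_per_sample * mask).sum() / mask.sum().clamp(min=1.0)
    return ce + alpha * reg
```

In each round, the pseudo-label step and the retraining step are alternated, matching the joint optimization over network weights and (latent) pseudo-labels described above.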

ICCV 2019

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | DensePASS | CRST | mIoU | 31.67% | #24 |
| Synthetic-to-Real Translation | GTAV-to-Cityscapes Labels | CRST (MRKLD-SP-MST) | mIoU | 49.8 | #48 |
| Domain Adaptation | Office-31 | MRKLD + LRENT | Average Accuracy | 86.8 | #22 |
| Image-to-Image Translation | SYNTHIA-to-Cityscapes | LRENT (DeepLabv2) | mIoU (13 classes) | 48.7 | #18 |
| Domain Adaptation | VisDA2017 | CRST | Accuracy | 78.1 | #17 |
| Domain Adaptation | VisDA2017 | MRKLD + LRENT | Accuracy | 78.1 | #17 |