Pair-based Self-Distillation for Semi-supervised Domain Adaptation

1 Jan 2021  ·  Jeongbeen Yoon, Dahyun Kang, Minsu Cho

Semi-supervised domain adaptation (SSDA) aims to adapt a learner to a new domain given only a small set of labeled target samples alongside a large labeled source dataset. In this paper, we propose a pair-based SSDA method that adapts a learner to the target domain via self-distillation over sample pairs. Our method forms each pair by selecting a teacher sample from a labeled dataset (i.e., source or labeled target) and a student sample from the unlabeled target dataset, and then minimizes the output discrepancy between the two. We assign a reliable student to each teacher using pseudo-labeling and a reliability evaluation, so that the teacher sample propagates its prediction to the corresponding student sample. When the teacher is drawn from the source dataset, the pair reduces the discrepancy between the source and target domains; when the teacher is drawn from the labeled target dataset, the pair reduces the discrepancy within the target domain. Experimental evaluation on standard benchmarks shows that our method effectively minimizes both inter-domain and intra-domain discrepancies, achieving state-of-the-art results.
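
The pair-and-distill loss described in the abstract can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: it assumes a classifier `model`, teacher/student batches pre-arranged so that matching indices share a class, a max-softmax confidence score as the reliability test, and a KL divergence as the output-discrepancy measure. The function name and threshold are hypothetical.

```python
import torch
import torch.nn.functional as F

def pair_self_distillation_loss(model, x_teacher, y_teacher, x_student,
                                confidence_threshold=0.9):
    """Distill the prediction of a labeled teacher sample into an
    unlabeled target (student) sample that shares its pseudo-class.

    All names and criteria here are illustrative assumptions; the
    paper's exact pairing and reliability rules may differ.
    """
    # Teacher predictions come from labeled data (source or labeled target)
    # and are treated as fixed targets, hence no gradient.
    with torch.no_grad():
        p_teacher = F.softmax(model(x_teacher), dim=1)

    # Pseudo-label the unlabeled target samples; use the maximum softmax
    # probability as a reliability score.
    logits_student = model(x_student)
    with torch.no_grad():
        p_student = F.softmax(logits_student, dim=1)
        confidence, pseudo_label = p_student.max(dim=1)
    reliable = confidence >= confidence_threshold

    # Keep only pairs where the student's pseudo-label is reliable and
    # agrees with the teacher's ground-truth label (index i of the teacher
    # batch is assumed to correspond to index i of the student batch).
    match = (pseudo_label == y_teacher) & reliable
    if match.sum() == 0:
        return logits_student.new_zeros(())

    # Minimize the output discrepancy between the teacher's and the
    # student's predictive distributions.
    log_p_student = F.log_softmax(logits_student, dim=1)
    return F.kl_div(log_p_student[match], p_teacher[match],
                    reduction="batchmean")
```

With teacher batches drawn from the source dataset this term pulls target predictions toward source predictions (inter-domain), and with teacher batches drawn from the labeled target set it aligns predictions within the target domain (intra-domain), matching the two cases described above.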
