TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation

19 Apr 2020  ·  Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe, Elisa Ricci ·

Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single source dataset. However, in practical applications we typically have access to multiple sources. In this paper we propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks. Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style (characterized in terms of low-level feature variations) and the content. For this reason we propose to project the image features onto a space where only the dependence on the content is kept, and then re-project this invariant representation onto the pixel space using the target domain and style. In this way, new labeled images can be generated and used to train a final target classifier. We test our approach on common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
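The project-then-re-project idea can be illustrated with a small sketch. Note this is a conceptual toy, not the paper's actual architecture: it uses AdaIN-style per-channel statistics as a stand-in for TriGAN's learned projection onto a style/domain-invariant space, and the function names (`strip_style`, `apply_style`) are hypothetical.

```python
import numpy as np

def strip_style(feat, eps=1e-5):
    # Remove low-level statistics (per-channel mean/std), keeping only the
    # content: a normalization used here as a conceptual stand-in for
    # TriGAN's projection onto a domain/style-invariant space.
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True) + eps
    return (feat - mean) / std

def apply_style(content, target_feat, eps=1e-5):
    # Re-project the invariant representation using the target domain's
    # low-level statistics (its per-channel mean/std).
    t_mean = target_feat.mean(axis=(1, 2), keepdims=True)
    t_std = target_feat.std(axis=(1, 2), keepdims=True) + eps
    return content * t_std + t_mean

# Toy feature maps shaped (channels, height, width).
rng = np.random.default_rng(0)
source = rng.normal(2.0, 3.0, size=(8, 16, 16))   # "source domain" features
target = rng.normal(-1.0, 0.5, size=(8, 16, 16))  # "target domain" features

content = strip_style(source)              # domain/style-invariant part
translated = apply_style(content, target)  # content rendered in target style
```

After translation, `translated` keeps the spatial structure of `source` but matches the per-channel statistics of `target`; in the full method, such generated images (which retain the source labels) are what the final target classifier is trained on.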

Task                                         Dataset           Model   Metric    Value  Global Rank
Multi-Source Unsupervised Domain Adaptation  Digits-five       TriGAN  Accuracy  90.08  #5
Multi-Source Unsupervised Domain Adaptation  Office-Caltech10  TriGAN  Accuracy  97.0   #5
