Visual Domain Adaptation with Manifold Embedded Distribution Alignment

19 Jul 2018  ·  Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S. Yu

Visual domain adaptation aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Existing methods either attempt to align the cross-domain distributions or perform manifold subspace learning. However, there are two significant challenges: (1) degenerated feature transformation, meaning that distribution alignment is often performed in the original feature space, where feature distortions are hard to overcome, while subspace learning alone is not sufficient to reduce the distribution divergence; and (2) unevaluated distribution alignment, meaning that existing methods align the marginal and conditional distributions with equal importance and fail to account for the different importance of these two distributions in real applications. In this paper, we propose a Manifold Embedded Distribution Alignment (MEDA) approach to address these challenges. MEDA learns a domain-invariant classifier on the Grassmann manifold with structural risk minimization, while performing dynamic distribution alignment to quantitatively account for the relative importance of the marginal and conditional distributions. To the best of our knowledge, MEDA is the first attempt to perform dynamic distribution alignment for manifold domain adaptation. Extensive experiments demonstrate that MEDA achieves significant improvements in classification accuracy over state-of-the-art traditional and deep methods.
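
Since the abstract describes dynamic distribution alignment only at a high level, the snippet below is a minimal sketch of the weighting idea: an adaptive factor mu in [0, 1] trades off the marginal discrepancy against the class-conditional discrepancy (estimated with target pseudo-labels). A linear-kernel MMD is used purely for illustration; the function names `mmd` and `dynamic_alignment_distance`, the pseudo-label handling, and the kernel choice are assumptions of this sketch, not the authors' implementation.

```python
# Sketch of dynamic distribution alignment weighting (illustrative, not the MEDA code).
import numpy as np

def mmd(Xs, Xt):
    """Squared Maximum Mean Discrepancy between two samples (linear kernel)."""
    return np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2)

def dynamic_alignment_distance(Xs, ys, Xt, yt_pseudo, mu):
    """(1 - mu) * marginal MMD + mu * mean class-conditional MMD.

    mu -> 0 emphasizes the marginal distributions;
    mu -> 1 emphasizes the class-conditional distributions
    (target conditionals are estimated from pseudo-labels).
    """
    marginal = mmd(Xs, Xt)
    classes = np.unique(ys)
    conditional = np.mean([
        mmd(Xs[ys == c], Xt[yt_pseudo == c])
        for c in classes
        if np.any(yt_pseudo == c)  # skip classes absent from the pseudo-labels
    ])
    return (1 - mu) * marginal + mu * conditional

# Example usage with random features standing in for manifold-projected data.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 20)), rng.integers(0, 3, 100)
Xt, yt_pseudo = rng.normal(0.5, 1.0, size=(80, 20)), rng.integers(0, 3, 80)
print(dynamic_alignment_distance(Xs, ys, Xt, yt_pseudo, mu=0.5))
```

In the paper's formulation, mu is estimated from the data (rather than hand-tuned) so that the alignment adapts to how much the marginal versus conditional distributions actually diverge across domains.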

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Domain Adaptation | Office-Caltech | MEDA (Wang et al., 2018) | Average Accuracy | 92.8 | #2
Domain Adaptation | Office-Caltech-10 | MEDA | Accuracy (%) | 92.8 | #1
Transfer Learning | Office-Home | MEDA | Accuracy | 60.3 | #5
