Unsupervised Domain Adaptation is a learning framework for transferring knowledge from source domains with many annotated training examples to target domains containing only unlabeled data.
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation.
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
Ranked #1 on Domain Adaptation on UCF-to-Olympic
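The snippet above describes the domain-adversarial setup, whose core trick is a gradient reversal layer between the feature extractor and a domain classifier. A minimal sketch of that idea follows; the function names and the `lam` scaling factor are illustrative assumptions, not taken from any specific library:

```python
# Sketch of a gradient reversal layer (GRL): the forward pass is the identity,
# but the backward pass multiplies incoming gradients by -lambda, so the
# feature extractor is updated to *maximize* the domain classifier's loss,
# making features indistinguishable across domains.

def grl_forward(x):
    """Identity in the forward direction: features pass through unchanged."""
    return x

def grl_backward(grad, lam=1.0):
    """Reverse (and scale) the gradient flowing back to the features."""
    return [-lam * g for g in grad]

features = [0.5, -1.2, 3.0]
assert grl_forward(features) == features   # forward pass is a no-op
reversed_grad = grl_backward([0.5, -0.5], lam=2.0)  # [-1.0, 1.0]
```

In a full implementation this sits between the shared encoder and the domain classifier, while the label classifier receives un-reversed gradients.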
To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries.
Ranked #3 on Domain Adaptation on SYNSIG-to-GTSRB
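Aligning distributions via task-specific decision boundaries, as in the snippet above, typically uses two classifiers whose disagreement on target samples defines a discrepancy loss. A hedged sketch of that measure (names are illustrative):

```python
# Hypothetical sketch of the classifier-discrepancy loss: two task classifiers
# F1 and F2 are trained to disagree on target samples, then the feature
# extractor is trained to minimize that disagreement, pushing target features
# away from the decision boundaries.

def discrepancy(p1, p2):
    """Mean absolute difference between the two classifiers'
    class-probability vectors (a common discrepancy measure)."""
    assert len(p1) == len(p2)
    return sum(abs(a - b) for a, b in zip(p1, p2)) / len(p1)

identical = discrepancy([0.7, 0.3], [0.7, 0.3])   # 0.0: classifiers agree
conflict = discrepancy([1.0, 0.0], [0.0, 1.0])    # maximal disagreement
```

Training alternates between maximizing this quantity with respect to the classifiers and minimizing it with respect to the feature extractor.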
In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces.
Ranked #8 on Domain Adaptation on Office-Caltech
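Aligning the original feature distributions, rather than subspace bases, can be illustrated by simple moment matching. The sketch below is a deliberately simplified per-dimension version (matching only mean and standard deviation); full methods align the complete covariance structure, and all names here are assumptions:

```python
# Minimal sketch of distribution alignment in the original feature space:
# whiten each source feature dimension, then re-color it with the target
# domain's statistics, so source and target share first and second moments.

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def align(source, target, eps=1e-8):
    """Shift/scale source features to match target mean and std."""
    ms, ss = mean(source), std(source)
    mt, st = mean(target), std(target)
    return [(x - ms) / (ss + eps) * st + mt for x in source]

src = [1.0, 2.0, 3.0, 4.0]
tgt = [10.0, 12.0, 14.0, 16.0]
aligned = align(src, tgt)
# aligned now has (approximately) the target's mean and standard deviation
```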
To achieve this goal, an exemplar memory is introduced to store features of the target domain and accommodate the three invariance properties.
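An exemplar memory of the kind described above keeps one slot per target image and refreshes it with a moving average of that image's latest feature. A minimal sketch, assuming a momentum-style update (the class name and momentum value are illustrative):

```python
# Hypothetical sketch of an exemplar memory for the target domain: one slot
# per target image, updated with a momentum moving average of its latest
# feature, so the memory can later be queried for similarity-based
# invariance losses.

class ExemplarMemory:
    def __init__(self, num_slots, dim, momentum=0.5):
        self.slots = [[0.0] * dim for _ in range(num_slots)]
        self.momentum = momentum

    def update(self, index, feature):
        """Momentum update: slot <- m * slot + (1 - m) * feature."""
        m = self.momentum
        self.slots[index] = [m * s + (1 - m) * f
                             for s, f in zip(self.slots[index], feature)]

    def similarity(self, index, feature):
        """Dot-product similarity between a query feature and one slot."""
        return sum(s * f for s, f in zip(self.slots[index], feature))

mem = ExemplarMemory(num_slots=3, dim=2)
mem.update(0, [1.0, 0.0])   # slot 0 becomes [0.5, 0.0]
mem.update(0, [1.0, 0.0])   # slot 0 becomes [0.75, 0.0]
```

Similarity scores against the stored slots then serve as soft targets encoding the invariance properties.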
To mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching (MMT), which softly refines the pseudo labels in the target domain and learns better target-domain features via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner.
Ranked #1 on Unsupervised Person Re-Identification on DukeMTMC-reID->Market-1501 (mAP metric)
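The "mean teacher" ingredient of mutual mean-teaching keeps a temporally averaged copy of each network whose soft predictions supervise its peer. Only the exponential-moving-average (EMA) update is sketched here; the function name and the `alpha` value are illustrative assumptions:

```python
# Minimal sketch of the mean-teacher parameter update: the teacher's weights
# are an exponential moving average of the student's, which smooths out the
# noise in on-line pseudo labels.

def ema_update(teacher, student, alpha=0.999):
    """teacher <- alpha * teacher + (1 - alpha) * student, per parameter."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, alpha=0.9)
# teacher moves a small step toward the student (roughly [0.1, 0.2])
```

In MMT, two such student-teacher pairs refine each other's pseudo labels, which is what makes the scheme "mutual".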