Unsupervised Domain Adaptation
733 papers with code • 36 benchmarks • 31 datasets
Unsupervised Domain Adaptation is a learning framework for transferring knowledge from source domains with many annotated training examples to target domains containing only unlabeled data.
Source: Domain-Specific Batch Normalization for Unsupervised Domain Adaptation
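One common family of approaches to this setting is self-training: fit a model on the labeled source data, pseudo-label the unlabeled target data, and refit. The sketch below is a toy illustration of that idea using nearest-centroid classification in numpy; the function name and the centroid-based classifier are illustrative choices, not a method from any of the papers listed here.

```python
import numpy as np

def uda_pseudo_label(Xs, ys, Xt, n_classes, rounds=3):
    """Toy self-training sketch for unsupervised domain adaptation:
    fit class centroids on labeled source data, then repeatedly
    pseudo-label the unlabeled target data and refit the centroids
    on the union of source labels and target pseudo-labels."""
    # Class centroids from the labeled source domain only.
    centroids = np.stack([Xs[ys == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(rounds):
        # Pseudo-label each target point by its nearest centroid.
        dists = ((Xt[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        yt_hat = dists.argmin(axis=1)
        # Refit centroids on source labels plus target pseudo-labels.
        X = np.concatenate([Xs, Xt])
        y = np.concatenate([ys, yt_hat])
        centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    return yt_hat
```

Real methods replace the centroid classifier with a deep network and add confidence thresholds or alignment losses, but the labeled-source/unlabeled-target structure is the same.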
Libraries
Use these libraries to find Unsupervised Domain Adaptation models and implementations.
Most implemented papers
Rescaling Egocentric Vision
This paper introduces a pipeline for extending EPIC-KITCHENS, the largest dataset in egocentric vision.
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
Domain Separation Networks
However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain.
Temporal Attentive Alignment for Large-Scale Video Domain Adaptation
Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9% accuracy gain over "Source only" from 73.9% to 81.8% on "HMDB --> UCF", and 10.3% gain on "Kinetics --> Gameplay").
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one.
GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval
This limits the usage of dense retrieval approaches to only a few domains with large training datasets.
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder.
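Instead of discarding the other fine-tuned models, the model-soup idea averages their weights into a single model, so inference cost stays that of one model. A minimal sketch of the uniform soup, using dicts of numpy arrays as stand-ins for framework checkpoints (the function name is illustrative):

```python
import numpy as np

def uniform_soup(checkpoints):
    """Uniform 'model soup': element-wise average of the weights of
    several fine-tuned models sharing one architecture. `checkpoints`
    is a list of dicts mapping parameter names to arrays."""
    names = checkpoints[0].keys()
    return {n: np.mean([ckpt[n] for ckpt in checkpoints], axis=0)
            for n in names}
```

The paper also proposes a "greedy soup" variant that adds each checkpoint to the average only if held-out validation accuracy improves.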
Correlation Alignment for Unsupervised Domain Adaptation
In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces.
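The core CORAL transform aligns second-order statistics: whiten the source features with the source covariance, then re-color them with the target covariance. A numpy sketch (the small `eps` ridge for numerical stability is an implementation assumption, not part of the paper's notation):

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """CORAL sketch: map source features so their covariance matches
    the target covariance (whiten with C_s, re-color with C_t)."""
    d = source.shape[1]
    Cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def sqrtm(C, inv=False):
        # Symmetric matrix square root via eigendecomposition (C is SPD).
        w, V = np.linalg.eigh(C)
        p = -0.5 if inv else 0.5
        return (V * w ** p) @ V.T

    # source @ C_s^{-1/2} @ C_t^{1/2}
    return source @ sqrtm(Cs, inv=True) @ sqrtm(Ct)
```

After the transform, the covariance of the mapped source features matches that of the target features, without any dimensionality reduction.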
A DIRT-T Approach to Unsupervised Domain Adaptation
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable.
DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation
In computer vision, one is often confronted with domain shift, which occurs when a classifier trained on a source dataset is applied to target data that shares similar characteristics (e.g. the same classes) but differs in latent structure (e.g. different acquisition conditions).