Domain Adaptation

1983 papers with code • 54 benchmarks • 88 datasets

Domain Adaptation is the task of adapting models across domains. It is motivated by the challenge that training and test datasets often come from different data distributions. Domain adaptation aims to build machine learning models that generalize to a target domain while handling the discrepancy between domain distributions.
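
As a rough illustration, the sketch below shows the usual unsupervised setting in PyTorch: labelled source batches and unlabelled target batches pass through a shared feature extractor, and a task loss is combined with a domain-alignment penalty. The network, the squared mean-feature penalty, and all dimensions are illustrative placeholders rather than any particular method listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal unsupervised domain adaptation sketch: labelled source data,
# unlabelled target data, a shared feature extractor, and a task loss plus
# a domain-alignment penalty (here a crude mean-feature distance; real
# methods use MMD, adversarial critics, Wasserstein critics, etc.).

class Net(nn.Module):
    def __init__(self, in_dim=256, feat_dim=64, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        f = self.features(x)
        return f, self.classifier(f)

net = Net()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(x_src, y_src, x_tgt, lam=0.1):
    f_src, logits_src = net(x_src)   # labelled source batch
    f_tgt, _ = net(x_tgt)            # unlabelled target batch
    task_loss = F.cross_entropy(logits_src, y_src)
    align_loss = (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()  # crude alignment term
    loss = task_loss + lam * align_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy usage with random tensors
x_s, y_s = torch.randn(32, 256), torch.randint(0, 10, (32,))
x_t = torch.randn(32, 256)
train_step(x_s, y_s, x_t)
```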

(Image credit: Unsupervised Image-to-Image Translation Networks)

Most implemented papers

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

jetpacapp/DeepBeliefSDK 6 Oct 2013

We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.
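
A rough PyTorch approximation of this recipe (the original DeCAF pipeline used a Caffe AlexNet; the layer choice and torchvision calls here are assumptions) extracts penultimate-layer activations from an ImageNet-pretrained network and reuses them as off-the-shelf features for a new task:

```python
import torch
from torchvision import models

# DeCAF-style feature reuse sketch: take activations from a network trained
# on ImageNet and feed them to a simple classifier for a novel task.
weights = models.AlexNet_Weights.DEFAULT          # assumes torchvision >= 0.13
backbone = models.alexnet(weights=weights).eval()

@torch.no_grad()
def decaf_features(images):
    """Return penultimate-layer activations for a batch of images
    (real images should first go through weights.transforms())."""
    x = backbone.features(images)
    x = backbone.avgpool(x).flatten(1)
    # run all classifier layers except the final ImageNet logits layer
    for layer in list(backbone.classifier)[:-1]:
        x = layer(x)
    return x  # shape: (batch, 4096)

# these features can then be used to train e.g. a linear SVM on the new task
feats = decaf_features(torch.randn(4, 3, 224, 224))
print(feats.shape)
```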

Unsupervised Image-to-Image Translation Networks

mingyuliutw/UNIT NeurIPS 2017

Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
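
A toy sketch of the shared-latent-code idea (linear modules stand in for the paper's VAE-GAN encoders and generators, so this is only a schematic): each domain has an encoder and a generator, the encoders map into a common latent space, and translation becomes encode-in-one-domain, decode-in-the-other.

```python
import torch
import torch.nn as nn

# Shared latent space between two domains; all shapes are illustrative.
latent_dim = 128
enc_a, enc_b = nn.Linear(784, latent_dim), nn.Linear(784, latent_dim)
gen_a, gen_b = nn.Linear(latent_dim, 784), nn.Linear(latent_dim, 784)

def translate_a_to_b(x_a):
    z = enc_a(x_a)    # shared latent code
    return gen_b(z)   # rendered in domain B

x_a = torch.randn(8, 784)
x_ab = translate_a_to_b(x_a)   # unpaired translation A -> B
```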

Wasserstein Distance Guided Representation Learning for Domain Adaptation

RockySJ/WDGRL 5 Jul 2017

Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL).
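
The core term can be sketched as follows (a simplified reading that omits the gradient penalty and the alternating update schedule described in the paper): a critic scores source and target features, the difference of its mean scores estimates the Wasserstein distance, the critic maximizes that estimate and the feature extractor minimizes it.

```python
import torch
import torch.nn as nn

# Condensed WDGRL-style objective; critic architecture and dimensions are illustrative.
feat_dim = 64
critic = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

def wasserstein_estimate(f_src, f_tgt):
    return critic(f_src).mean() - critic(f_tgt).mean()

f_src, f_tgt = torch.randn(16, feat_dim), torch.randn(16, feat_dim)
critic_loss = -wasserstein_estimate(f_src, f_tgt)   # critic maximizes the estimate
feature_loss = wasserstein_estimate(f_src, f_tgt)   # feature extractor minimizes it
```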

Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

mil-tokyo/MCD_DA CVPR 2018

To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries.
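
A short sketch of the discrepancy term (modules and dimensions are illustrative): two task classifiers sit on top of a shared feature extractor, and their disagreement on target samples is measured as the L1 distance between softmax outputs; the classifiers are trained to maximize this quantity and the feature extractor to minimize it in alternating steps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Classifier-discrepancy term: disagreement between two classifiers on target features.
n_classes, feat_dim = 10, 64
clf1, clf2 = nn.Linear(feat_dim, n_classes), nn.Linear(feat_dim, n_classes)

def discrepancy(f_tgt):
    p1 = F.softmax(clf1(f_tgt), dim=1)
    p2 = F.softmax(clf2(f_tgt), dim=1)
    return (p1 - p2).abs().mean()

f_tgt = torch.randn(16, feat_dim)   # target-domain features from a shared extractor
d = discrepancy(f_tgt)
```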

Domain Adaptive Faster R-CNN for Object Detection in the Wild

yuhuayc/da-faster-rcnn CVPR 2018

The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.

Domain Adaptation for Structured Output via Discriminative Patch Representations

wasidennis/AdaptSegNet ICCV 2019

Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks.

Learning Generalisable Omni-Scale Representations for Person Re-Identification

KaiyangZhou/deep-person-reid 15 Oct 2019

An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation.

Deep Domain Confusion: Maximizing for Domain Invariance

erlendd/ddan 10 Dec 2014

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.
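
The domain-confusion penalty in this line of work is a maximum mean discrepancy between source and target features, added to the classification loss. Below is a minimal sketch with a single RBF kernel; the kernel choice and dimensions are assumptions, not the paper's exact adaptation-layer setup.

```python
import torch

# MMD between source and target feature batches with one RBF kernel.
def mmd_rbf(f_src, f_tgt, sigma=1.0):
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (kernel(f_src, f_src).mean()
            + kernel(f_tgt, f_tgt).mean()
            - 2 * kernel(f_src, f_tgt).mean())

f_src, f_tgt = torch.randn(32, 64), torch.randn(32, 64)
mmd_loss = mmd_rbf(f_src, f_tgt)   # added to the source classification loss
```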

Deep Hashing Network for Unsupervised Domain Adaptation

hemanthdv/da-hash CVPR 2017

Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain.

Diverse Image-to-Image Translation via Disentangled Representations

HsinYingLee/DRIT ECCV 2018

Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time.
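
A toy sketch of that decoding step (module shapes are illustrative and not the DRIT architecture): the same content code is combined with several attribute vectors sampled from a prior, producing diverse outputs for one input.

```python
import torch
import torch.nn as nn

# Disentangled content/attribute decoding: one content code, many sampled attributes.
content_dim, attr_dim, img_dim = 64, 8, 784
content_enc = nn.Linear(img_dim, content_dim)
decoder = nn.Linear(content_dim + attr_dim, img_dim)

x = torch.randn(1, img_dim)
c = content_enc(x)   # content code extracted from the input
outputs = [decoder(torch.cat([c, torch.randn(1, attr_dim)], dim=1))
           for _ in range(5)]   # five diverse outputs at test time
```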