Domain Adaptation

1999 papers with code • 54 benchmarks • 88 datasets

Domain Adaptation is the task of adapting models across domains. It is motivated by the challenge that the training and test datasets often come from different data distributions. Domain adaptation aims to build machine learning models that generalize to a target domain while handling the discrepancy between the domain distributions.
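As a toy illustration of the problem statement above (an assumed setup, not taken from any paper on this page), the numpy sketch below fits a nearest-centroid classifier on a source domain, shows its accuracy dropping under a covariate shift in the target domain, and recovers most of it with a simple mean-alignment step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: two Gaussian classes in 2-D.
Xs = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
ys = np.array([0] * 200 + [1] * 200)

# Target domain: the same classes under a covariate shift (+2 on every feature).
Xt = Xs + 2.0
yt = ys

# Nearest-centroid classifier, fit on the source domain only.
centroids = np.array([Xs[ys == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc_source = (predict(Xs) == ys).mean()   # high: test matches training distribution
acc_target = (predict(Xt) == yt).mean()   # degraded: distributions differ

# A minimal adaptation step: align the target feature mean with the source mean.
Xt_aligned = Xt - (Xt.mean(axis=0) - Xs.mean(axis=0))
acc_aligned = (predict(Xt_aligned) == yt).mean()
```

Real domain adaptation methods align far richer statistics (covariances, deep features, adversarial objectives), but the shape of the problem is the same: the classifier itself is unchanged; only the mismatch between distributions is addressed.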


Latest papers with no code

M3BAT: Unsupervised Domain Adaptation for Multimodal Mobile Sensing with Multi-Branch Adversarial Training

no code yet • 26 Apr 2024

Moreover, we propose M3BAT, a novel improvement over DANN for unsupervised domain adaptation in multimodal mobile sensing, which uses multi-branch adversarial training to account for the multimodality of sensor data during adaptation.

Federated Transfer Component Analysis Towards Effective VNF Profiling

no code yet • 26 Apr 2024

The increasing concerns of knowledge transfer and data privacy challenge the traditional gather-and-analyse paradigm in networks.

360SFUDA++: Towards Source-free UDA for Panoramic Segmentation by Learning Reliable Category Prototypes

no code yet • 25 Apr 2024

However, as the distinct projections make it difficult to directly transfer knowledge between domains, we then propose the Reliable Panoramic Prototype Adaptation Module (RP2AM) to transfer knowledge at both the prediction and prototype levels.

Style Adaptation for Domain-adaptive Semantic Segmentation

no code yet • 25 Apr 2024

Unsupervised Domain Adaptation (UDA) refers to methods that use annotated source-domain data and unlabeled target-domain data to train a model that generalizes to the target domain.
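One common UDA recipe matching that definition is self-training with pseudo-labels: fit on the labeled source data, pseudo-label the unlabeled target data with the source model, then refit on both. The sketch below is a generic, hedged illustration of that recipe on 1-D Gaussian data (not the method of the paper above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Labeled source: two 1-D Gaussian classes. Unlabeled target: same classes shifted by +1.5.
Xs = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])
ys = np.array([0] * 300 + [1] * 300)
Xt = np.concatenate([rng.normal(-0.5, 1, 300), rng.normal(3.5, 1, 300)])
yt = np.array([0] * 300 + [1] * 300)  # held out, used only for evaluation

# Threshold classifier: decision boundary at the midpoint between class means.
def fit(X, y):
    return (X[y == 0].mean() + X[y == 1].mean()) / 2

def predict(X, thr):
    return (X > thr).astype(int)

thr_source = fit(Xs, ys)  # source-only boundary, near 0
acc_source_only = (predict(Xt, thr_source) == yt).mean()

# Self-training step: pseudo-label the target, refit on source + pseudo-labeled target.
pseudo = predict(Xt, thr_source)
thr_adapted = fit(np.concatenate([Xs, Xt]), np.concatenate([ys, pseudo]))
acc_adapted = (predict(Xt, thr_adapted) == yt).mean()
```

The pseudo-labels are noisy, but they drag the decision boundary toward the shifted target classes, improving target accuracy without using any target labels. Practical UDA methods add confidence thresholds, iteration, and stronger feature alignment on top of this basic loop.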

Domain Adaptation for Learned Image Compression with Supervised Adapters

no code yet • 24 Apr 2024

In Learned Image Compression (LIC), a model is trained to encode and decode images sampled from a source domain, often outperforming traditional codecs on natural images; yet its performance may be far from optimal on images sampled from different domains.

MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition

no code yet • 24 Apr 2024

The proposed MDDD includes four main modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning.

The Over-Certainty Phenomenon in Modern UDA Algorithms

no code yet • 24 Apr 2024

When neural networks are confronted with unfamiliar data that deviate from their training set, this signifies a domain shift.

Source-free Domain Adaptation for Video Object Detection Under Adverse Image Conditions

no code yet • 23 Apr 2024

When deploying pre-trained video object detectors in real-world scenarios, the domain gap between training and testing data caused by adverse image conditions often leads to performance degradation.

DAWN: Domain-Adaptive Weakly Supervised Nuclei Segmentation via Cross-Task Interactions

no code yet • 23 Apr 2024

However, the current weakly supervised nuclei segmentation approaches typically follow a two-stage pseudo-label generation and network training process.

Adaptive Prompt Learning with Negative Textual Semantics and Uncertainty Modeling for Universal Multi-Source Domain Adaptation

no code yet • 23 Apr 2024

Universal Multi-source Domain Adaptation (UniMDA) transfers knowledge from multiple labeled source domains to an unlabeled target domain under domain shifts (different data distribution) and class shifts (unknown target classes).