Domain Adaptation
1989 papers with code • 54 benchmarks • 88 datasets
Domain Adaptation is the task of adapting models across domains. It is motivated by the challenge that training and test datasets often come from different data distributions due to some factor. Domain adaptation aims to build machine learning models that generalize to a target domain while handling the discrepancy between domain distributions.
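As one concrete illustration of handling the discrepancy between domain distributions, here is a minimal sketch of CORAL (CORrelation ALignment), a classic unsupervised domain adaptation baseline that matches the source feature covariance to the target's. The toy data and variable names are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """Align the second-order statistics of source features to the target's.

    source, target: (n_samples, n_features) feature matrices.
    Returns the transformed source features.
    """
    # Regularized covariances (eps keeps them invertible)
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # Matrix power via eigendecomposition (m is symmetric positive definite)
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(vals ** p) @ vecs.T

    # Whiten the source, then re-color it with the target covariance:
    # X_s @ Cs^{-1/2} @ Ct^{1/2}
    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

rng = np.random.default_rng(0)
src = rng.normal(0, 1.0, size=(200, 4))   # source-domain features
tgt = rng.normal(0, 3.0, size=(200, 4))   # target domain with shifted scale
aligned = coral(src, tgt)
```

After the transform, a classifier trained on `aligned` sees source features whose covariance matches the target domain's, which is the whole point of this family of methods.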
(Image credit: Unsupervised Image-to-Image Translation Networks)
Libraries
Use these libraries to find Domain Adaptation models and implementations.
Subtasks
Latest papers with no code
Domain Adaptation for Learned Image Compression with Supervised Adapters
In Learned Image Compression (LIC), a model is trained at encoding and decoding images sampled from a source domain, often outperforming traditional codecs on natural images; yet its performance may be far from optimal on images sampled from different domains.
MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition
The proposed MDDD includes four main modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning.
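The four modules listed above can be mimicked in a small end-to-end sketch. Every stand-in below is an illustrative assumption, not the authors' MDDD implementation: a pooled PCA projection stands in for manifold feature transformation, first-moment matching for dynamic distribution alignment, nearest-centroid for classifier learning, and majority voting over subspace dimensionalities for ensemble learning.

```python
import numpy as np

def nearest_centroid_fit(x, y):
    # Classifier-learning stand-in: one centroid per class
    classes = np.unique(y)
    return classes, np.stack([x[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(x, classes, centroids):
    d = ((x[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

def ensemble_predict(x_src, y_src, x_tgt, dims=(2, 3)):
    # Manifold stand-in: shared low-dim subspace fit on pooled data
    pooled = np.vstack([x_src, x_tgt])
    mean = pooled.mean(axis=0)
    _, _, vt = np.linalg.svd(pooled - mean, full_matrices=False)
    votes = []
    for k in dims:
        proj = vt[:k].T                             # k-dim projection
        s = (x_src - mean) @ proj
        t = (x_tgt - mean) @ proj
        s = s - s.mean(axis=0) + t.mean(axis=0)     # alignment stand-in
        classes, cents = nearest_centroid_fit(s, y_src)
        votes.append(nearest_centroid_predict(t, classes, cents))
    # Ensemble stand-in: majority vote across subspace sizes
    votes = np.stack(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy demo: target = source distribution plus a constant covariate shift
rng = np.random.default_rng(1)
n = 100
y_src = np.concatenate([np.zeros(n, int), np.ones(n, int)])
x_src = np.concatenate([rng.normal(0, 0.5, (n, 4)), rng.normal(5, 0.5, (n, 4))])
x_tgt = np.concatenate([rng.normal(0, 0.5, (n, 4)),
                        rng.normal(5, 0.5, (n, 4))]) + np.array([10.0, 0, 0, 0])
accuracy = (ensemble_predict(x_src, y_src, x_tgt) == y_src).mean()
```

Because the mean-matching step cancels the constant shift in the shared subspace, the source-trained centroids land on the target clusters and the vote recovers the labels.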
Source-free Domain Adaptation for Video Object Detection Under Adverse Image Conditions
When deploying pre-trained video object detectors in real-world scenarios, the domain gap between training and testing data caused by adverse image conditions often leads to performance degradation.
DAWN: Domain-Adaptive Weakly Supervised Nuclei Segmentation via Cross-Task Interactions
However, the current weakly supervised nuclei segmentation approaches typically follow a two-stage pseudo-label generation and network training process.
Domain adaptive pose estimation via multi-level alignment
Specifically, we first utilize image style transfer to ensure that images from the source and target domains have a similar distribution.
Adaptive Prompt Learning with Negative Textual Semantics and Uncertainty Modeling for Universal Multi-Source Domain Adaptation
Universal Multi-source Domain Adaptation (UniMDA) transfers knowledge from multiple labeled source domains to an unlabeled target domain under domain shifts (different data distribution) and class shifts (unknown target classes).
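The class-shift half of this setting (unknown target classes) can be illustrated with a generic open-set heuristic; the data and the distance-threshold rule below are hypothetical stand-ins, not the method proposed in the paper. Target samples far from every known class centroid are labeled -1 ("unknown"); domain shift would additionally move target clusters away from the source centroids.

```python
import numpy as np

def predict_open_set(x, centroids, threshold):
    """Nearest-centroid prediction that rejects distant samples as unknown."""
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
    pred = d.argmin(axis=1)
    pred[d.min(axis=1) > threshold] = -1   # -1 marks an unknown target class
    return pred

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])   # known source classes 0 and 1
x_tgt = np.array([[0.2, -0.1],    # near class 0
                  [4.8, 5.3],     # near class 1
                  [10.0, 0.0]])   # far from both: an unseen target class
preds = predict_open_set(x_tgt, centroids, threshold=3.0)
```

Real UniMDA methods replace this fixed threshold with learned uncertainty estimates, but the rejection structure is the same.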
PARAMANU-GANITA: Language Model with Mathematical Capabilities
In the end, we want to point out that we have trained Paramanu-Ganita on only a part of our entire mathematical corpus and have yet to explore the full potential of our model.
UrbanCross: Enhancing Satellite Image-Text Retrieval with Cross-Domain Adaptation
Urbanization challenges underscore the necessity for effective satellite image-text retrieval methods to swiftly access specific information enriched with geographic semantics for urban applications.
Self-Supervised Monocular Depth Estimation in the Dark: Towards Data Distribution Compensation
In this paper, we propose a self-supervised nighttime monocular depth estimation method that does not use any night images during training.
MM-PhyRLHF: Reinforcement Learning Framework for Multimodal Physics Question-Answering
We employ the LLaVA open-source model to answer multimodal physics MCQs and compare the performance with and without using RLHF.