Unsupervised Domain Adaptation
733 papers with code • 36 benchmarks • 31 datasets
Unsupervised Domain Adaptation is a learning framework for transferring knowledge learned from source domains with abundant annotated training examples to target domains that contain only unlabeled data.
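One classical instance of this framework is feature-distribution alignment. As a hedged illustration (not a method from any paper listed below), the sketch below implements a minimal CORAL-style alignment: source features are whitened and then re-colored with the target covariance, so a classifier fit on the transformed source features better matches the unlabeled target distribution. All names and parameters here are illustrative.

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """Minimal CORAL-style alignment sketch: whiten source features,
    then re-color them with the target covariance. Inputs are
    (n_samples, n_features) arrays; returns transformed source features."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # matrix power via eigendecomposition (m is symmetric PSD)
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.clip(vals, eps, None) ** p) @ vecs.T

    whitened = source @ mat_pow(cs, -0.5)  # remove source correlations
    return whitened @ mat_pow(ct, 0.5)     # impose target correlations

# toy example: correlated source features, isotropic target features
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))
tgt = rng.normal(size=(300, 4))
aligned = coral_align(src, tgt)
```

After the transform, the covariance of `aligned` matches that of `tgt`, which is the whole point of second-order alignment: the downstream classifier never sees the domain gap in second-order statistics.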
Source: Domain-Specific Batch Normalization for Unsupervised Domain Adaptation
Libraries
Use these libraries to find Unsupervised Domain Adaptation models and implementations.
Latest papers
Cooperative Students: Navigating Unsupervised Domain Adaptation in Nighttime Object Detection
Unsupervised Domain Adaptation (UDA) has shown significant advancements in object detection under well-lit conditions; however, its performance degrades notably in low-visibility scenarios, especially at night, posing challenges not only for its adaptability in low signal-to-noise ratio (SNR) conditions but also for the reliability and efficiency of automated vehicles.
Weakly-Supervised Cross-Domain Segmentation of Electron Microscopy with Sparse Point Annotation
To address these issues, we investigate a highly annotation-efficient weak supervision, which assumes only sparse center-points on a small subset of object instances in the target training images.
Learning CNN on ViT: A Hybrid Model to Explicitly Class-specific Boundaries for Domain Adaptation
Compared to conventional DA methods, our ECB achieves superior performance, which verifies its effectiveness in this hybrid model.
CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning
SAVPT features a novel metric Severity that divides all adverse scene images into low-severity and high-severity images.
UADA3D: Unsupervised Adversarial Domain Adaptation for 3D Object Detection with Sparse LiDAR and Large Domain Gaps
In this study, we address a gap in existing unsupervised domain adaptation approaches on LiDAR-based 3D object detection, which have predominantly concentrated on adapting between established, high-density autonomous driving datasets.
Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation
Many prevailing baseline methods rely on constructing intermediate domains via cross-domain mixed sampling to mitigate the performance decline caused by domain gaps.
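Cross-domain mixed sampling of the kind referenced here is often done ClassMix/DACS-style: pixels belonging to a chosen set of classes are pasted from a labeled source image onto a target image, yielding an intermediate-domain training sample. The sketch below is a hypothetical minimal version, not the specific method of the paper above; all function and variable names are assumptions.

```python
import numpy as np

def class_mix(src_img, src_lbl, tgt_img, tgt_pseudo, classes):
    """DACS-style cross-domain mixing sketch: copy source pixels of the
    selected classes onto the target image. Pasted pixels take the
    source ground-truth labels; the rest keep the target pseudo-labels."""
    mask = np.isin(src_lbl, classes)                   # pixels to paste
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl

# toy 4x4 RGB "images" with integer class maps
rng = np.random.default_rng(1)
src_img = rng.random((4, 4, 3))
tgt_img = rng.random((4, 4, 3))
src_lbl = rng.integers(0, 3, (4, 4))
tgt_pseudo = rng.integers(0, 3, (4, 4))
mixed_img, mixed_lbl = class_mix(src_img, src_lbl, tgt_img, tgt_pseudo, classes=[0])
```

Training on such mixed samples exposes the model to images whose statistics lie between the two domains, which is what "constructing intermediate domains" means in practice.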
Confusing Pair Correction Based on Category Prototype for Domain Adaptation under Noisy Environments
In this paper, we address unsupervised domain adaptation under noisy environments, which is more challenging and practical than traditional domain adaptation.
Align and Distill: Unifying and Improving Domain Adaptive Object Detection
We address these problems by introducing: (1) A unified benchmarking and implementation framework, Align and Distill (ALDI), enabling comparison of DAOD methods and supporting future development, (2) A fair and modern training and evaluation protocol for DAOD that addresses benchmarking pitfalls, (3) A new DAOD benchmark dataset, CFC-DAOD, enabling evaluation on diverse real-world data, and (4) A new method, ALDI++, that achieves state-of-the-art results by a large margin.
Uncertainty-Aware Pseudo-Label Filtering for Source-Free Unsupervised Domain Adaptation
Source-free unsupervised domain adaptation (SFUDA) aims to enable the utilization of a pre-trained source model in an unlabeled target domain without access to source data.
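A common building block in this setting is filtering the source model's pseudo-labels on target data by predictive uncertainty before self-training. The sketch below is a generic, hypothetical illustration (entropy thresholding), not the filtering rule of the paper above; the threshold value and `ignore_index` convention are assumptions.

```python
import numpy as np

def filter_pseudo_labels(probs, threshold=0.5, ignore_index=255):
    """Uncertainty-aware pseudo-label filtering sketch: keep the argmax
    class only where normalized predictive entropy is below `threshold`;
    mark the rest with `ignore_index` so they contribute no gradient
    during adaptation. `probs`: (n_samples, n_classes) softmax outputs."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    entropy /= np.log(probs.shape[1])            # normalize to [0, 1]
    labels = probs.argmax(axis=1)
    labels[entropy >= threshold] = ignore_index  # drop uncertain predictions
    return labels

probs = np.array([[0.9, 0.05, 0.05],   # low entropy: kept
                  [0.4, 0.3, 0.3]])    # high entropy: ignored
labels = filter_pseudo_labels(probs, threshold=0.5)
```

Because the source data is inaccessible, the quality of these retained pseudo-labels is the main training signal, which is why the filtering criterion matters so much in SFUDA.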
Visual Foundation Models Boost Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation
Then, another VFM trained on fine-grained 2D masks guides the generation of semantically augmented images and point clouds, which mix data from the source and target domains within view-frustum-shaped regions (FrustumMixing), to enhance the performance of the neural networks.