Domain Generalization
633 papers with code • 19 benchmarks • 25 datasets
The idea of Domain Generalization is to learn from one or multiple training domains and extract a domain-agnostic model that can be applied to an unseen domain.
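As a minimal sketch of this setting (the synthetic domains and the plain ERM baseline below are illustrative assumptions, not any specific method from the papers listed here), a model can be trained on several labeled source domains and then evaluated on a held-out domain it never saw:

```python
# Minimal sketch of the domain generalization setup: train on several labeled
# source domains, evaluate on an unseen target domain. The synthetic domains
# are placeholders; plug in real splits such as PACS or OfficeHome.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_domain(n=256, dim=32, n_classes=4, shift=0.0):
    # Synthetic stand-in for one domain: same label space, shifted features.
    x = torch.randn(n, dim) + shift
    y = torch.randint(0, n_classes, (n,))
    return TensorDataset(x, y)

source_domains = [make_domain(shift=s) for s in (0.0, 0.5, 1.0)]  # seen during training
target_domain = make_domain(shift=2.0)                            # unseen at training time

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

train_loader = DataLoader(ConcatDataset(source_domains), batch_size=64, shuffle=True)
for epoch in range(5):  # simple ERM over the pooled source domains
    for x, y in train_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Evaluate on the unseen domain; the gap to source accuracy measures generalization.
with torch.no_grad():
    x, y = target_domain.tensors
    acc = (model(x).argmax(dim=1) == y).float().mean().item()
    print(f"accuracy on unseen domain: {acc:.3f}")
```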
Source: Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning
Libraries
Use these libraries to find Domain Generalization models and implementations.
Latest papers
DGMamba: Domain Generalization via Generalized State Space Model
SPR strives to encourage the model to concentrate more on objects rather than context; it consists of two designs: Prior-Free Scanning (PFS) and Domain Context Interchange (DCI).
FAIRM: Learning invariant representations for algorithmic fairness and domain generalization with minimax optimality
Machine learning methods often assume that the test data have the same distribution as the training data.
Language Guided Domain Generalized Medical Image Segmentation
Incorporating text features alongside visual features is a potential solution to enhance the model's understanding of the data, as it goes beyond pixel-level information to provide valuable context.
Prompt Learning via Meta-Regularization
Recently, prompt learning approaches have been explored to efficiently and effectively adapt the vision-language models to a variety of downstream tasks.
Generative Medical Segmentation
Concretely, GMS employs a robust pre-trained Variational Autoencoder (VAE) to derive latent representations of both images and masks, followed by a mapping model that learns the transition from image to mask in the latent space.
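A rough sketch of the pipeline described above, assuming a generic frozen VAE that exposes an encode() method (the interfaces, names, and MSE objective here are illustrative assumptions, not the authors' code):

```python
# Illustrative sketch of a latent-space image-to-mask mapping (assumed interfaces,
# not the Generative Medical Segmentation implementation).
import torch
from torch import nn

class LatentMapper(nn.Module):
    # Learns to map an image's latent code to the corresponding mask's latent code.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim)
        )

    def forward(self, z_image):
        return self.net(z_image)

def training_step(vae, mapper, image, mask, loss_fn=nn.MSELoss()):
    # vae is assumed to expose encode(); it stays frozen, only the mapper trains.
    with torch.no_grad():
        z_img = vae.encode(image)    # latent representation of the image
        z_mask = vae.encode(mask)    # latent representation of the ground-truth mask
    z_pred = mapper(z_img)           # predict the mask latent from the image latent
    return loss_fn(z_pred, z_mask)   # at inference, decode z_pred back to a mask
```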
MatchSeg: Towards Better Segmentation via Reference Image Matching
Few-shot learning aims to overcome the need for large annotated datasets by using a small labeled set, known as the support set, to guide the prediction of labels for new, unlabeled images, known as the query set.
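For intuition, here is a generic prototype-style illustration of how a small support set can guide predictions on a query set (an assumption for illustration only; MatchSeg's reference-image matching for segmentation differs in detail):

```python
# Generic support/query few-shot prediction via nearest class prototypes
# (illustrative; not the MatchSeg method itself).
import torch

def predict_queries(support_feats, support_labels, query_feats):
    # support_feats: [n_support, d] embeddings of the small labeled support set
    # support_labels: [n_support] integer class labels
    # query_feats: [n_query, d] embeddings of unlabeled query images
    classes = support_labels.unique()
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in classes]
    )  # one mean embedding per class
    dists = torch.cdist(query_feats, prototypes)  # distance of each query to each prototype
    return classes[dists.argmin(dim=1)]           # label of the nearest prototype
```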
DomainLab: A modular Python package for domain generalization in deep learning
DomainLab is a modular Python package for training user-specified neural networks with composable regularization loss terms.
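As a minimal sketch of the idea of composable regularization loss terms (the class and names below are illustrative assumptions, not DomainLab's actual API; see its documentation for that):

```python
# Minimal sketch of composing a task loss with pluggable regularization terms
# (illustrative only; not the DomainLab API).
import torch
from torch import nn

class ComposedLoss(nn.Module):
    def __init__(self, task_loss, reg_terms):
        super().__init__()
        self.task_loss = task_loss
        # each regularizer is a (weight, callable(model, x) -> scalar) pair
        self.reg_terms = reg_terms

    def forward(self, model, x, y):
        loss = self.task_loss(model(x), y)
        for weight, reg in self.reg_terms:
            loss = loss + weight * reg(model, x)
        return loss

# Example: cross-entropy plus an L2 weight penalty as one composable regularizer.
l2_reg = lambda model, x: sum(p.pow(2).sum() for p in model.parameters())
criterion = ComposedLoss(nn.CrossEntropyLoss(), [(1e-4, l2_reg)])
```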
M-HOF-Opt: Multi-Objective Hierarchical Output Feedback Optimization via Multiplier Induced Loss Landscape Scheduling
We address the online combinatorial choice of weight multipliers for multi-objective optimization of many loss terms parameterized by neural networks. We do so via a probabilistic graphical model (PGM) of the joint model-parameter and multiplier evolution process, with a hypervolume-based likelihood that promotes multi-objective descent.
Negative Yields Positive: Unified Dual-Path Adapter for Vision-Language Models
Recently, large-scale pre-trained Vision-Language Models (VLMs) have demonstrated great potential in learning open-world visual representations, and exhibit remarkable performance across a wide range of downstream tasks through efficient fine-tuning.
Towards Generalizing to Unseen Domains with Few Labels
Existing domain generalization (DG) methods, which are unable to exploit unlabeled data, perform poorly compared to semi-supervised learning (SSL) methods under the semi-supervised domain generalization (SSDG) setting.