Domain Generalization
639 papers with code • 18 benchmarks • 24 datasets
The idea of Domain Generalization is to learn, from one or multiple training domains, a domain-agnostic model that can be applied to an unseen domain.
Source: Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning
Most implemented papers
ResNet strikes back: An improved training procedure in timm
We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work.
DINOv2: Learning Robust Visual Features without Supervision
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.
Deep CORAL: Correlation Alignment for Deep Domain Adaptation
CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation.
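The CORAL objective above can be sketched in a few lines: compute the covariance of source and target feature batches and penalize their squared Frobenius distance, scaled by 1/(4d²) as in the paper. This is a minimal NumPy illustration, not the paper's full training setup (which adds this loss to a classification loss on deep features).

```python
import numpy as np

def coral_loss(source, target):
    """Squared Frobenius distance between the feature covariances of a
    source batch and a target batch, scaled by 1/(4 d^2)."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)  # (d, d) source covariance
    ct = np.cov(target, rowvar=False)  # (d, d) target covariance
    return np.sum((cs - ct) ** 2) / (4 * d ** 2)

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 8))
tgt = rng.normal(scale=2.0, size=(64, 8))  # shifted second-order statistics
print(coral_loss(src, src))  # 0.0: identical batches are already aligned
print(coral_loss(src, tgt))  # positive: covariances differ
```

In Deep CORAL this term is differentiable, so minimizing it during training pulls the deep-feature statistics of the two domains together.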
On the limits of cross-domain generalization in automated X-ray prediction
This large-scale study quantifies which X-ray diagnostic prediction tasks generalize well across multiple different datasets.

In Search of Lost Domain Generalization
As a first step, we realize that model selection is non-trivial for domain generalization tasks.
Conditional Prompt Learning for Vision-Language Models
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets.
Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups.
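The worst-case objective can be illustrated directly: instead of averaging the loss over all examples, take the maximum of the per-group mean losses. A minimal NumPy sketch (the actual Group DRO algorithm optimizes this online with exponentiated-gradient group weights plus strong regularization):

```python
import numpy as np

def worst_group_loss(losses, groups):
    """Group DRO objective: the maximum of the per-group mean losses.
    losses: per-example loss values; groups: integer group id per example."""
    group_means = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(group_means)

losses = np.array([0.1, 0.2, 0.9, 1.1])
groups = np.array([0, 0, 1, 1])
# Average loss is 0.575, but the worst group (group 1) averages 1.0,
# and that is what DRO minimizes.
print(worst_group_loss(losses, groups))
```

Minimizing this quantity keeps any single pre-defined group from being sacrificed for better average performance.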
Self-Challenging Improves Cross-Domain Generalization
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
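The core of the heuristic is to mute the feature dimensions the network currently relies on most, identified by gradient magnitude, and train on what remains. The sketch below shows only that masking step on a flat feature vector; the actual RSC method applies this to spatial/channel feature maps inside the training loop and re-runs the forward pass on the masked features.

```python
import numpy as np

def self_challenge_mask(features, grads, drop_frac=0.33):
    """Zero out the feature dimensions with the largest gradient
    magnitudes, forcing the model to rely on the remaining ones."""
    k = int(drop_frac * features.shape[1])  # how many features to drop
    # indices of the k largest-|gradient| features, per example
    top = np.argsort(-np.abs(grads), axis=1)[:, :k]
    mask = np.ones_like(features)
    np.put_along_axis(mask, top, 0.0, axis=1)
    return features * mask

feats = np.array([[1.0, 2.0, 3.0]])
grads = np.array([[0.1, 0.9, 0.5]])
# Dropping the top-1 gradient feature zeroes the middle entry.
print(self_challenge_mask(feats, grads, drop_frac=0.34))  # [[1. 0. 3.]]
```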
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes.
Making Convolutional Networks Shift-Invariant Again
The well-known signal processing fix is anti-aliasing by low-pass filtering before downsampling.
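That fix is easy to see in one dimension: convolve with a small binomial low-pass filter before subsampling. A minimal NumPy sketch (the paper's BlurPool applies the 2-D analogue of this filter inside pooling and strided-convolution layers):

```python
import numpy as np

def blur_downsample(x):
    """Anti-aliased 1-D downsampling: low-pass with a [1, 2, 1]/4
    binomial filter, then keep every other sample."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    blurred = np.convolve(x, kernel, mode="same")
    return blurred[::2]

# Naive stride-2 subsampling of this high-frequency signal returns all
# zeros, and all ones if the input shifts by one sample; blurring first
# yields a near-constant 0.5 either way.
x = np.array([0.0, 1.0] * 8)
print(x[::2])              # aliased: all zeros
print(blur_downsample(x))  # interior samples are 0.5
```

The same idea makes the downsampled representation (approximately) stable under small input shifts, which is what "shift-invariant" refers to in the title.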