Domain Generalization

639 papers with code • 18 benchmarks • 24 datasets

The idea of Domain Generalization is to learn from one or multiple training domains and extract a domain-agnostic model that can be applied to an unseen domain.

Source: Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning

Most implemented papers

ResNet strikes back: An improved training procedure in timm

rwightman/pytorch-image-models NeurIPS Workshop ImageNet_PPF 2021

We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work.
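
As a quick illustration, the pre-trained baselines can be pulled straight from timm. A minimal sketch; the tag "resnet50" is an example, and timm's model registry lists the exact weight variants trained with the improved recipe:

```python
import timm
import torch

# Load a pre-trained ResNet from the timm registry.
model = timm.create_model("resnet50", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)   # dummy 224x224 RGB image
with torch.no_grad():
    logits = model(x)             # (1, 1000) ImageNet-1k logits
```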

DINOv2: Learning Robust Visual Features without Supervision

facebookresearch/dinov2 14 Apr 2023

The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.
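
A minimal sketch of extracting DINOv2 features through torch.hub, following the repository's README; the entry-point name and output width below correspond to the ViT-S/14 variant:

```python
import torch

# Load the ViT-S/14 backbone; larger variants (dinov2_vitb14,
# dinov2_vitl14, dinov2_vitg14) are exposed the same way.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Input sides must be multiples of the 14-pixel patch size.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model(x)              # global image embedding, (1, 384)
```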

Deep CORAL: Correlation Alignment for Deep Domain Adaptation

thuml/Transfer-Learning-Library 6 Jul 2016

CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation.
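
Because the alignment only involves feature covariances, the differentiable loss fits in a few lines. A minimal sketch of the Deep CORAL penalty, using the paper's 1/(4d²) scaling:

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """source, target: (batch, d) features from a shared encoder."""
    d = source.size(1)

    def cov(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean(dim=0, keepdim=True)   # center the features
        return x.t() @ x / (x.size(0) - 1)    # (d, d) sample covariance

    # Squared Frobenius distance between second-order statistics.
    return (cov(source) - cov(target)).pow(2).sum() / (4 * d * d)
```

In training, this term is added to the source-domain classification loss with a trade-off weight.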

On the limits of cross-domain generalization in automated X-ray prediction

mlmed/torchxrayvision MIDL 2020

This large-scale study focuses on quantifying which X-ray diagnostic prediction tasks generalize well across multiple different datasets.
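
The accompanying library ships the cross-dataset models. A minimal usage sketch; the weights tag follows the repository's README and may change between releases:

```python
import torch
import torchxrayvision as xrv

# DenseNet trained jointly across multiple chest X-ray datasets.
model = xrv.models.DenseNet(weights="densenet121-res224-all")
model.eval()

# The library expects single-channel 224x224 inputs scaled to [-1024, 1024].
x = torch.zeros(1, 1, 224, 224)
with torch.no_grad():
    preds = model(x)              # one score per pathology label
print(dict(zip(model.pathologies, preds[0].tolist())))
```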

In Search of Lost Domain Generalization

facebookresearch/DomainBed ICLR 2021

As a first step, we realize that model selection is non-trivial for domain generalization tasks.
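
One of the selection criteria DomainBed studies is training-domain validation: hold out part of each training domain and pick the checkpoint that does best on those held-out splits. A hedged sketch; all names here are illustrative, not DomainBed's actual API:

```python
def select_checkpoint(checkpoints, val_loaders, evaluate):
    """Training-domain validation: `val_loaders` holds one held-out split
    per training domain; `evaluate(model, loader)` returns accuracy."""
    def mean_val_acc(model):
        accs = [evaluate(model, loader) for loader in val_loaders]
        return sum(accs) / len(accs)
    # Pick the checkpoint with the best average held-out accuracy;
    # no target-domain data is used for selection.
    return max(checkpoints, key=mean_val_acc)
```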

Conditional Prompt Learning for Vision-Language Models

kaiyangzhou/coop CVPR 2022

With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets.
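
A minimal sketch of the underlying prompt-learning idea: a handful of context vectors are learned and prepended to frozen class-name embeddings while CLIP's encoders stay fixed. This is illustrative, not the kaiyangzhou/coop implementation; the conditional variant additionally conditions the context on image features:

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    def __init__(self, n_ctx: int, ctx_dim: int, class_embeds: torch.Tensor):
        super().__init__()
        # Learnable context tokens shared across all classes.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, ctx_dim))
        # Frozen class-name token embeddings: (n_classes, n_tokens, ctx_dim).
        self.register_buffer("class_embeds", class_embeds)

    def forward(self) -> torch.Tensor:
        n_cls = self.class_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        # Per-class prompt: [learned context tokens][class-name tokens],
        # to be fed through the frozen text encoder.
        return torch.cat([ctx, self.class_embeds], dim=1)
```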

Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization

kohpangwei/group_DRO 20 Nov 2019

Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups.
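
A minimal sketch of that objective: compute per-group average losses in each batch and penalize the worst one. The paper's full algorithm maintains exponential weights over groups online; this is the plain worst-group version:

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, group_ids, n_groups: int):
    """group_ids: (batch,) integer group label per example."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():                      # skip groups absent from the batch
            group_losses.append(per_sample[mask].mean())
    # Minimizing the max group loss targets worst-case generalization.
    return torch.stack(group_losses).max()
```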

Self-Challenging Improves Cross-Domain Generalization

facebookresearch/DomainBed ECCV 2020

We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
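
A hedged sketch of the heuristic on flattened features: mute the feature dimensions that contribute most to the correct-class score, so the classifier must be re-run on the remaining ones. The paper also describes spatial- and channel-wise variants:

```python
import torch

def self_challenge(features, logits, labels, drop_ratio: float = 0.33):
    """features: (batch, d) penultimate activations in the graph of logits."""
    correct_score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(correct_score, features, retain_graph=True)[0]
    saliency = grads * features                 # per-dimension contribution
    k = max(1, int(drop_ratio * features.size(1)))
    cutoff = saliency.topk(k, dim=1).values[:, -1:]
    mask = (saliency < cutoff).float()          # zero out the top-k dims
    return features * mask                      # feed back through the head
```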

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

rgeirhos/Stylized-ImageNet ICLR 2019

Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes.

Making Convolutional Networks Shift-Invariant Again

adobe/antialiased-cnns 25 Apr 2019

The well-known signal processing fix is anti-aliasing by low-pass filtering before downsampling.
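
A minimal sketch of that fix as a depthwise blur-then-subsample step; the adobe/antialiased-cnns library provides a drop-in BlurPool module, and the 3x3 binomial kernel below is one of the filter sizes discussed in the paper:

```python
import torch
import torch.nn.functional as F

def blur_pool(x: torch.Tensor, stride: int = 2) -> torch.Tensor:
    """x: (batch, channels, h, w). Low-pass filter, then subsample."""
    k1d = torch.tensor([1.0, 2.0, 1.0], device=x.device, dtype=x.dtype)
    k2d = torch.outer(k1d, k1d)
    k2d = k2d / k2d.sum()                       # normalized binomial kernel
    c = x.size(1)
    weight = k2d.view(1, 1, 3, 3).expand(c, 1, 3, 3)
    # Depthwise convolution: blur each channel, then stride to downsample.
    return F.conv2d(x, weight, stride=stride, padding=1, groups=c)
```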