Domain Generalization

633 papers with code • 19 benchmarks • 25 datasets

The idea of Domain Generalization is to learn, from one or multiple training domains, a domain-agnostic model that can be applied to an unseen domain.
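The setup above is usually evaluated leave-one-domain-out: train on the pooled source domains, test on a domain never seen in training. A minimal sketch of that protocol, where the synthetic "photo"/"sketch"/"art" domains and the nearest-centroid model are purely illustrative assumptions:

```python
import random

# Illustrative leave-one-domain-out sketch. The domain names, the 1-D
# features, and the nearest-centroid "model" are toy assumptions, not
# taken from any particular paper.

def make_domain(shift, n=50, seed=0):
    """Synthetic binary-classification domain: class c is centred at c + shift."""
    rng = random.Random(seed)
    return [([c + shift + rng.gauss(0, 0.3)], c) for c in (0, 1) for _ in range(n)]

def train_centroids(data):
    """Fit one centroid per class on the pooled training data."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for (x,), y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(centroids, data):
    correct = sum(
        min(centroids, key=lambda y: abs(centroids[y] - x)) == y
        for (x,), y in data
    )
    return correct / len(data)

domains = {name: make_domain(shift, seed=i)
           for i, (name, shift) in enumerate([("photo", 0.0), ("sketch", 0.1), ("art", 0.2)])}

# Hold out one domain entirely; train on the remaining domains pooled together.
held_out = "art"
train_data = [ex for name, d in domains.items() if name != held_out for ex in d]
model = train_centroids(train_data)
print(f"accuracy on unseen domain '{held_out}': {accuracy(model, domains[held_out]):.2f}")
```

The point of the protocol is that the held-out domain contributes nothing to training; a domain-agnostic model should still transfer to it.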

Source: Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning

Most implemented papers

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

clovaai/CutMix-PyTorch ICCV 2019

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers.
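CutMix goes beyond regional dropout by pasting a patch from another training image rather than zeroing pixels, and mixing the labels in proportion to the patch area. A minimal sketch, assuming images as 2D lists and scalar labels (the paper uses tensors and one-hot labels):

```python
import random

def cutmix(img_a, img_b, label_a, label_b, rng=random.Random(0)):
    """Paste a random rectangle of img_b into img_a; mix labels by area.

    Images are 2D lists of equal shape (a simplified stand-in for tensors).
    """
    h, w = len(img_a), len(img_a[0])
    lam = rng.random()                     # target fraction of img_a to keep
    cut_h = int(h * (1 - lam) ** 0.5)      # box sized so area ~= (1 - lam) * h * w
    cut_w = int(w * (1 - lam) ** 0.5)
    y0 = rng.randrange(h - cut_h + 1)
    x0 = rng.randrange(w - cut_w + 1)

    mixed = [row[:] for row in img_a]
    for y in range(y0, y0 + cut_h):
        for x in range(x0, x0 + cut_w):
            mixed[y][x] = img_b[y][x]

    # Re-derive lambda from the actual box so the label matches the pixels.
    lam = 1 - (cut_h * cut_w) / (h * w)
    mixed_label = lam * label_a + (1 - lam) * label_b
    return mixed, mixed_label

a = [[0] * 8 for _ in range(8)]
b = [[1] * 8 for _ in range(8)]
img, lab = cutmix(a, b, label_a=0.0, label_b=1.0)
```

Because every pixel stays informative (unlike plain dropout of a region), the network sees no uninformative black holes during training.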

Improved Regularization of Convolutional Neural Networks with Cutout

uoguelph-mlrg/Cutout 15 Aug 2017

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks.
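Cutout regularizes these representations by masking out a random square region of each training image. A minimal sketch, assuming a 2D-list image and a fixed patch size (the paper applies this to image tensors during training):

```python
import random

def cutout(img, size, rng=random.Random(0)):
    """Zero out a square patch of side `size`, centred at a random pixel.

    `img` is a 2D list; the patch may be clipped at the image border,
    as in the original method.
    """
    h, w = len(img), len(img[0])
    cy, cx = rng.randrange(h), rng.randrange(w)
    out = [row[:] for row in img]
    for y in range(max(0, cy - size // 2), min(h, cy + size // 2)):
        for x in range(max(0, cx - size // 2), min(w, cx + size // 2)):
            out[y][x] = 0
    return out

img = [[1] * 10 for _ in range(10)]
masked = cutout(img, size=4)
```

Allowing the patch to be clipped at the border means the expected occlusion is smaller near edges, which the authors found works well in practice.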

Bag of Tricks for Image Classification with Convolutional Neural Networks

dmlc/gluon-cv CVPR 2019

Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods.
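One of the training refinements the paper examines is label smoothing. A minimal sketch; the smoothing factor of 0.1 is the commonly used value, shown here for illustration:

```python
def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: move eps of the probability mass from the one-hot
    target to a uniform distribution over all classes."""
    k = len(one_hot)
    return [(1 - eps) * p + eps / k for p in one_hot]

print(smooth_labels([0.0, 1.0, 0.0, 0.0]))  # true class keeps 0.925 of the mass
```

The softened target discourages the network from producing extremely confident logits, which tends to improve calibration and generalization.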

Two at Once: Enhancing Learning and Generalization Capacities via IBN-Net

XingangPan/IBN-Net ECCV 2018

IBN-Net carefully integrates Instance Normalization (IN) and Batch Normalization (BN) as building blocks, and can be wrapped into many advanced deep networks to improve their performances.
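The core IBN-a building block applies Instance Normalization to one half of the channels and Batch Normalization to the other half. A minimal pure-Python sketch of that split, with spatial dimensions flattened and no affine parameters or running statistics (the real IBN-Net is a PyTorch module):

```python
def _norm(values, eps=1e-5):
    """Normalize a flat list of values to zero mean, unit variance."""
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return [(x - m) / (v + eps) ** 0.5 for x in values]

def ibn_a(batch):
    """batch[sample][channel] is a flat list of features per channel.

    First half of channels: Instance Norm (per sample, per channel).
    Second half: Batch Norm (per channel, statistics pooled over the batch).
    """
    n_ch = len(batch[0])
    half = n_ch // 2
    out = [[None] * n_ch for _ in batch]
    # IN on the first half of channels: statistics per individual sample.
    for i, sample in enumerate(batch):
        for c in range(half):
            out[i][c] = _norm(sample[c])
    # BN on the second half: pool each channel's values across the whole batch.
    for c in range(half, n_ch):
        pooled = [x for sample in batch for x in sample[c]]
        normed = _norm(pooled)
        size = len(batch[0][c])
        for i in range(len(batch)):
            out[i][c] = normed[i * size:(i + 1) * size]
    return out
```

The intuition from the paper is that IN removes appearance (style) variation while BN preserves content-discriminative statistics, so combining them improves generalization across domains.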

RandAugment: Practical automated data augmentation with a reduced search space

rwightman/pytorch-image-models NeurIPS 2020

Due to their separate search phase, prior automated augmentation approaches are unable to adjust the regularization strength based on model or dataset size.
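RandAugment removes the search phase entirely: its policy is just two hyperparameters, the number of ops N and a global magnitude M. A minimal sketch of that sampling policy, with toy arithmetic ops standing in for the paper's image transforms (rotate, shear, and so on):

```python
import random

# Toy op pool: each op transforms a number, scaled by magnitude m in [0, 1].
# These arithmetic stand-ins only illustrate the N/M sampling policy; the
# actual method uses image transforms.
OPS = [
    lambda x, m: x + 10 * m,             # stand-in for "brightness"
    lambda x, m: x * (1 + m),            # stand-in for "contrast"
    lambda x, m: -x if m > 0.5 else x,   # stand-in for "invert"
]

def rand_augment(x, n=2, magnitude=0.5, rng=random.Random(0)):
    """Apply n ops chosen uniformly at random, all at the same magnitude."""
    for op in (rng.choice(OPS) for _ in range(n)):
        x = op(x, magnitude)
    return x
```

Because N and M are plain hyperparameters, they can be tuned per model and dataset size directly, which is exactly the flexibility the separate-search approaches lack.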

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty

google-research/augmix ICLR 2020

We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.
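AugMix mixes several short augmentation chains with Dirichlet weights, then blends the result with the original image via a Beta-distributed skip connection. A minimal sketch, with flat float lists as "images" and toy ops standing in for the paper's augmentation operations:

```python
import random

def augmix(img, ops, width=3, depth=2, alpha=1.0, rng=random.Random(0)):
    """AugMix sketch: mix `width` random augmentation chains with Dirichlet
    weights, then interpolate with the original image via a Beta weight.

    `img` is a flat list of floats; `ops` are functions img -> img
    (toy stand-ins for the image operations used in the paper).
    """
    # Dirichlet(alpha, ..., alpha) sample via normalized Gamma draws.
    g = [rng.gammavariate(alpha, 1.0) for _ in range(width)]
    ws = [x / sum(g) for x in g]

    mixed = [0.0] * len(img)
    for w in ws:
        chain = img
        for _ in range(depth):
            chain = rng.choice(ops)(chain)
        mixed = [m + w * c for m, c in zip(mixed, chain)]

    t = rng.betavariate(alpha, alpha)   # skip-connection weight
    return [t * o + (1 - t) * m for o, m in zip(img, mixed)]

ops = [lambda im: [x + 1 for x in im], lambda im: [x * 0.9 for x in im]]
out = augmix([0.0, 0.5, 1.0], ops)
```

Mixing convex combinations of augmented images, rather than applying one long chain, keeps the result close to the data manifold while still diversifying it; in training this is paired with a Jensen-Shannon consistency loss, omitted here.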

Invariant Risk Minimization

facebookresearch/InvariantRiskMinimization 5 Jul 2019

We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.
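The practical IRMv1 objective adds, for each training environment, a penalty measuring how far a fixed "dummy" classifier w = 1 is from being optimal on top of the shared features. A minimal sketch for squared loss, with the gradient computed analytically rather than by autograd (a simplification of the paper's formulation):

```python
def irm_penalty(preds, targets):
    """IRMv1-style penalty for one environment with squared loss and a
    scalar dummy classifier w: || d/dw mean((w * pred - y)^2) at w=1 ||^2.

    The derivative is computed in closed form here (a simplification;
    the paper uses autograd on arbitrary losses).
    """
    n = len(preds)
    grad = sum(2 * (p - y) * p for p, y in zip(preds, targets)) / n
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Sum of per-environment risks plus lam times the invariance penalties."""
    risk = sum(sum((p - y) ** 2 for p, y in zip(ps, ys)) / len(ps)
               for ps, ys in envs)
    penalty = sum(irm_penalty(ps, ys) for ps, ys in envs)
    return risk + lam * penalty

# Two environments: predictions that match targets incur zero risk and penalty.
envs = [([1.0, 0.0], [1.0, 0.0]), ([0.5, 0.5], [0.5, 0.5])]
print(irm_objective(envs))  # 0.0 when predictions equal targets in every environment
```

The penalty is zero only when the same predictor is simultaneously optimal in every environment, which is what forces the learned correlations to be invariant across training distributions.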

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

hendrycks/robustness ICLR 2019

The paper also proposes a new dataset called ImageNet-P, which enables researchers to benchmark a classifier's robustness to common perturbations.

A Closer Look at Few-shot Classification

wyharveychen/CloserLookFewShot ICLR 2019

Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples.

Learning to Prompt for Vision-Language Models

kaiyangzhou/coop 2 Sep 2021

Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks.