Instance Paradigm Contrastive Learning for Domain Generalization

Domain Generalization (DG) aims to develop models that learn from data in source domains and generalize to unseen target domains. Several domain generalization algorithms have recently emerged, but most rely on complex modules. Among prior methods under DG settings, contrastive learning has become a promising solution for its simplicity and efficiency. However, existing contrastive learning neglects distribution shifts, which causes severe domain confusion. In this paper, we propose an instance paradigm contrastive learning framework that introduces contrast between original features and novel paradigms to alleviate domain-specific distractions. We then exploit hard-pair information, an essential factor in contrastive learning, based on domain labels and feature similarity. Moreover, to produce domain-invariant instance paradigms, we generate multiple views of the original images and design a novel channel-wise attention mechanism that dynamically combines features from all the views. Furthermore, a test-time feature integration module is designed to mimic the paradigms from the training process and improve generalization ability. Extensive experiments show that our method achieves state-of-the-art performance. The proposed algorithm can also serve as a plug-and-play module that improves the performance of existing methods by a relatively large margin.
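The abstract does not give the exact formulation, but the two core ideas — combining multi-view features with channel-wise attention into a "paradigm", and contrasting an instance's feature against that paradigm — can be sketched as follows. All function names, the softmax-based attention weighting, and the InfoNCE-style loss are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def channel_attention_combine(views):
    """Combine V augmented-view features (V, C) into one paradigm (C,).

    Hypothetical channel-wise attention: per channel, softmax over the
    views' own activations decides how much each view contributes.
    """
    weights = np.exp(views) / np.exp(views).sum(axis=0, keepdims=True)
    return (weights * views).sum(axis=0)

def paradigm_contrastive_loss(feat, paradigm, negatives, tau=0.1):
    """InfoNCE-style loss: pull the instance feature toward its own
    paradigm and push it away from negative paradigms/features."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    pos = np.exp(cos(feat, paradigm) / tau)
    neg = sum(np.exp(cos(feat, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

In a full training loop, the negatives would presumably be chosen using the hard-pair mining the abstract mentions (based on domain labels and feature similarity) rather than sampled at random, and the test-time module would reconstruct paradigm-like features for unseen-domain inputs.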
