Efficient Domain Generalization via Common-Specific Low-Rank Decomposition

Domain generalization refers to the task of training a model that generalizes to new domains unseen during training. For this setting we present CSD (Common-Specific Decomposition), which jointly learns a common component (which generalizes to new domains) and a domain-specific component (which overfits on training domains). The domain-specific components are discarded after training and only the common component is retained. The algorithm is extremely simple and involves only modifying the final linear classification layer of any given neural network architecture. We present a principled analysis to understand existing approaches, provide identifiability results for CSD, and study the effect of low rank on domain generalization. We show that CSD either matches or beats state-of-the-art approaches for domain generalization based on domain erasure, domain-perturbed data augmentation, and meta-learning. Further diagnostics on rotated MNIST, where domains are interpretable, confirm the hypothesis that CSD successfully disentangles common and domain-specific components and hence leads to better domain generalization.
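Since the method amounts to a drop-in replacement for the final linear layer, the following is a minimal PyTorch sketch of that idea. Names and hyperparameters (`CSDLayer`, `rank`, `alpha`) are illustrative assumptions, not the authors' code, and the orthogonality regularization between common and specific weights used in the paper is omitted here for brevity.

```python
# Minimal sketch of a CSD-style final classification layer (assumed names).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSDLayer(nn.Module):
    """Replaces the final linear classifier of any backbone.

    Logits are computed twice: once from the shared ("common") weights
    alone, and once from the common weights plus a low-rank,
    domain-specific correction. Only the common weights are kept at
    test time.
    """
    def __init__(self, feat_dim, num_classes, num_domains, rank=1, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        # Common classifier, retained after training.
        self.w_common = nn.Parameter(0.01 * torch.randn(feat_dim, num_classes))
        self.b_common = nn.Parameter(torch.zeros(num_classes))
        # Low-rank basis of domain-specific classifiers, discarded after training.
        self.w_specific = nn.Parameter(0.01 * torch.randn(rank, feat_dim, num_classes))
        self.b_specific = nn.Parameter(torch.zeros(rank, num_classes))
        # Per-domain mixing coefficients over the rank-k specific basis.
        self.gamma = nn.Parameter(0.01 * torch.randn(num_domains, rank))

    def forward(self, feats, domain_ids, labels):
        # Common logits: what generalizes to unseen domains.
        logits_c = feats @ self.w_common + self.b_common
        # Domain-specific correction: gamma[d] mixes the low-rank basis.
        g = self.gamma[domain_ids]                              # (B, rank)
        w_s = torch.einsum('br,rfc->bfc', g, self.w_specific)   # (B, F, C)
        b_s = g @ self.b_specific                               # (B, C)
        logits_s = logits_c + torch.einsum('bf,bfc->bc', feats, w_s) + b_s
        # Train both heads jointly; alpha trades off the two losses.
        return ((1 - self.alpha) * F.cross_entropy(logits_c, labels)
                + self.alpha * F.cross_entropy(logits_s, labels))

    def predict(self, feats):
        # Inference uses only the common component.
        return feats @ self.w_common + self.b_common
```

At test time only `predict` (the common classifier) is used, so the domain-specific parameters and mixing coefficients can simply be dropped, matching the "discarded after training" step described above.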

ICML 2020

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Domain Generalization | LipitK | CSD (Ours) | Accuracy | 87.3 | #1 |
| Domain Generalization | PACS | CSD (ResNet-18) | Average Accuracy | 80.69 | #84 |
| Domain Generalization | Rotated Fashion-MNIST | CSD | Accuracy | 78.9 | #2 |
