BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning

CVPR 2022 · Zhi Hou, Baosheng Yu, Dacheng Tao

Despite the success of deep neural networks, there are still many challenges in deep representation learning due to data scarcity issues such as data imbalance, unseen distribution, and domain shift. To address these issues, a variety of methods have been devised to explore sample relationships in a vanilla way (i.e., from the perspective of either the input or the loss function), but they fail to exploit the internal structure of deep neural networks for learning with sample relationships. Motivated by this, we propose to equip deep neural networks themselves with the ability to learn sample relationships from each mini-batch. Specifically, we introduce a batch transformer module, BatchFormer, which is applied to the batch dimension of each mini-batch to implicitly explore sample relationships during training. By doing this, the proposed method enables collaboration among different samples; e.g., head-class samples can also contribute to the learning of tail classes in long-tailed recognition. Furthermore, to mitigate the gap between training and testing, we share the classifier between the streams with and without BatchFormer during training, so that BatchFormer can be removed during testing. We perform extensive experiments on over ten datasets, and the proposed method achieves significant improvements on different data scarcity applications without any bells and whistles, including long-tailed recognition, compositional zero-shot learning, domain generalization, and contrastive learning. Code will be made publicly available at https://github.com/zhihou7/BatchFormer.
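To make the abstract's idea concrete, below is a minimal PyTorch sketch of applying a transformer encoder along the batch dimension with a classifier shared between the plain and enhanced streams. The class name `BatchFormer`, the helper `add_batchformer_stream`, and all hyperparameters here are illustrative assumptions, not necessarily the repository's exact API.

```python
import torch
from torch import nn


class BatchFormer(nn.Module):
    """Transformer encoder layer applied along the batch dimension,
    so attention runs across samples in the mini-batch (a sketch)."""

    def __init__(self, dim: int = 512, nhead: int = 4, dim_feedforward: int = 1024):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=nhead, dim_feedforward=dim_feedforward)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C) per-sample features from the backbone. Adding a dummy
        # axis gives (B, 1, C); with the default (seq, batch, dim) layout of
        # TransformerEncoderLayer, the batch axis B plays the role of the
        # sequence axis, so samples attend to one another.
        return self.encoder(x.unsqueeze(1)).squeeze(1)


def add_batchformer_stream(x, y, batch_former, training=True):
    """Hypothetical training helper: concatenate plain and
    BatchFormer-enhanced features and duplicate the labels, so that a
    single shared classifier sees both streams. At test time the features
    pass through unchanged, so BatchFormer can simply be removed."""
    if not training:
        return x, y
    x = torch.cat([x, batch_former(x)], dim=0)
    y = torch.cat([y, y], dim=0)
    return x, y
```

Because the classifier weights are shared between the two streams, the representation learned with cross-sample attention stays compatible with the plain per-sample stream, which is what allows the module to be dropped at inference with no extra cost.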

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Long-tail Learning | CIFAR-100-LT (ρ=100) | PaCo + BatchFormer | Error Rate | 47.6 | #22 |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | Balanced + BatchFormer | Error Rate | 48.3 | #23 |
| Long-tail Learning | ImageNet-LT | BatchFormer (ResNet-50, RIDE) | Top-1 Accuracy | 55.7 | #31 |
| Long-tail Learning | ImageNet-LT | BatchFormer (ResNet-50, PaCo) | Top-1 Accuracy | 57.4 | #24 |
| Long-tail Learning | iNaturalist 2018 | BatchFormer (ResNet-50, RIDE) | Top-1 Accuracy | 74.1% | #17 |
| Domain Generalization | PACS | BatchFormer (ResNet-50, SWAD) | Average Accuracy | 88.6 | #19 |
