A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning

7 Oct 2023 · Zitai Wang, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao, Qingming Huang

Real-world datasets are typically imbalanced: a few classes have numerous samples, while many classes have only a few. As a result, a naïve ERM learning process is biased towards the majority classes, making it difficult to generalize to the minority classes. One simple but effective remedy is to modify the loss function to emphasize learning on the minority classes, e.g., by re-weighting the losses or adjusting the logits via class-dependent terms. However, existing generalization analyses of such losses are still coarse-grained and fragmented, and fail to explain some empirical results. To bridge this gap, we propose a novel technique named data-dependent contraction to capture how these modified losses handle different classes. On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment in a unified manner. Furthermore, a principled learning algorithm is developed based on the theoretical insights. Finally, empirical results on benchmark datasets not only validate the theory but also demonstrate the effectiveness of the proposed method.
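
To make the two loss-modification families concrete, here is a minimal PyTorch sketch of a cross-entropy loss that combines class-dependent re-weighting with additive logit adjustment. It illustrates the general recipe only, not the paper's VS + ADRW + TLA method; the function name and the hyperparameters `tau` (scale of the log-prior logit shift) and `gamma` (exponent of the inverse-frequency weights) are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def reweighted_logit_adjusted_ce(logits, targets, class_counts, tau=1.0, gamma=0.5):
    """Cross-entropy with class-dependent re-weighting and logit adjustment.

    Hypothetical illustration of the two loss-modification families described
    in the abstract; not the paper's exact VS + ADRW + TLA loss.

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) number of training samples per class
    tau:          scale of the additive log-prior adjustment (tau=0 disables it)
    gamma:        exponent of the inverse-frequency weights (gamma=0 recovers ERM)
    """
    priors = class_counts.float() / class_counts.sum()
    # Logit adjustment: shift each logit by tau * log(class prior), so rare
    # classes must win by a larger margin at training time.
    adjusted = logits + tau * torch.log(priors + 1e-12)
    # Re-weighting: up-weight samples from rare classes, w_y ∝ n_y^(-gamma).
    weights = class_counts.float().pow(-gamma)
    weights = weights / weights.mean()  # normalize to mean 1
    per_sample = F.cross_entropy(adjusted, targets, reduction="none")
    return (weights[targets] * per_sample).mean()

# Toy usage on a 10-class long-tailed count profile.
logits = torch.randn(4, 10)
targets = torch.tensor([0, 3, 9, 9])
counts = torch.tensor([5000, 2500, 1500, 900, 500, 300, 180, 110, 70, 50])
loss = reweighted_logit_adjusted_ce(logits, targets, counts)
```

Setting `tau=0` and `gamma=0` recovers plain ERM cross-entropy, so the two modifications can be ablated independently.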

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Long-tail Learning | CIFAR-100-LT (ρ=10) | VS + ADRW + TLA | Error Rate | 34.41 | #11 |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | VS + ADRW + TLA | Error Rate | 46.95 | #19 |
| Long-tail Learning | CIFAR-10-LT (ρ=10) | VS + ADRW + TLA | Error Rate | 8.18 | #6 |
| Long-tail Learning | CIFAR-10-LT (ρ=100) | VS + ADRW + TLA | Error Rate | 13.58 | #7 |
