Long-Tailed Classification with Gradual Balanced Loss and Adaptive Feature Generation

28 Feb 2022  ·  Zihan Zhang, Xiang Xiang ·

Real-world data distributions are essentially long-tailed, which poses a great challenge to deep models. In this work, we propose a new method, Gradual Balanced Loss and Adaptive Feature Generator (GLAG), to alleviate class imbalance. GLAG first learns a balanced and robust feature model with the Gradual Balanced Loss, then freezes the feature model and augments the under-represented tail classes at the feature level using knowledge transferred from the well-represented head classes. The generated samples are mixed with real training samples during training. The Gradual Balanced Loss is a general loss that can be combined with different decoupled training methods to improve their original performance. State-of-the-art results have been achieved on long-tailed datasets such as CIFAR-100-LT, ImageNet-LT, and iNaturalist, which demonstrates the effectiveness of GLAG for long-tailed visual recognition.
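The abstract does not give the exact formulation of the Gradual Balanced Loss, so the following is only a minimal sketch of what a "gradual" balanced loss could look like, assuming it anneals from plain cross-entropy toward a logit-adjusted (class-prior-balanced) loss over the course of training. The class name, schedule, and tau parameter are illustrative assumptions, not the paper's definitions.

import torch
import torch.nn.functional as F


class GradualBalancedLoss(torch.nn.Module):
    """Hypothetical sketch: linearly anneal from cross-entropy to a
    logit-adjusted loss as training progresses (not the paper's exact form)."""

    def __init__(self, class_counts, total_epochs, tau=1.0):
        super().__init__()
        prior = class_counts.float() / class_counts.sum()
        # log-prior adjustment, as used by logit-adjusted / balanced softmax losses
        self.register_buffer("log_prior", torch.log(prior))
        self.total_epochs = total_epochs
        self.tau = tau

    def forward(self, logits, targets, epoch):
        # alpha goes 0 -> 1: early epochs behave like plain cross-entropy,
        # later epochs apply the full class-balancing adjustment
        alpha = min(epoch / self.total_epochs, 1.0)
        adjusted = logits + alpha * self.tau * self.log_prior
        return F.cross_entropy(adjusted, targets)


# Usage: counts holds the per-class sample counts of the long-tailed training set
counts = torch.tensor([500, 100, 10])
criterion = GradualBalancedLoss(counts, total_epochs=200)
loss = criterion(torch.randn(8, 3), torch.randint(0, 3, (8,)), epoch=50)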
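Likewise, the feature-level augmentation step is only described at a high level. A common way such head-to-tail knowledge transfer is realized is by translating the intra-class variation of a head class onto a tail-class mean; the sketch below assumes that reading. The function name and the specific transfer rule are hypothetical.

import torch


def generate_tail_features(tail_feats, head_feats, num_new):
    """Hypothetical sketch: sample new tail-class features by reusing a
    head class's variation around its own mean."""
    tail_mean = tail_feats.mean(dim=0)
    head_mean = head_feats.mean(dim=0)
    # pick random head samples and take their offsets from the head mean
    idx = torch.randint(0, head_feats.size(0), (num_new,))
    variation = head_feats[idx] - head_mean
    # translate the head-class variation onto the tail-class mean
    return tail_mean + variation


# Mix generated features with real ones, as the abstract describes
real = torch.randn(20, 512)    # toy real tail-class features
head = torch.randn(500, 512)   # toy features of a well-represented head class
augmented = torch.cat([real, generate_tail_features(real, head, 80)], dim=0)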

Task                 Dataset                 Model   Metric       Value   Global Rank
Long-tail Learning   CIFAR-100-LT (ρ=10)     GLAG    Error Rate   35.5    #14
Long-tail Learning   CIFAR-100-LT (ρ=100)    GLAG    Error Rate   48.3    #23

Methods


No methods listed for this paper.