Correlated Input-Dependent Label Noise in Large-Scale Image Classification

Large-scale image classification datasets often contain noisy labels. We take a principled probabilistic approach to modelling input-dependent, also known as heteroscedastic, label noise in these datasets. We place a multivariate normally distributed latent variable on the final hidden layer of a neural network classifier. The covariance matrix of this latent variable models the aleatoric uncertainty due to label noise. We demonstrate that the learned covariance structure captures known sources of label noise between semantically similar and co-occurring classes. Compared to standard neural network training and other baselines, we show significantly improved accuracy on ImageNet ILSVRC 2012 (79.3%, +2.6%), ImageNet-21k (47.0%, +1.1%), and JFT (64.7%, +1.6%). We set a new state-of-the-art result on WebVision 1.0 with 76.6% top-1 accuracy. These datasets range from over 1M to over 300M training examples and from 1k to more than 21k classes. Our method is simple to use, and we provide an implementation that is a drop-in replacement for the final fully-connected layer in a deep classifier.
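As a rough sketch of how such a drop-in layer could look (an illustration under stated assumptions, not the authors' released implementation): the head below parameterizes a Gaussian over the logits with a low-rank-plus-diagonal covariance computed from the input features, then Monte Carlo averages a temperature-scaled softmax. The class and argument names (`HeteroscedasticHead`, `rank`, `num_samples`, `temperature`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroscedasticHead(nn.Module):
    """Sketch of a heteroscedastic replacement for a final FC layer.

    Puts a multivariate Normal over the logits with covariance
    Sigma(x) = V(x) V(x)^T + diag(d(x)^2) and MC-averages the softmax.
    Hypothetical names and defaults, not the authors' API.
    """

    def __init__(self, in_features, num_classes, rank=6,
                 num_samples=100, temperature=1.0):
        super().__init__()
        self.mean = nn.Linear(in_features, num_classes)              # mu(x)
        self.diag = nn.Linear(in_features, num_classes)              # d(x)
        self.low_rank = nn.Linear(in_features, num_classes * rank)   # V(x)
        self.rank = rank
        self.num_samples = num_samples
        self.temperature = temperature

    def forward(self, features):
        B = features.size(0)
        K, R, S = self.mean.out_features, self.rank, self.num_samples
        mu = self.mean(features)                                 # (B, K)
        d = F.softplus(self.diag(features))                      # (B, K)
        V = self.low_rank(features).view(B, K, R)                # (B, K, R)

        eps_r = torch.randn(S, B, R, 1, device=features.device)
        eps_k = torch.randn(S, B, K, device=features.device)
        # Samples u_s = mu + V eps_r + d * eps_k ~ N(mu, V V^T + diag(d^2))
        u = mu + (V @ eps_r).squeeze(-1) + d * eps_k             # (S, B, K)

        # Temperature-scaled softmax, averaged over the S MC samples
        probs = F.softmax(u / self.temperature, dim=-1).mean(0)  # (B, K)
        return torch.log(probs + 1e-12)  # log-probs, train with F.nll_loss
```

In this sketch the temperature is the main knob, tuned on held-out data; with zero covariance the layer reduces to an ordinary linear softmax classifier.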

CVPR 2021
Task                  Dataset         Model                                 Metric          Value  Global Rank
Image Classification  ImageNet        Heteroscedastic (InceptionResNet-v2)  Top-1 Accuracy  68.6%  #956
Image Classification  WebVision-1000  Heteroscedastic (InceptionResNet-v2)  Top-1 Accuracy  76.6%  #5
Image Classification  WebVision-1000  Heteroscedastic (InceptionResNet-v2)  Top-5 Accuracy  92.1%  #3
