Does label smoothing mitigate label noise?

Label smoothing is commonly used in training deep learning models, wherein one-hot training labels are mixed with uniform label vectors. Empirically, smoothing has been shown to improve both predictive performance and model calibration. In this paper, we study whether label smoothing is also effective as a means of coping with label noise. While label smoothing might appear to amplify this problem, since it is equivalent to injecting symmetric noise into the labels, we show how it relates to a general family of loss-correction techniques from the label noise literature. Building on this connection, we show that label smoothing is competitive with loss correction under label noise. Further, we show that when distilling models from noisy data, label smoothing of the teacher is beneficial; this is in contrast to recent findings for noise-free problems, and sheds further light on the settings where label smoothing helps.
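
To make the smoothing operation in the abstract concrete, below is a minimal sketch of a smoothed cross-entropy loss: the one-hot target e_y is mixed with the uniform vector as (1 - α) e_y + α/K over K classes, which coincides with the expected one-hot label under symmetric noise that flips the label with probability α(K - 1)/K. The PyTorch setup and names (`smoothed_cross_entropy`, `smoothing`) are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F


def smoothed_cross_entropy(logits: torch.Tensor,
                           targets: torch.Tensor,
                           smoothing: float = 0.1) -> torch.Tensor:
    """Cross-entropy against the smoothed target (1 - a) * e_y + a / K."""
    log_probs = F.log_softmax(logits, dim=-1)
    # Negative log-likelihood of the (possibly noisy) observed labels.
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Cross-entropy against the uniform distribution over the K classes.
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - smoothing) * nll + smoothing * uniform).mean()


# Example: batch of 8 examples, K = 10 classes, possibly noisy labels.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = smoothed_cross_entropy(logits, targets, smoothing=0.2)
```

Setting `smoothing = 0` recovers the standard cross-entropy, so the smoothing coefficient directly interpolates between the one-hot and uniform targets.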

ICML 2020
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Learning with noisy labels | CIFAR-100N | Positive-LS | Accuracy (mean) | 55.84 | #20 |
| Learning with noisy labels | CIFAR-10N-Aggregate | Positive-LS | Accuracy (mean) | 91.57 | #14 |
| Learning with noisy labels | CIFAR-10N-Random1 | Positive-LS | Accuracy (mean) | 89.80 | #14 |
| Learning with noisy labels | CIFAR-10N-Random2 | Positive-LS | Accuracy (mean) | 89.35 | #15 |
| Learning with noisy labels | CIFAR-10N-Random3 | Positive-LS | Accuracy (mean) | 89.82 | #12 |
| Learning with noisy labels | CIFAR-10N-Worst | Positive-LS | Accuracy (mean) | 82.76 | #16 |
