Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations

22 May 2023 · Hao Chen, Ankit Shah, Jindong Wang, Ran Tao, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj

Learning with reduced labeling standards, such as noisy labels, partial labels, and multiple label candidates, which we generically refer to as imprecise labels, is a commonplace challenge in machine learning tasks. Previous methods tend to propose specific designs for every emerging imprecise label configuration, which is usually unsustainable when multiple configurations of imprecision coexist. In this paper, we introduce imprecise label learning (ILL), a framework that unifies learning with various imprecise label configurations. ILL leverages expectation-maximization (EM) to model the imprecise label information, treating the precise labels as latent variables. Instead of approximating the correct labels for training, it considers the entire distribution of all possible labelings entailed by the imprecise information. We demonstrate that ILL seamlessly adapts to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, mixtures of these settings. Notably, ILL surpasses existing specialized techniques for handling imprecise labels, marking the first unified framework with robust and effective performance across various challenging settings. We hope our work will inspire further research on this topic, unleashing the full potential of ILL in wider scenarios where precise labels are expensive and complicated to obtain.
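To make the EM idea concrete, below is a minimal PyTorch sketch of such an objective for the partial-label case, where the imprecise information is a candidate label set per example. The E-step renormalizes the model's predictions over the candidate set to form a posterior over the latent true label, and the M-step minimizes the expected negative log-likelihood under that posterior. Names such as ill_partial_label_loss, logits, and candidate_mask are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def ill_partial_label_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """EM-style loss over the distribution of labels allowed by the candidate set.

    logits:         (batch, num_classes) raw model outputs
    candidate_mask: (batch, num_classes) 1 for labels in the candidate set, 0 otherwise
    """
    log_probs = F.log_softmax(logits, dim=-1)

    # E-step: posterior over the latent true label, restricted to the candidate set
    # (model probabilities renormalized over the candidates; gradients are stopped).
    with torch.no_grad():
        masked = log_probs.masked_fill(candidate_mask == 0, float("-inf"))
        posterior = torch.softmax(masked, dim=-1)

    # M-step objective: expected negative log-likelihood under the posterior,
    # i.e. a soft cross-entropy against the whole candidate-label distribution
    # rather than a single approximated "correct" label.
    return -(posterior * log_probs).sum(dim=-1).mean()

# Usage sketch:
#   loss = ill_partial_label_loss(model(x), candidate_mask)
#   loss.backward()
```

Other imprecise-label configurations (noisy labels, unlabeled data) would change how the posterior in the E-step is formed, but the same expected-likelihood objective applies.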

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Partial Label Learning | Caltech-UCSD Birds 200 (partial ratio 0.05) | ILL | Accuracy | 70.77 | #1 |
| Learning with noisy labels | CIFAR-100N | ILL | Accuracy (mean) | 65.84 | #6 |
| Partial Label Learning | CIFAR-100 (partial ratio 0.01) | ILL | Accuracy | 75.31 | #1 |
| Partial Label Learning | CIFAR-100 (partial ratio 0.05) | ILL | Accuracy | 74.58 | #1 |
| Partial Label Learning | CIFAR-100 (partial ratio 0.1) | ILL | Accuracy | 74 | #1 |
| Learning with noisy labels | CIFAR-10N-Aggregate | ILL | Accuracy (mean) | 95.47 | #4 |
| Learning with noisy labels | CIFAR-10N-Random1 | ILL | Accuracy (mean) | 94.85 | #4 |
| Learning with noisy labels | CIFAR-10N-Random2 | ILL | Accuracy (mean) | 95.04 | #2 |
| Learning with noisy labels | CIFAR-10N-Random3 | ILL | Accuracy (mean) | 95.13 | #2 |
| Learning with noisy labels | CIFAR-10N-Worst | ILL | Accuracy (mean) | 93.58 | #3 |
| Partial Label Learning | CIFAR-10 (partial ratio 0.1) | ILL | Accuracy | 96.37 | #1 |
| Partial Label Learning | CIFAR-10 (partial ratio 0.3) | ILL | Accuracy | 96.26 | #1 |
| Partial Label Learning | CIFAR-10 (partial ratio 0.5) | ILL | Accuracy | 95.91 | #1 |
| Learning with noisy labels | Clothing1M | ILL | Test Accuracy | 74.02 | #3 |
| Learning with noisy labels | mini WebVision 1.0 | ILL | Top 1 Accuracy | 79.37 | #1 |
