Partial Label Supervision for Agnostic Generative Noisy Label Learning

2 Aug 2023  ·  Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro

Noisy label learning has been tackled with both discriminative and generative approaches. Despite the simplicity and efficiency of discriminative methods, generative models offer a more principled way of disentangling clean and noisy labels and estimating the label transition matrix. However, existing generative methods often require inferring additional latent variables through costly generative modules or heuristic assumptions, which hinders adaptive optimisation for different causal directions. They also assume a uniform clean label prior, which does not reflect the sample-wise clean label distribution and uncertainty. In this paper, we propose a novel framework for generative noisy label learning that addresses these challenges. First, we propose a new single-stage optimisation that directly approximates image generation by a discriminative classifier output. This approximation significantly reduces the computation cost of image generation, preserves the benefits of generative modelling, and makes our framework agnostic with regard to different causality scenarios (i.e., whether the image generates the label or vice versa). Second, we introduce a new Partial Label Supervision (PLS) for noisy label learning that accounts for both clean label coverage and uncertainty. The supervision of PLS does not merely aim at minimising loss, but seeks to capture the underlying sample-wise clean label distribution and uncertainty. Extensive experiments on computer vision and natural language processing (NLP) benchmarks demonstrate that our generative modelling achieves state-of-the-art results while significantly reducing the computation cost. Our code is available at https://github.com/lfb-1/GNL.
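The abstract does not spell out the PLS objective, but the general idea of partial-label supervision can be illustrated with a generic candidate-set loss: instead of forcing the model onto a single (possibly noisy) label, the loss maximises the total probability mass assigned to a per-sample set of candidate labels (e.g. the observed noisy label plus high-confidence model predictions). The candidate-set construction and the function below are an illustrative sketch, not the paper's exact formulation.

```python
import numpy as np

def partial_label_loss(logits, candidate_mask):
    """Generic partial-label loss: -log of the total probability mass
    the softmax assigns to each sample's candidate label set.

    logits:         (batch, classes) raw classifier scores
    candidate_mask: (batch, classes) 1 for labels in the partial set,
                    0 otherwise. How the set is built (noisy label plus
                    confident predictions) is an assumption here, not
                    the paper's specific PLS rule.
    """
    # numerically stable softmax
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # probability mass inside each candidate set
    mass = (p * candidate_mask).sum(axis=1)
    return -np.log(np.clip(mass, 1e-12, None)).mean()

# toy example: 2 samples, 3 classes
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2,  3.0]])
mask = np.array([[1, 1, 0],   # clean label is class 0 or 1 (uncertain)
                 [0, 0, 1]])  # singleton set: ordinary cross-entropy
loss = partial_label_loss(logits, mask)
```

A larger candidate set always yields a lower loss (more coverage), which is why PLS-style methods must also penalise uninformative, overly large sets; with the full label set as candidates the loss is exactly zero.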

Benchmark results (task: Learning with noisy labels; model: GNL)

Dataset              Metric           Value  Global Rank
ANIMAL               Accuracy         85.9   #7   (network: VGG-19-BN, no ImageNet pretraining)
CIFAR-10N-Aggregate  Accuracy (mean)  92.57  #9
CIFAR-10N-Random1    Accuracy (mean)  91.97  #7
CIFAR-10N-Random2    Accuracy (mean)  91.42  #6
CIFAR-10N-Random3    Accuracy (mean)  91.83  #5
CIFAR-10N-Worst      Accuracy (mean)  86.99  #8
