Adversary-Aware Partial Label Learning with Label Distillation

2 Apr 2023 · Cheng Chen, Yueming Lyu, Ivor W. Tsang

To keep the data collected from human subjects confidential, rival labels are deliberately introduced to conceal the information provided by the participants. The corresponding learning task can be formulated as a noisy partial-label learning problem. However, conventional partial-label learning (PLL) methods remain vulnerable to a high ratio of noisy partial labels, especially in a large labelling space. To learn a more robust model, we present Adversary-Aware Partial Label Learning, which introduces the $\textit{rival}$, a set of noisy labels, into the collection of candidate labels for each instance. Introducing the rival allows the predictive distribution of PLL to be factorised so that, assuming the rival generation process is known, a predictive label can be obtained with less uncertainty stemming from the transition matrix. Nonetheless, the predictive accuracy remains too low to produce a positive sample set accurate enough to exploit the clustering effect of the contrastive loss function. Moreover, the inclusion of rivals introduces an inconsistency issue for the classifier and the risk function due to the intractability of the transition matrix. Consequently, an adversarial teacher within momentum (ATM) disambiguation algorithm is proposed to cope with this situation, allowing us to obtain a provably consistent classifier and risk function. In addition, our method shows high resilience to the choice of the label-noise transition matrix. Extensive experiments demonstrate that our method achieves promising results on the CIFAR-10, CIFAR-100 and CUB-200 datasets.
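The abstract mentions two concrete mechanisms: injecting rival (noisy) labels into each instance's candidate label set through a known transition matrix, and a momentum-based teacher used for disambiguation. A standard factorisation of the kind the abstract alludes to (our reading, not necessarily the paper's exact form) is $P(S \mid x) = \sum_{y} P(S \mid y)\, P(y \mid x)$, where the candidate-set likelihood $P(S \mid y)$ is determined by the transition matrix. The sketch below illustrates both mechanisms under stated assumptions; it is not the authors' implementation, and the uniform off-diagonal transition matrix, the function names, and the exponential-moving-average form of the momentum update are all hypothetical choices for illustration.

```python
# Illustrative sketch only (not the paper's code): rival-label injection with a
# known, uniform transition matrix, plus an EMA "momentum teacher" update.
import numpy as np

def make_rival_candidates(y_true, num_classes, noise_rate=0.3, rng=None):
    """Build boolean candidate-label sets: the true label is always included,
    and each wrong class enters as a rival with probability noise_rate/(K-1),
    i.e. a uniform off-diagonal transition matrix (an assumption here)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(y_true)
    candidates = np.zeros((n, num_classes), dtype=bool)
    candidates[np.arange(n), y_true] = True          # ground truth stays in the set
    rivals = rng.random((n, num_classes)) < noise_rate / (num_classes - 1)
    rivals[np.arange(n), y_true] = False             # rivals are wrong classes only
    return candidates | rivals

def momentum_teacher_update(teacher, student, m=0.99):
    """One EMA step: teacher <- m * teacher + (1 - m) * student.
    This is the usual form of a 'momentum' teacher; the paper's ATM
    disambiguation may differ in detail."""
    return [m * t + (1.0 - m) * s for t, s in zip(teacher, student)]

if __name__ == "__main__":
    y = np.array([0, 2, 1, 3])
    C = make_rival_candidates(y, num_classes=5)
    print(C.astype(int))  # each row: one instance's candidate set (true label + rivals)
```

Because the rival generation process is assumed known, the transition matrix encoded by `make_rival_candidates` plays the role of the $P(S \mid y)$ term in the factorisation above; the difficulty the abstract points to is that this matrix is intractable for the classifier and risk function, which is what the ATM disambiguation step is designed to absorb.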
