Learning with noisy labels

121 papers with code • 18 benchmarks • 13 datasets

In learning with noisy labels, the observed training labels are assumed to have been corrupted, for example by an adversary who intentionally perturbs labels that would otherwise come from a "clean" distribution. This setting can also be used to cast learning from only positive and unlabeled data.
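As a toy illustration, here is a minimal NumPy sketch (names and details are mine, not from any particular paper) of simulating symmetric label noise on clean labels, and of casting positive-unlabeled data as label noise by treating unlabeled examples as noisy negatives:

```python
import numpy as np

def flip_labels(y, noise_rate, num_classes, rng=np.random.default_rng(0)):
    """Symmetric label noise: with probability `noise_rate`, replace each
    label with a uniformly sampled *different* class."""
    y_noisy = y.copy()
    flip = rng.random(len(y)) < noise_rate
    # sample a random offset 1..num_classes-1 so the new label differs
    offsets = rng.integers(1, num_classes, size=flip.sum())
    y_noisy[flip] = (y[flip] + offsets) % num_classes
    return y_noisy

# PU learning as label noise: labeled positives keep label 1, unlabeled
# examples are treated as (noisy) negatives -- some are actually positive.
y_true = np.array([1, 1, 0, 0, 1])
labeled_positive = np.array([True, False, False, False, False])
y_pu = np.where(labeled_positive, 1, 0)  # noisy "negative" labels
```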

Most implemented papers

Sharpness-Aware Minimization for Efficiently Improving Generalization

google-research/sam ICLR 2021

In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability.
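SAM seeks parameters whose entire neighborhood has uniformly low loss, via a two-step update: climb to an adversarial weight perturbation, then descend using the gradient taken there. Below is a minimal PyTorch sketch of that update, not the google-research/sam implementation; `rho` is the neighborhood radius from the paper.

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One sharpness-aware update: (1) climb to the adversarial weight
    perturbation w + e(w), (2) descend using the gradient taken there."""
    model.zero_grad()
    loss_fn(model(x), y).backward()          # gradient at w
    with torch.no_grad():
        params = [p for p in model.parameters() if p.grad is not None]
        norm = torch.norm(torch.stack([p.grad.norm(2) for p in params]), 2)
        eps = [rho * p.grad / (norm + 1e-12) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)                        # w -> w + e(w)
    model.zero_grad()
    loss_fn(model(x), y).backward()          # gradient at w + e(w)
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                        # restore w
    base_optimizer.step()                    # update w with perturbed grad
    model.zero_grad()
```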

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

bhanML/Co-teaching NeurIPS 2018

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.
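Co-teaching trains two networks simultaneously; each selects its small-loss (likely clean) examples and teaches them to its peer. A minimal PyTorch sketch of one such step, assuming a `forget_rate` schedule is supplied by the caller:

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, forget_rate):
    """Each network picks its small-loss (likely clean) examples, and the
    *peer* network is trained on that selection."""
    n_keep = int((1.0 - forget_rate) * len(y))
    with torch.no_grad():
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")
    idx1 = torch.argsort(loss1)[:n_keep]  # net1's small-loss picks
    idx2 = torch.argsort(loss2)[:n_keep]  # net2's small-loss picks
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()  # net1 <- net2's picks
    opt1.step()
    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()  # net2 <- net1's picks
    opt2.step()
```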

Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels

AlanChou/Truncated-Loss NeurIPS 2018

Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE.
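The core L_q loss interpolates between CCE (as q approaches 0) and MAE (q = 1). A minimal PyTorch sketch of the plain L_q loss, without the paper's truncation variant:

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """L_q loss: (1 - p_y^q) / q. As q -> 0 this recovers CCE; q = 1
    gives MAE; intermediate q trades noise robustness against fit."""
    p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1))
    return ((1.0 - p_y.clamp(min=1e-7).pow(q)) / q).mean()
```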

Symmetric Cross Entropy for Robust Learning with Noisy Labels

YisenWang/symmetric_cross_entropy_for_noisy_labels ICCV 2019

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under learning on some other classes ("hard" classes).
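The proposed Symmetric Cross Entropy pairs standard CE with a noise-robust Reverse Cross Entropy term in which prediction and label swap roles. A minimal PyTorch sketch, with log 0 on the one-hot label clamped to a constant A; the default weights here are assumptions:

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """SCE = alpha * CE + beta * RCE, where the reverse term swaps the
    roles of prediction and label and log(0) is clamped to A."""
    ce = F.cross_entropy(logits, targets)
    p = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    # log of the one-hot label distribution: 0 on the true class, A elsewhere
    log_labels = torch.where(one_hot > 0, torch.zeros_like(one_hot),
                             A * torch.ones_like(one_hot))
    rce = -(p * log_labels).sum(dim=1).mean()
    return alpha * ce + beta * rce
```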

Confident Learning: Estimating Uncertainty in Dataset Labels

cleanlab/cleanlab 31 Oct 2019

Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence.
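The cleanlab repository packages this; the sketch below reimplements only the thresholded counting idea from scratch in NumPy (the function name and simplifications are mine, not cleanlab's API, and it assumes every class appears among the given labels):

```python
import numpy as np

def flag_label_issues(labels, pred_probs):
    """Confident-learning-style filter: t_k is the mean self-confidence
    of class k over examples *given* label k; an example is flagged when
    the class it confidently belongs to differs from its given label."""
    n, n_classes = pred_probs.shape
    thresholds = np.array([pred_probs[labels == k, k].mean()
                           for k in range(n_classes)])
    suspect = np.zeros(n, dtype=bool)
    for i in range(n):
        hits = np.flatnonzero(pred_probs[i] >= thresholds)
        if hits.size:
            k_star = hits[np.argmax(pred_probs[i, hits])]
            suspect[i] = (k_star != labels[i])
    return suspect
```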

Normalized Loss Functions for Deep Learning with Noisy Labels

HanxunH/Active-Passive-Losses ICML 2020

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
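The paper shows that normalizing a loss over all possible labels makes it robust, then pairs an "active" normalized loss with a "passive" robust loss (APL). A minimal PyTorch sketch of normalized CE plus an MAE partner; how to weight the combination is left to the caller:

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    """NCE: cross entropy divided by its sum over all possible labels,
    which makes the loss symmetric and hence noise-tolerant."""
    log_p = F.log_softmax(logits, dim=1).clamp(min=-1e4)
    numer = -log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    denom = -log_p.sum(dim=1)
    return (numer / denom).mean()

def mae_loss(logits, targets):
    """MAE over probabilities; a 'passive' robust partner for NCE in an
    active-passive combination, e.g. loss = a * NCE + b * MAE."""
    p = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    return (p - one_hot).abs().sum(dim=1).mean()
```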

Open-set Label Noise Can Improve Robustness Against Inherent Label Noise

hongxin001/ODNL NeurIPS 2021

Learning with noisy labels is a practically challenging problem in weakly supervised learning.
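The paper's regularizer injects open-set auxiliary inputs whose labels are re-drawn at random as training proceeds. A hedged PyTorch sketch of that idea; the trade-off weight `eta` and the sampling details are assumptions, not the hongxin001/ODNL implementation:

```python
import torch
import torch.nn.functional as F

def odnl_regularized_loss(model, x, y, x_open, num_classes, eta=1.0):
    """Training loss plus a regularizer on open-set auxiliary inputs whose
    labels are re-drawn uniformly at random every time they are seen."""
    loss_clean = F.cross_entropy(model(x), y)
    y_rand = torch.randint(0, num_classes, (x_open.size(0),),
                           device=x_open.device)  # dynamic random labels
    loss_open = F.cross_entropy(model(x_open), y_rand)
    return loss_clean + eta * loss_open
```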

How does Disagreement Help Generalization against Label Corruption?

xingruiyu/coteaching_plus 14 Jan 2019

Learning with noisy labels is one of the hottest problems in weakly-supervised learning.
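Co-teaching+ keeps the two networks diverged by restricting the small-loss exchange to examples on which they disagree. A minimal sketch of that pre-filter, meant to be composed with a Co-teaching-style cross-update like the one sketched above:

```python
import torch

def disagreement_indices(net1, net2, x):
    """Co-teaching+ style pre-filter: keep only the examples on which the
    two networks predict different classes, so they stay diverged."""
    with torch.no_grad():
        pred1 = net1(x).argmax(dim=1)
        pred2 = net2(x).argmax(dim=1)
    return torch.nonzero(pred1 != pred2, as_tuple=True)[0]
```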

Probabilistic End-to-end Noise Correction for Learning with Noisy Labels

yikun2019/PENCIL CVPR 2019

Deep learning has achieved excellent performance in various computer vision tasks, but requires a lot of training examples with clean labels.
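PENCIL keeps a learnable label distribution per training example and corrects it by backpropagation alongside the network. A hedged PyTorch sketch of the label variable and the three loss terms; the weights and initialization scale are assumptions, not the yikun2019/PENCIL defaults:

```python
import torch
import torch.nn.functional as F

class LabelEstimates(torch.nn.Module):
    """Learnable per-example label logits, initialized from the noisy
    one-hot labels and corrected end-to-end by backprop."""
    def __init__(self, noisy_labels, num_classes, k=10.0):
        super().__init__()
        init = k * F.one_hot(noisy_labels, num_classes).float()
        self.logits = torch.nn.Parameter(init)

def pencil_loss(pred_logits, label_logits, noisy_labels, alpha=0.4, beta=0.1):
    """Three terms: prediction vs. estimated label distribution,
    compatibility with the original noisy labels, and an entropy term
    on the prediction."""
    log_p = F.log_softmax(pred_logits, dim=1)
    y_d = F.softmax(label_logits, dim=1)
    # KL(prediction || estimated labels)
    l_c = F.kl_div(torch.log(y_d.clamp(min=1e-7)), log_p.exp(),
                   reduction="batchmean")
    l_o = F.cross_entropy(label_logits, noisy_labels)  # stay near noisy labels
    l_e = -(log_p.exp() * log_p).sum(dim=1).mean()     # prediction entropy
    return l_c + alpha * l_o + beta * l_e
```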

Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach

giorgiop/loss-correction CVPR 2017

We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise.
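The paper's forward correction pushes the model's clean-class probabilities through the noise transition matrix before applying the loss. A minimal PyTorch sketch, assuming T is known or has been estimated:

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T):
    """Forward correction: push clean-class probabilities through the
    noise transition matrix T (T[i, j] = P(noisy=j | true=i)) and apply
    cross entropy against the observed noisy labels."""
    p_clean = F.softmax(logits, dim=1)
    p_noisy = p_clean @ T                      # predicted noisy-label probs
    log_p_noisy = torch.log(p_noisy.clamp(min=1e-7))
    return F.nll_loss(log_p_noisy, noisy_targets)
```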