
CLCIFAR: CIFAR-Derived Benchmark Datasets with Human Annotated Complementary Labels

Complementary-label learning (CLL) is a weakly-supervised learning paradigm that aims to train a multi-class classifier using only complementary labels, which indicate classes to which an instance does not belong. Despite numerous algorithmic proposals for CLL, their practical performance remains unclear for two reasons. First, these algorithms often rely on assumptions about how complementary labels are generated. Second, their evaluation has been limited to synthetic datasets. To gain insights into the real-world performance of CLL algorithms, we developed a protocol to collect complementary labels annotated by human annotators. This effort resulted in two datasets, CLCIFAR10 and CLCIFAR20, derived from CIFAR10 and CIFAR100, respectively. These datasets, publicly released at https://github.com/ntucllab/complementary_cifar, are the first real-world CLL datasets. Through extensive benchmark experiments, we observed a notable decline in performance when transitioning from synthetic to real-world datasets. We conducted a dataset-level ablation study to investigate the key factors contributing to this decline. Our analyses highlighted annotation noise as the most influential factor in the real-world datasets. Additionally, the biased nature of human-annotated complementary labels was found to make certain CLL algorithms more susceptible to overfitting. These findings suggest that the community should devote more research effort to developing CLL algorithms that are robust to noisy and biased complementary-label distributions.
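To make the setup concrete, the following is a minimal sketch (not the paper's method) of the two ingredients the abstract describes: drawing a synthetic complementary label uniformly from the non-true classes, and a simple surrogate loss that pushes down the predicted probability of the class the instance does *not* belong to. Function names and the specific loss form are illustrative assumptions, not taken from the paper or its codebase.

```python
import numpy as np

def complementary_label(true_label, num_classes, rng):
    """Draw a synthetic complementary label uniformly at random from
    all classes except the true one (the common uniform assumption
    that real human annotations tend to violate)."""
    candidates = [c for c in range(num_classes) if c != true_label]
    return int(rng.choice(candidates))

def complementary_loss(logits, comp_label):
    """Illustrative CLL surrogate: -log(1 - p_comp), which decreases
    as the model assigns less probability to the complementary class."""
    z = logits - logits.max()              # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return float(-np.log(1.0 - p[comp_label] + 1e-12))

rng = np.random.default_rng(0)
cl = complementary_label(3, num_classes=10, rng=rng)
# The loss is smaller when the complementary class already has a low logit.
low = complementary_loss(np.array([2.0, 0.0, 0.0]), comp_label=1)
high = complementary_loss(np.array([0.0, 2.0, 0.0]), comp_label=1)
```

Human-annotated datasets such as CLCIFAR10 replace the uniform draw above with real annotator choices, which are both biased (some wrong classes are picked far more often than others) and noisy (the "complementary" label is sometimes actually the true class).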
