CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances

Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning. To this end, there have been many attempts at learning a representation well-suited for novelty detection and designing a score based on such a representation. In this paper, we propose a simple yet effective method named contrasting shifted instances (CSI), inspired by the recent success of contrastive learning of visual representations. Specifically, in addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself. Based on this, we propose a new detection score that is specific to the proposed training scheme. Our experiments demonstrate the superiority of our method under various novelty detection scenarios, including unlabeled one-class, unlabeled multi-class, and labeled multi-class settings, on various image benchmark datasets. Code and pre-trained models are available at https://github.com/alinlab/CSI.
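The core idea can be sketched in a few lines of NumPy: a SimCLR-style NT-Xent contrastive loss in which distributionally shifted copies of an image (e.g., rotations) are appended to the batch as separate instances, so the model learns to push them away from the original; and a detection score that combines cosine similarity to the nearest training representation with the feature norm. This is a simplified, hedged illustration under those assumptions, not the authors' implementation (see the linked repository); the function names and toy random embeddings below are placeholders.

```python
import numpy as np

def l2_normalize(z, eps=1e-12):
    """Project each row of z onto the unit sphere."""
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)

def _logsumexp(a, axis):
    """Numerically stable log-sum-exp along an axis (keepdims)."""
    m = np.max(a, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two views (z1[i], z2[i]) of each
    instance. In a CSI-style batch, shifted copies of an image are
    included as *separate* instances, so they act as negatives and are
    pushed away from the original."""
    n = len(z1)
    z = l2_normalize(np.concatenate([z1, z2], axis=0))       # (2n, d)
    sim = (z @ z.T) / temperature
    np.fill_diagonal(sim, -np.inf)                           # drop self-pairs
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])   # i <-> i+n
    log_prob = sim - _logsumexp(sim, axis=1)
    return -log_prob[np.arange(2 * n), pos].mean()

def detection_score(z_test, z_train):
    """Simplified score in the spirit of CSI's detector (an assumption):
    cosine similarity to the nearest training representation, scaled by
    the norm of the test representation. Higher = more in-distribution."""
    cos = l2_normalize(z_test) @ l2_normalize(z_train).T     # (m, k)
    return cos.max(axis=1) * np.linalg.norm(z_test, axis=1)

# Toy usage: shifted (e.g., rotated) copies enter the batch as extra rows.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))                  # 4 "images" -> embeddings
shifted = rng.normal(size=(4, 8))                # embeddings of shifted copies
view1 = np.concatenate([feats, shifted])         # batch of 8 instances
view2 = view1 + 0.05 * rng.normal(size=view1.shape)  # second augmented view
loss = nt_xent(view1, view2)
```

In a real pipeline the embeddings would come from an encoder (the paper uses ResNet-18) applied to augmented and shifted images; treating the shifted copy as a distinct instance, rather than another positive view, is the distinguishing design choice.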

NeurIPS 2020
| Task | Dataset | Model | Network | Metric | Value | Global Rank |
|---|---|---|---|---|---|---|
| Anomaly Detection | Unlabeled ImageNet-30 vs Flowers-102 | CSI | ResNet-18 | ROC-AUC | 94.7 | # 2 |
| Anomaly Detection | Unlabeled CIFAR-10 vs LSUN (Fix) | CSI | ResNet-18 | ROC-AUC | 90.3 | # 6 |
| Anomaly Detection | Unlabeled ImageNet-30 vs CUB-200 | CSI | ResNet-18 | ROC-AUC | 71.5 | # 5 |
| Anomaly Detection | One-class CIFAR-10 | CSI | | AUROC | 94.3 | # 10 |
| Anomaly Detection | One-class CIFAR-100 | CSI | | AUROC | 89.6 | # 5 |
| Anomaly Detection | One-class ImageNet-30 | CSI | | AUROC | 91.6 | # 4 |
| Anomaly Detection | Unlabeled CIFAR-10 vs CIFAR-100 | CSI | ResNet-18 | AUROC | 89.3 | # 7 |
