Out-of-Distribution Detection
328 papers with code • 51 benchmarks • 23 datasets
Detect out-of-distribution or anomalous examples.
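To make the task concrete, here is a minimal sketch of the classic maximum-softmax-probability (MSP) baseline score; the `model`, the batch, and the threshold are illustrative assumptions rather than any specific paper's method.

```python
# Minimal sketch of the maximum-softmax-probability (MSP) OOD score:
# a sample whose highest softmax probability is low is flagged as OOD.
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return the max softmax probability per sample (higher = more in-distribution)."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                      # (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values            # (batch,)

# Usage: flag inputs whose MSP falls below a threshold chosen on validation data.
# is_ood = msp_score(model, x_batch) < 0.5
```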
Most implemented papers
SSD: A Unified Framework for Self-Supervised Outlier Detection
We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin.
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
Mahalanobis distance (MD) is a simple and popular post-processing method for detecting out-of-distribution (OOD) inputs in neural networks.
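A hedged sketch of the basic Mahalanobis-distance scoring this line of work builds on: fit per-class means and a shared covariance on in-distribution features, then score a test feature by its minimum distance to any class mean. The feature-extraction step and helper names are assumptions, and the paper's proposed "relative" correction to MD is not reproduced here.

```python
# Mahalanobis-distance OOD scoring on penultimate-layer features (sketch).
import numpy as np

def fit_gaussians(feats: np.ndarray, labels: np.ndarray):
    """feats: (N, D) in-distribution features, labels: (N,) class ids."""
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centered = feats - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(feats)           # shared (tied) covariance
    return means, np.linalg.pinv(cov)

def mahalanobis_score(means: np.ndarray, cov_inv: np.ndarray, f: np.ndarray) -> float:
    """Return negative minimum squared distance (higher = more in-distribution)."""
    diffs = means - f                                   # (num_classes, D)
    d2 = np.einsum('cd,de,ce->c', diffs, cov_inv, diffs)
    return -float(d2.min())
```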
Natural Synthetic Anomalies for Self-Supervised Anomaly Detection and Localization
We introduce a simple and intuitive self-supervision task, Natural Synthetic Anomalies (NSA), for training an end-to-end model for anomaly detection and localization using only normal training data.
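A simplified illustration of the self-supervision idea: corrupt a normal image with a patch copied from another normal image and keep the mask as a pixel-level label. NSA itself blends and rescales patches seamlessly (Poisson editing); this cut-and-paste version is only a sketch under that assumption.

```python
# Create a "synthetic anomaly" from normal images only (simplified cut-and-paste).
import numpy as np

def synthetic_anomaly(img: np.ndarray, src: np.ndarray, patch: int = 32, rng=None):
    """img, src: (H, W, C) normal images larger than `patch`; returns corrupted image and mask."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    y, x = rng.integers(0, h - patch), rng.integers(0, w - patch)
    sy, sx = rng.integers(0, h - patch), rng.integers(0, w - patch)
    out, mask = img.copy(), np.zeros((h, w), dtype=np.uint8)
    out[y:y + patch, x:x + patch] = src[sy:sy + patch, sx:sx + patch]
    mask[y:y + patch, x:x + patch] = 1                  # pixel-level anomaly label
    return out, mask
```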
Generalized Out-of-Distribution Detection: A Survey
In this survey, we first present a unified framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD.
Out of Distribution Detection via Neural Network Anchoring
Our goal in this paper is to exploit heteroscedastic temperature scaling as a calibration strategy for out of distribution (OOD) detection.
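For orientation, a sketch of plain temperature scaling applied to an OOD score; the paper derives a per-sample (heteroscedastic) temperature via neural network anchoring, which this fixed-temperature version does not reproduce.

```python
# Temperature scaling before softmax softens over-confident predictions (sketch).
import torch
import torch.nn.functional as F

def scaled_msp(logits: torch.Tensor, temperature: float = 2.0) -> torch.Tensor:
    """Max softmax probability computed on temperature-scaled logits."""
    return F.softmax(logits / temperature, dim=-1).max(dim=-1).values
```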
OpenOOD: Benchmarking Generalized Out-of-Distribution Detection
Out-of-distribution (OOD) detection is vital to safety-critical machine learning applications and has thus been extensively studied, with a plethora of methods developed in the literature.
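A minimal sketch of the evaluation protocol such benchmarks rely on: treat ID-versus-OOD separation as binary classification of a scalar score and report AUROC. The function below is an illustrative assumption, not OpenOOD's API.

```python
# Standard OOD evaluation: AUROC over in-distribution vs. out-of-distribution scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(scores_id: np.ndarray, scores_ood: np.ndarray) -> float:
    """Higher score should mean 'more in-distribution'; returns AUROC."""
    y_true = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    y_score = np.concatenate([scores_id, scores_ood])
    return roc_auc_score(y_true, y_score)
```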
Probabilistic MIMO U-Net: Efficient and Accurate Uncertainty Estimation for Pixel-wise Regression
For that purpose, we adapted the U-Net architecture to train multiple subnetworks within a single model, harnessing the overparameterization in deep neural networks.
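A toy sketch of the MIMO idea of training multiple subnetworks inside one model: stack M inputs into a single forward pass, read out M predictions, and use their spread as an uncertainty estimate. This small MLP stands in for the paper's U-Net and is an assumption, not the authors' architecture.

```python
# MIMO-style model: one forward pass carries several implicit ensemble members.
import torch
import torch.nn as nn

class MimoMLP(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64, members: int = 3):
        super().__init__()
        self.members = members
        self.net = nn.Sequential(
            nn.Linear(in_dim * members, hidden), nn.ReLU(),
            nn.Linear(hidden, members),          # one regression output per member
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Training: x concatenates `members` independently sampled inputs.
        # Inference: tile one input `members` times and use mean/std of outputs.
        return self.net(x)

# preds = model(x.repeat(1, model.members)); mean, std = preds.mean(-1), preds.std(-1)
```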
On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
In this work, we discuss a hitherto untouched aspect of mixup training -- the calibration and predictive uncertainty of models trained with mixup.
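For reference, a minimal sketch of standard mixup training, whose calibration effects the paper studies; the variable names and the Beta parameter are illustrative.

```python
# Mixup: convex-combine pairs of inputs and labels with a Beta-sampled weight.
import numpy as np
import torch

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """x: (B, ...) inputs, y: (B, C) one-hot or soft labels."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```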
Detecting Underspecification with Local Ensembles
We present local ensembles, a method for detecting underspecification -- when many possible predictors are consistent with the training data and model class -- at test time in a pre-trained model.
Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data
Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise.