AnoShift (AnoShift: A Distribution Shift Benchmark for Unsupervised Anomaly Detection)

Introduced by Dragoi et al. in AnoShift: A Distribution Shift Benchmark for Unsupervised Anomaly Detection

AnoShift is a large-scale anomaly detection benchmark that splits the test data based on its temporal distance to the training set, introducing three test splits: IID, NEAR, and FAR. This testing scenario captures the in-time performance degradation of anomaly detection methods, from classical models to masked language models.
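The split logic is straightforward to reproduce. The minimal sketch below partitions a toy table of timestamped records into the three test splits by year; the year boundaries (train/IID 2006-2010, NEAR 2011-2013, FAR 2014-2015) follow the AnoShift paper's protocol, while the DataFrame and its column names ("year", "label") are hypothetical stand-ins for the actual Kyoto traffic logs.

```python
import pandas as pd

# Toy stand-in for the benchmark's timestamped traffic records; only the
# timestamp column matters for the split logic. Column names are hypothetical.
df = pd.DataFrame({
    "year":  [2006, 2008, 2010, 2011, 2012, 2013, 2014, 2015],
    "label": [0, 0, 1, 0, 1, 0, 0, 1],  # 1 = anomaly
})

# Year ranges as described in the AnoShift paper: models train on the earliest
# period, and the test splits move progressively further away in time.
train_iid = df[df["year"].between(2006, 2010)]  # training pool / IID test period
near      = df[df["year"].between(2011, 2013)]  # NEAR: moderate temporal shift
far       = df[df["year"].between(2014, 2015)]  # FAR: strongest temporal shift

print(len(train_iid), len(near), len(far))
```

Evaluating a model separately on each split makes the temporal degradation visible: performance is typically highest on IID and drops progressively on NEAR and FAR.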

The AnoShift benchmark aims to provide a better estimate of an anomaly detection model's performance under the natural distribution shifts that occur over time in the input, bringing evaluation closer to real-world conditions and encouraging more robust anomaly detection algorithms.

The benchmark is based on the Kyoto-2016 dataset (https://www.takakura.com/Kyoto_data/).
