Self-Supervised Training with Autoencoders for Visual Anomaly Detection

23 Jun 2022  ·  Alexander Bauer, Shinichi Nakajima, Klaus-Robert Müller

Recently, deep auto-encoders have been used for the task of anomaly detection in the visual domain. By optimising the reconstruction error on anomaly-free examples, the common belief is that the resulting network should fail to accurately reconstruct anomalous regions at application time. This goal is typically pursued by limiting the capacity of the network, either by reducing the size of the bottleneck layer or by enforcing sparsity constraints on its activations. However, neither of these techniques explicitly penalises the reconstruction of anomalous signals, which often results in poor detection. We tackle this problem by adapting a self-supervised learning regime that exploits discriminative information during training while remaining focused on the data manifold of normal examples. Specifically, we investigate two training objectives inspired by the task of neural image inpainting. Our main objective regularises the model to produce locally consistent reconstructions while replacing irregularities, so that it acts as a filter removing anomalous patterns. Our formal analysis shows that, under mild conditions, the corresponding model resembles a non-linear orthogonal projection of partially corrupted images onto the manifold of uncorrupted (defect-free) examples. This insight makes the reconstruction error a natural choice for the anomaly score of a sample, measured as its distance from the corresponding projection onto the data manifold. We emphasise that our approach is efficient at both training and prediction time, requiring only a single forward pass per input image. Our experiments on the MVTec AD dataset demonstrate high detection and localisation performance. On the texture subset, in particular, our approach consistently outperforms recent anomaly detection methods by a significant margin.
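The core recipe described in the abstract can be summarised in a short sketch. The PyTorch snippet below shows one plausible instantiation of the inpainting-inspired objective: random patches of a defect-free image are masked out, the autoencoder is trained to restore the uncorrupted image, and at test time the per-pixel reconstruction error of a single forward pass serves as the anomaly map. The architecture, mask size, and loss are illustrative assumptions and not the paper's exact configuration.

```python
# Minimal sketch (not the authors' code): inpainting-style self-supervised
# training of a convolutional autoencoder, with a reconstruction-error
# anomaly score at test time. All hyper-parameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def random_mask(x, patch=32):
    """Zero out one random square patch per image (one possible corruption)."""
    b, _, h, w = x.shape
    m = torch.ones_like(x)
    for i in range(b):
        y0 = torch.randint(0, h - patch, (1,)).item()
        x0 = torch.randint(0, w - patch, (1,)).item()
        m[i, :, y0:y0 + patch, x0:x0 + patch] = 0.0
    return x * m

def train_step(model, opt, clean):
    """Train the network to restore the clean image from its corrupted version."""
    corrupted = random_mask(clean)
    recon = model(corrupted)
    loss = F.mse_loss(recon, clean)  # target is always the uncorrupted image
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def anomaly_map(model, x):
    """Per-pixel anomaly score from a single forward pass: squared reconstruction error."""
    recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)  # (B, H, W) heatmap
```

Because the regression target is always an anomaly-free image, a model trained this way learns to replace irregular content with locally consistent texture rather than copy its input, which is the filtering behaviour the abstract describes.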
