Exploring Dual Model Knowledge Distillation for Anomaly Detection

Preprint 2023  ·  Simon Thomine, Hichem Snoussi

Unsupervised anomaly detection holds significant importance in large-scale industrial manufacturing. Recent methods capitalize on a classifier pretrained on natural images to extract representative features from specific layers, which are then processed with various techniques. Notably, memory-bank-based methods achieve exceptional accuracy, but at a cost in latency; this trade-off is a challenge in real-time industrial applications where prompt anomaly detection and response are crucial. Alternative approaches such as knowledge distillation and normalizing flows have shown promising performance in unsupervised anomaly detection while maintaining low latency. In this paper, we revisit knowledge distillation in the context of unsupervised anomaly detection, emphasizing the significance of feature selection: by employing distinctive features and leveraging different models, we highlight the importance of carefully selecting and using features specifically tailored to anomaly detection. This article introduces a novel approach based on dual-model knowledge distillation for anomaly detection, leveraging both deep and shallow layers to incorporate different types of semantic information.
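The core mechanism behind distillation-based anomaly detection can be sketched in a few lines: a student is trained to reproduce a teacher's features on normal data, so at test time the per-location discrepancy between teacher and student features serves as an anomaly map, and maps from several layers (or, as here, from two models) are upsampled and fused. The sketch below is illustrative only and is not the paper's implementation; the function names, feature shapes, and nearest-neighbour fusion are all assumptions.

```python
import numpy as np

def anomaly_map(teacher_feat, student_feat):
    # Per-location anomaly map: 1 - cosine similarity between
    # teacher and student feature maps of shape (C, H, W).
    t = teacher_feat / (np.linalg.norm(teacher_feat, axis=0, keepdims=True) + 1e-8)
    s = student_feat / (np.linalg.norm(student_feat, axis=0, keepdims=True) + 1e-8)
    return 1.0 - (t * s).sum(axis=0)  # shape (H, W), values in [0, 2]

def fuse_maps(maps, out_hw):
    # Upsample each map to a common resolution (nearest neighbour)
    # and sum, mimicking multi-layer / multi-model fusion.
    H, W = out_hw
    fused = np.zeros((H, W))
    for m in maps:
        ry, rx = H // m.shape[0], W // m.shape[1]
        fused += np.repeat(np.repeat(m, ry, axis=0), rx, axis=1)
    return fused

# Toy stand-ins for a shallow-layer map (fine detail) and a
# deep-layer map (high-level semantics) from two backbones.
rng = np.random.default_rng(0)
shallow_t, shallow_s = rng.normal(size=(2, 64, 32, 32))
deep_t, deep_s = rng.normal(size=(2, 256, 8, 8))

fused = fuse_maps([anomaly_map(shallow_t, shallow_s),
                   anomaly_map(deep_t, deep_s)], (32, 32))
score = fused.max()  # image-level anomaly score
```

In practice the teacher features would come from a frozen pretrained network and the student from a network regressed onto them over defect-free images; the maximum (or a high quantile) of the fused map then yields the image-level score used for detection AUROC.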


Results from the Paper


Task               Dataset              Model      Metric           Value   Global Rank
Anomaly Detection  MVTec AD             DualModel  Detection AUROC  96.2    #54
Anomaly Detection  MVTec AD (textures)  DualModel  Detection AUROC  99.94   #1
