f-AnoGAN: Fast Unsupervised Anomaly Detection with Generative Adversarial Networks

Obtaining expert labels in clinical imaging is difficult since exhaustive annotation is time-consuming. Furthermore, not all potentially relevant markers may be known and sufficiently well described a priori to even guide annotation. While supervised learning yields good results if expert-labeled training data is available, the visual variability we can detect and exploit, and thus the vocabulary of findings, is limited to the annotated lesions. Here, we present fast AnoGAN (f-AnoGAN), a generative adversarial network (GAN) based unsupervised learning approach capable of identifying anomalous images and image segments that can serve as imaging biomarker candidates. We build a generative model of healthy training data and propose and evaluate a fast technique for mapping new data to the GAN's latent space. The mapping is based on a trained encoder, and anomalies are detected via a combined anomaly score built from the components of the trained model: a discriminator feature residual error and an image reconstruction error. In experiments on optical coherence tomography (OCT) data, we compare the proposed method with alternative approaches and provide comprehensive empirical evidence that f-AnoGAN outperforms them while yielding high anomaly detection accuracy. In addition, a visual Turing test with two retina experts showed that the generated images are indistinguishable from real normal retinal OCT images. The f-AnoGAN code is available at https://github.com/tSchlegl/f-AnoGAN.
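The combined anomaly score described in the abstract can be summarized in a few lines. Below is a minimal PyTorch-style sketch, assuming pretrained networks are available as callables: a generator `G`, an encoder `E`, and a function `D_features` returning an intermediate feature map of the trained discriminator. These names and the weighting factor `kappa` are illustrative placeholders, not the API of the authors' repository.

```python
import torch

def anomaly_score(x, G, E, D_features, kappa=1.0):
    """Combined f-AnoGAN-style anomaly score for a batch of images x:
    image reconstruction error plus a weighted discriminator feature
    residual error, both computed against the encoder-based
    reconstruction G(E(x))."""
    with torch.no_grad():
        z = E(x)          # fast encoder-based mapping to latent space
        x_hat = G(z)      # reconstruction from the latent code

        # Image reconstruction error (pixel-space residual, per image).
        img_residual = ((x - x_hat) ** 2).flatten(1).mean(dim=1)

        # Discriminator feature residual error, using an intermediate
        # feature representation of the trained discriminator.
        f_x, f_x_hat = D_features(x), D_features(x_hat)
        feat_residual = ((f_x - f_x_hat) ** 2).flatten(1).mean(dim=1)

    # Images scoring above a threshold chosen on validation data
    # would be flagged as anomalous.
    return img_residual + kappa * feat_residual
```

Since the model is trained on healthy data only, reconstructions of anomalous inputs tend to miss the anomalous regions, which drives both residual terms up.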


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Anomaly Detection | Hyper-Kvasir Dataset | f-AnoGAN | AUC | 0.907 | #5 |
| Anomaly Detection | LAG | f-AnoGAN | AUC | 0.778 | #4 |
| Anomaly Detection | MVTec LOCO AD | f-AnoGAN | Avg. Detection AUROC | 64.2 | #31 |
| Anomaly Detection | MVTec LOCO AD | f-AnoGAN | Detection AUROC (only logical) | 65.8 | #31 |
| Anomaly Detection | MVTec LOCO AD | f-AnoGAN | Detection AUROC (only structural) | 62.7 | #31 |
| Anomaly Detection | MVTec LOCO AD | f-AnoGAN | Segmentation AU-sPRO (until FPR 5%) | 33.4 | #21 |
