A2B-GAN: Utilizing Unannotated Anomalous Images for Anomaly Detection in Medical Image Analysis

29 Sep 2021 · Md Mahfuzur Rahman Siddiquee, Teresa Wu, Baoxin Li

Automated anomaly detection in medical images can significantly reduce the human effort required for disease diagnosis. Owing to the complexity of modeling anomalies and the high cost of manual annotation by domain experts, a typical technique in the current literature is to learn a model of normal images from data of healthy subjects only, and then to detect anomalies as outliers under this model. In many real applications, however, mixed datasets containing both normal and potentially abnormal images (e.g., images of patients with confirmed diseases) are abundant. This paper poses the research question of how to improve anomaly detection by using an unannotated set of mixed normal and anomalous samples, in addition to a set of normal images from healthy subjects. We propose a novel one-directional image-to-image translation method named A2B-GAN, which learns to translate any image into a normal image (hence "one-directional"). This removes the cycle-consistency requirement of existing unpaired image-to-image translation methods, which cannot be enforced with unannotated data. Once the translation is learned, we generate a difference map for any given image by subtracting its translated output from the original image; regions of significant response in the difference map correspond to potential anomalies, if any. In terms of average AUC, A2B-GAN outperforms state-of-the-art methods by 0.1 points (approximately 16.25%) on two medical imaging tasks, COVID-19 detection and Cardiomegaly detection, by utilizing an unannotated set mixed with anomalies. Our code will be made publicly available upon the paper's decision.
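
To make the detection step concrete, the sketch below shows how a difference map can be computed once a one-directional translator is trained. This is not the authors' released code: the generator `G` is a hypothetical trained module that maps any input image to the normal domain, and the function names and the channel-averaged scoring are illustrative assumptions; the paper's actual post-processing may differ.

```python
# Minimal sketch, assuming a trained one-directional translator G
# (hypothetical torch.nn.Module) that maps any image to the normal domain.
import torch

def anomaly_map(x: torch.Tensor, G: torch.nn.Module) -> torch.Tensor:
    """Per-pixel anomaly map for a batch of images x with shape (N, C, H, W)."""
    G.eval()
    with torch.no_grad():
        x_normal = G(x)                    # translated "healthy" version of x
    diff = (x - x_normal).abs()            # residual highlights anomalous regions
    return diff.mean(dim=1, keepdim=True)  # collapse channels into one heat map

def anomaly_score(x: torch.Tensor, G: torch.nn.Module) -> torch.Tensor:
    """Scalar image-level score per image (e.g., for AUC evaluation);
    mean pooling over the map is one simple, assumed choice."""
    return anomaly_map(x, G).flatten(start_dim=1).mean(dim=1)
```

For a normal input, G should act close to an identity mapping, so the residual stays near zero; for an anomalous input, the translation removes the anomaly and the residual concentrates on the affected region.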
