Stack of discriminative autoencoders for multiclass anomaly detection in endoscopy images

15 Mar 2021  ·  Mohammad Reza Mohebbian, Khan A. Wahid, Paul Babyn

Wireless Capsule Endoscopy (WCE) helps physicians examine the gastrointestinal (GI) tract noninvasively. Few studies address pathological assessment of endoscopy images as a multiclass classification problem; most are based on binary anomaly detection or aim to detect one specific type of anomaly. Multiclass anomaly detection is challenging, especially when the dataset is poorly sampled or imbalanced. Many available datasets in the endoscopy field, such as KID2, suffer from class imbalance, which makes it difficult to train a high-performance model, and increasing the number of classes makes classification harder still. We propose a multiclass classification algorithm that is extensible to any number of classes and can handle class imbalance. The proposed method uses multiple autoencoders, each trained on one class, to extract features that best discriminate that class from the others. The autoencoder loss function combines reconstruction, compactness, distance from other classes, and Kullback-Leibler (KL) divergence terms. The extracted features are clustered and then classified using an ensemble of support vector data descriptors. A total of 1,778 normal, 227 inflammation, 303 vascular, and 44 polyp images from the KID2 dataset are used for evaluation. The entire algorithm was run 5 times and achieved F1-scores of 96.3 ± 0.2% and 85.0 ± 0.4% on the test set for binary and multiclass anomaly detection, respectively. The impact of each step of the algorithm was investigated through ablation studies, and the results were compared with published work. The suggested approach is a competitive option for detecting multiclass anomalies in the GI field.
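The per-class discriminative autoencoder loss named in the abstract can be illustrated with a minimal sketch. The PyTorch snippet below combines the four stated terms (reconstruction, compactness of the target class in latent space, distance from other classes, and KL divergence); the specific term forms, the margin, and the weighting coefficients are illustrative assumptions, not the paper's exact formulation.

    # Minimal sketch (assumed formulation): one autoencoder is trained per class,
    # and its loss mixes reconstruction with three latent-space regularizers.
    import torch
    import torch.nn.functional as F

    def discriminative_ae_loss(x_recon, x, z_target, z_other,
                               lambda_cmp=1.0, lambda_dist=1.0, lambda_kl=0.1):
        # Reconstruction: how well the autoencoder rebuilds samples of its own class.
        recon = F.mse_loss(x_recon, x)

        # Compactness: pull latent codes of the target class toward their mean.
        center = z_target.mean(dim=0, keepdim=True)
        compact = ((z_target - center) ** 2).sum(dim=1).mean()

        # Distance from other classes: push their latent codes away from that center
        # (a hinge-style margin is an assumed design choice).
        margin = 1.0
        dist = F.relu(margin - ((z_other - center) ** 2).sum(dim=1)).mean()

        # KL divergence: regularize the latent batch statistics toward a unit Gaussian
        # (assumes a VAE-style view of the latent distribution).
        mu = z_target.mean(dim=0)
        logvar = z_target.var(dim=0).clamp_min(1e-6).log()
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

        return recon + lambda_cmp * compact + lambda_dist * dist + lambda_kl * kl

In the full pipeline described in the abstract, one such autoencoder would be trained per class, and the resulting latent features would then be clustered and scored by an ensemble of support vector data descriptors; the loss weights and training details above are placeholders for that sketch only.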
