Transformer based Models for Unsupervised Anomaly Segmentation in Brain MR Images

5 Jul 2022 · Ahmed Ghorbel, Ahmed Aldahdooh, Shadi Albarqouni, Wassim Hamidouche

The quality of patient care associated with diagnostic radiology is proportionate to a physician's workload. Segmentation is a fundamental limiting precursor to both diagnostic and therapeutic procedures. Advances in machine learning (ML) aim to increase diagnostic efficiency by replacing single-application tools with generalized algorithms. The goal of unsupervised anomaly detection (UAD) is to identify potentially anomalous regions unseen during training, where convolutional neural network (CNN) based autoencoders (AEs) and variational autoencoders (VAEs) are considered the de facto approach for reconstruction-based anomaly segmentation. However, the restricted receptive field of CNNs limits their ability to model global context: when anomalous regions cover large parts of the image, CNN-based AEs cannot form a semantic understanding of the image. Meanwhile, vision transformers (ViTs) have emerged as a competitive alternative to CNNs; they rely on a self-attention mechanism that relates image patches to one another. In this paper, we investigate the capabilities of Transformers in building AEs for the reconstruction-based UAD task, with the aim of reconstructing coherent and more realistic images. We focus on anomaly segmentation in brain magnetic resonance imaging (MRI) and present five Transformer-based models that achieve segmentation performance comparable or superior to state-of-the-art (SOTA) models. The source code is publicly available on GitHub: https://github.com/ahmedgh970/Transformers_Unsupervised_Anomaly_Segmentation.git.
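
The reconstruction-based UAD pipeline described in the abstract can be summarized in a short sketch: train an autoencoder on anomaly-free scans only, then flag pixels whose reconstruction error is high at test time. The toy PyTorch model below is a minimal illustration, not the authors' implementation; the patch size, embedding width, network depth, and residual threshold are all illustrative assumptions. It also shows how self-attention relates every patch to every other patch, giving the bottleneck a global receptive field that a plain CNN encoder lacks.

    # Minimal sketch (not the paper's code): toy Transformer autoencoder for
    # reconstruction-based anomaly segmentation. Sizes/threshold are assumptions.
    import torch
    import torch.nn as nn

    class TinyTransformerAE(nn.Module):
        def __init__(self, img_size=128, patch=16, dim=256, depth=4, heads=8):
            super().__init__()
            self.patch = patch
            n_patches = (img_size // patch) ** 2
            # Patch embedding: split the image into patches, project to `dim`.
            self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
            self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
            # Self-attention layers relate all patches to each other.
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            # Project each token back to pixel space, then reassemble patches.
            self.to_pixels = nn.Linear(dim, patch * patch)

        def forward(self, x):                       # x: (B, 1, H, W)
            B, _, H, W = x.shape
            tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
            tokens = self.encoder(tokens)           # (B, N, dim)
            patches = self.to_pixels(tokens)        # (B, N, patch*patch)
            patches = patches.transpose(1, 2)       # (B, patch*patch, N)
            # Fold the patch pixels back into the image grid.
            return nn.functional.fold(patches, (H, W),
                                      kernel_size=self.patch, stride=self.patch)

    # Anomaly map: per-pixel reconstruction error, thresholded into a mask.
    # (Untrained here; in practice, fit on healthy slices with an L1/L2 loss.)
    model = TinyTransformerAE().eval()
    slice_ = torch.rand(1, 1, 128, 128)             # stand-in for an MR slice
    with torch.no_grad():
        residual = (slice_ - model(slice_)).abs()
    mask = residual > 0.5                           # threshold is an assumption

In practice, the binarization threshold is typically tuned on a validation set, and the residual map is usually post-processed (e.g., median filtering) before segmentation metrics are computed.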
