Adversarial Distortion Learning for Medical Image Denoising

29 Apr 2022 · Morteza Ghahremani, Mohammad Khateri, Alejandra Sierra, Jussi Tohka

We present a novel adversarial distortion learning (ADL) method for denoising two- and three-dimensional (2D/3D) biomedical image data. The proposed ADL consists of two auto-encoders: a denoiser and a discriminator. The denoiser removes noise from the input data, and the discriminator compares the denoised result to its noise-free counterpart. This process is repeated until the discriminator cannot differentiate the denoised data from the reference. Both the denoiser and the discriminator are built upon a proposed auto-encoder called Efficient-Unet. Efficient-Unet has a lightweight architecture that uses residual blocks and a novel pyramidal approach in the backbone to efficiently extract and re-use feature maps. During training, textural information and contrast are controlled by two novel loss functions. The architecture of Efficient-Unet allows the proposed method to generalize to any kind of biomedical data. The 2D version of our network was trained on ImageNet and tested on biomedical datasets whose distribution is completely different from ImageNet, so no re-training is needed. Experimental results on magnetic resonance imaging (MRI), dermatoscopy, electron microscopy and X-ray datasets show that the proposed method achieved the best results on each benchmark. Our implementation and pre-trained models are available at https://github.com/mogvision/ADL.
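The denoiser/discriminator interplay described in the abstract follows the standard adversarial training pattern. The sketch below, assuming PyTorch, shows one such training round; the tiny convolutional nets here are placeholders for the paper's Efficient-Unet, and all names, sizes, and the L1 fidelity term are illustrative assumptions, not the authors' exact losses or architecture.

```python
# Minimal adversarial-denoising sketch (PyTorch assumed). The shallow conv
# stacks stand in for Efficient-Unet; sizes and losses are illustrative only.
import torch
import torch.nn as nn

def tiny_autoencoder(in_ch=1, out_ch=1):
    # Placeholder for Efficient-Unet: a shallow conv encoder-decoder.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

denoiser = tiny_autoencoder()                  # maps noisy -> denoised
discriminator = nn.Sequential(                 # scores clean vs. denoised
    tiny_autoencoder(1, 8), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1),
)
opt_g = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

clean = torch.rand(4, 1, 32, 32)               # stand-in noise-free references
noisy = clean + 0.1 * torch.randn_like(clean)  # synthetic Gaussian corruption

# One round: the discriminator learns to tell denoised outputs from clean
# references, while the denoiser learns to fool it and stay near the reference.
denoised = denoiser(noisy)
d_loss = bce(discriminator(clean), torch.ones(4, 1)) + \
         bce(discriminator(denoised.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(discriminator(denoised), torch.ones(4, 1)) + \
         nn.functional.l1_loss(denoised, clean)  # fidelity term (illustrative)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In the paper this loop repeats until the discriminator can no longer separate denoised data from the reference; the texture- and contrast-controlling losses mentioned above would replace the simple L1 term used here.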

Task                        Dataset           Model   Metric   Value   Global Rank
Grayscale Image Denoising   BSD68 (σ = 15)    ADL     PSNR     32.11   # 1
Grayscale Image Denoising   BSD68 (σ = 25)    ADL     PSNR     29.50   # 2
Grayscale Image Denoising   BSD68 (σ = 50)    ADL     PSNR     26.87   # 1
Color Image Denoising       CBSD68 (σ = 15)   ADL     PSNR     34.61   # 1
Color Image Denoising       CBSD68 (σ = 25)   ADL     PSNR     31.78   # 2
Color Image Denoising       CBSD68 (σ = 35)   ADL     PSNR     30.24   # 1
Color Image Denoising       CBSD68 (σ = 50)   ADL     PSNR     29.02   # 2
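The benchmark metric above, PSNR, is computed from the mean squared error between the reference and the denoised image as PSNR = 10 · log10(MAX² / MSE). The snippet below shows the standard definition in NumPy; it is a generic illustration, not the authors' evaluation script.

```python
# Standard PSNR definition (10 * log10(MAX^2 / MSE)) for 8-bit images.
import numpy as np

def psnr(reference, denoised, max_val=255.0):
    # Cast to float so the squared difference does not overflow uint8.
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 128, dtype=np.uint8)
out = ref.copy()
out[0, 0] = 129          # a single pixel off by one gray level
score = psnr(ref, out)   # ≈ 66.2 dB: tiny error, very high PSNR
```

Higher is better, which is why the σ = 15 rows (weakest noise) show the largest values in the table.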
