SAR Image Despeckling Using Continuous Attention Module

Journal paper, 2021 · Jaekyun Ko, Sanghwan Lee

Speckle removal is an unavoidable step in the restoration of synthetic aperture radar (SAR) images, and numerous methods have been proposed for enhancing SAR images over the past decades. In recent studies, convolutional neural networks (CNNs) have been widely applied to SAR image despeckling because of their versatility in representation learning. Nonetheless, a considerable amount of image texture is still lost when despeckling with simple CNN structures. To address this problem, an encoder–decoder architecture was previously proposed. Although this architecture extracts features at different scales and has been shown to yield state-of-the-art performance, it still learns representations locally and therefore misses the global information carried by the convolutional features. We therefore introduce a new method for SAR image despeckling (SAR-CAM), which improves the performance of an encoder–decoder CNN architecture by using various attention modules. Moreover, a context block is introduced at the minimum scale to capture multiscale information. The model is trained in a data-driven manner using the gradient descent algorithm with a combination of a modified despeckling gain and a total variation loss function. Experiments performed on simulated and real SAR data demonstrate that the proposed method achieves significant improvements over state-of-the-art methodologies.
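The page does not reproduce the paper's exact loss formulation. As a rough illustration of a combined despeckling-gain and total-variation objective, the sketch below assumes the common definition of despeckling gain, 10·log10 of the ratio between the noisy-to-clean and output-to-clean mean squared errors, and an anisotropic total variation penalty; the class name `DespeckleLoss`, the weight `tv_weight`, and the stabilizing `eps` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class DespeckleLoss(nn.Module):
    """Illustrative loss: negative despeckling gain plus weighted total variation.

    This is a sketch, not the paper's "modified despeckling gain"; the gain is
    assumed to be 10 * log10(MSE(noisy, clean) / MSE(output, clean)).
    """

    def __init__(self, tv_weight: float = 1e-4, eps: float = 1e-8):
        super().__init__()
        self.tv_weight = tv_weight
        self.eps = eps

    def forward(self, output: torch.Tensor, clean: torch.Tensor,
                noisy: torch.Tensor) -> torch.Tensor:
        mse_out = torch.mean((output - clean) ** 2)
        mse_noisy = torch.mean((noisy - clean) ** 2)
        # Negative despeckling gain: minimizing it maximizes the gain.
        dg_loss = -10.0 * torch.log10(mse_noisy / (mse_out + self.eps) + self.eps)
        # Anisotropic total variation: penalizes differences between
        # horizontally and vertically adjacent pixels to suppress residual speckle.
        tv = (output[..., :, 1:] - output[..., :, :-1]).abs().mean() + \
             (output[..., 1:, :] - output[..., :-1, :]).abs().mean()
        return dg_loss + self.tv_weight * tv
```

In use, the loss would be called as `DespeckleLoss()(despeckled, clean, noisy)` inside a standard gradient-descent training loop; the TV weight trades off speckle suppression against texture preservation.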
