Joint network for specular highlight detection and adversarial generation of specular-free images trained with polarimetric data

Specular highlights in images pose a significant challenge for image segmentation, object detection, and other image-based decision-making algorithms. However, most systems ignore this scenario and discard input images with specular highlights rather than mitigating them in a pre-processing stage. In this paper, we leverage deep neural networks and take advantage of the varying illumination information in polarimetric images to synthesize specular-free images. We propose a multi-domain Specular Highlight Mitigation Generative Adversarial Network (SHMGAN) with self-attention. SHMGAN consists of a single generator–discriminator pair trained simultaneously on polarimetric images. The proposed GAN uses a dynamically generated attention mask based on a specularity segmentation network, without requiring additional manual input. The network learns the illumination variation between the four polarimetric images and a pseudo-diffuse image. Once trained, SHMGAN generates specular-free images from a single RGB image as input, without requiring any additional external labels. The proposed network is trained and tested on publicly available datasets of real-world images. SHMGAN accurately identifies specularity-affected pixels and generates visually high-quality images with mitigated specular reflections. The generated images are realistic and exhibit less noise, distortion, and aberration than existing state-of-the-art methods for specular highlight removal.
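
As a rough illustration of the pipeline described in the abstract, the sketch below shows how a specularity segmentation network could supply an attention mask to a multi-domain, self-attention generator at test time. This is a minimal PyTorch sketch under assumed shapes and module names (SpecularitySegmenter, SelfAttention2d, Generator, six illustrative domains); it is not the authors' SHMGAN implementation.

```python
# Minimal, illustrative sketch of an SHMGAN-style test-time forward pass.
# All class names, layer sizes, and the six-domain layout are assumptions
# for illustration only; they are NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over spatial positions (assumed variant)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)      # (B, HW, C/8)
        k = self.k(x).flatten(2)                      # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        v = self.v(x).flatten(2)                      # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual attention


class SpecularitySegmenter(nn.Module):
    """Tiny stand-in for the specularity segmentation network that supplies
    the attention mask; a real network would be considerably deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel specularity probability
        )

    def forward(self, rgb):
        return self.net(rgb)                          # (B, 1, H, W) attention mask


class Generator(nn.Module):
    """Multi-domain generator: RGB image + specularity mask + target-domain code."""
    def __init__(self, n_domains=6, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3 + 1 + n_domains, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.attn = SelfAttention2d(ch)
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, rgb, mask, domain_onehot):
        b, _, h, w = rgb.shape
        dom = domain_onehot.view(b, -1, 1, 1).expand(-1, -1, h, w)
        x = torch.cat([rgb, mask, dom], dim=1)        # condition on mask + target domain
        return torch.tanh(self.dec(self.attn(self.enc(x))))


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 64, 64)                    # single RGB input at test time
    mask = SpecularitySegmenter()(rgb)                # dynamically generated attention mask
    target = F.one_hot(torch.tensor([5]), num_classes=6).float()  # assumed "specular-free" domain
    fake = Generator()(rgb, mask, target)
    print(fake.shape)                                 # torch.Size([1, 3, 64, 64])
```

In such a setup, the four polarimetric images, the pseudo-diffuse image, and the specular-free output would each be treated as a domain during adversarial training, while only the RGB image and the predicted mask are needed at inference; the exact conditioning scheme here is an assumption.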
