PanNet (Pansharpening Network)

Introduced by Yang et al. in PanNet: A Deep Network Architecture for Pan-Sharpening

We propose a deep network architecture for the pansharpening problem called PanNet. We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation. For spectral preservation, we add up-sampled multispectral images to the network output, which directly propagates the spectral information to the reconstructed image. To preserve the spatial structure, we train our network parameters in the high-pass filtering domain rather than the image domain. We show that the trained network generalizes well to images from different satellites without needing retraining. Experiments show significant improvement over state-of-the-art methods visually and in terms of standard quality metrics.

Source: PanNet: A Deep Network Architecture for Pan-Sharpening
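
The architecture described in the abstract reduces to two ideas: feed the network high-pass-filtered versions of the panchromatic and up-sampled multispectral inputs (spatial preservation), and add the up-sampled multispectral image back onto the network output through a skip connection (spectral preservation). Below is a minimal PyTorch-style sketch of that structure; the layer widths, depth, the 5x5 box-filter high-pass, and the use of plain convolutions in place of the paper's residual blocks are illustrative assumptions, not the published configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def high_pass(x):
    # Keep only high-frequency detail by subtracting a 5x5 box-blurred copy.
    # (Illustrative filter; the exact high-pass used in the paper may differ.)
    c = x.size(1)
    kernel = torch.ones(c, 1, 5, 5, device=x.device, dtype=x.dtype) / 25.0
    low = F.conv2d(x, kernel, padding=2, groups=c)
    return x - low

class PanNetSketch(nn.Module):
    # Hypothetical module; widths and depth chosen for illustration only.
    def __init__(self, ms_bands=4, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(ms_bands + 1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, ms_bands, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, pan, ms_lr):
        # Spectral preservation: up-sample the low-resolution MS image and add
        # it to the network output, so spectral content bypasses the CNN.
        ms_up = F.interpolate(ms_lr, size=pan.shape[-2:], mode='bicubic',
                              align_corners=False)
        # Spatial preservation: the CNN only sees high-pass (detail) inputs.
        details = torch.cat([high_pass(pan), high_pass(ms_up)], dim=1)
        return ms_up + self.body(details)

# Toy usage: a 4-band MS image at 1/4 the resolution of the PAN image.
pan = torch.rand(1, 1, 256, 256)
ms_lr = torch.rand(1, 4, 64, 64)
print(PanNetSketch()(pan, ms_lr).shape)  # torch.Size([1, 4, 256, 256])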

Papers



Tasks


Task Papers Share
Pansharpening 3 50.00%
Image Super-Resolution 1 16.67%
satellite image super-resolution 1 16.67%
Super-Resolution 1 16.67%

Components


Component Type
No components found

Categories

Convolutional Neural Networks