At each stage, we introduce a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight the local features.
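A minimal sketch of what per-pixel attention reweighting can look like: an attention map in [0, 1] is derived from the stage's intermediate restored image and used to rescale the local feature tensor element-wise. The function name, the mean-pooling stand-in for a learned 1x1 convolution, and all variable names are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def attention_reweight(features, restored_rgb):
    """Illustrative per-pixel feature reweighting (not the paper's API).

    features:     H x W x C feature tensor from the current stage
    restored_rgb: H x W x 3 intermediate restored image
    """
    # Stand-in for a learned 1x1 conv that maps the restored image to logits.
    logits = restored_rgb.mean(axis=2, keepdims=True)
    # Sigmoid gate: one attention weight in (0, 1) per pixel.
    attn = 1.0 / (1.0 + np.exp(-logits))
    # Reweight every feature channel at each pixel by its attention weight.
    return features * attn
```

In the supervised setting described above, such a map would be trained against the ground-truth clean image, so the gate learns to suppress rain-dominated locations.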
Ranked #1 on Image Denoising on SIDD
Removal of rain streaks from a single image is an extremely challenging problem, since rainy images often contain rain streaks of varying size, shape, direction, and density.
To fill this gap, in this paper we regard single-image deraining as a general image-enhancement problem and propose a model-free deraining method, i.e., EfficientDeRain, which is able to process a rainy image within 10 ms (around 6 ms on average), over 80 times faster than the state-of-the-art method (i.e., RCDNet), while achieving similar deraining effects.
In the field of multimedia, single image deraining is a fundamental pre-processing task that can greatly improve the visual quality of subsequent high-level tasks under rainy conditions.
In addition, to further enhance haze-removal performance, a patch-map-based DCP has been embedded into the network, and this module has been trained jointly with the atmospheric light generator, the patch map selection module, and the refinement module.
Specifically, based on the convolutional dictionary learning mechanism for representing rain, we propose a novel single image deraining model and utilize the proximal gradient descent technique to design an iterative algorithm containing only simple operators for solving the model.
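The "simple operators" of proximal gradient descent can be illustrated with a generic ISTA-style iteration for a sparse-coding model min_m 0.5||y - Dm||^2 + lam*||m||_1. This is a simplified matrix-form sketch under assumed names; the paper's actual algorithm operates on convolutional dictionaries representing rain streaks.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: the elementwise "simple operator".
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(D, y, lam=0.01, n_iter=500):
    """ISTA: proximal gradient descent for 0.5||y - D m||^2 + lam ||m||_1.

    Matrix form for illustration only; D plays the role of the (convolutional)
    rain dictionary and m the sparse representation of rain streaks.
    """
    # Step size 1/L, where L = ||D||_2^2 is the Lipschitz constant of the gradient.
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    m = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ m - y)                    # gradient of the data term
        m = soft_threshold(m - step * grad, step * lam)  # proximal step
    return m
```

Each iteration is just a matrix multiply, a subtraction, and an elementwise shrinkage, which is what makes such algorithms easy to unfold into network layers.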
In this work, we explore the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features in a unified framework, termed multi-scale progressive fusion network (MSPFN) for single image rain streak removal.
Ranked #2 on Single Image Deraining on Test100
Previous approaches have attempted to address this problem by leveraging some prior information to remove rain streaks from a single image.
Ranked #4 on Single Image Deraining on Test1200
Conventional patch-based haze removal algorithms (e.g., the Dark Channel Prior) usually perform dehazing with a fixed patch size.
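For reference, the Dark Channel Prior computes, for each pixel, the minimum intensity over all colour channels within a fixed local patch; a minimal NumPy sketch (loop-based, not optimized) is below. The fixed `patch_size` argument is exactly the assumption the patch-map approach above replaces with a per-pixel choice.

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel of an H x W x 3 image in [0, 1] (He et al.'s prior).

    For each pixel: min over RGB channels, then min over a local
    patch_size x patch_size window.
    """
    min_channel = image.min(axis=2)            # per-pixel min over channels
    h, w = min_channel.shape
    pad = patch_size // 2
    padded = np.pad(min_channel, pad, mode='edge')
    dark = np.empty_like(min_channel)
    for i in range(h):
        for j in range(w):
            # Minimum over the local window centred at (i, j).
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark
```

In haze-free regions the dark channel tends toward zero, which is what the prior exploits to estimate transmission.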
First, we propose a semi-automatic method that incorporates temporal priors and human supervision to generate a high-quality clean image from each input sequence of real rain images.