
Progressive Deep Video Dehazing without Explicit Alignment Estimation

Video dehazing involves two main tasks: aligning adjacent frames to the reference frame, and restoring the reference frame. Some papers adopt explicit approaches (e.g., Markov random fields, optical flow, deformable convolution, 3D convolution) to align neighboring frames with the reference frame in feature space or image space, and then apply various restoration methods to obtain the final dehazing results. In this paper, we propose a progressive alignment and restoration method for video dehazing. The alignment process aligns consecutive neighboring frames stage by stage without using optical flow estimation. The restoration process is not only carried out alongside the alignment process but also uses a refinement network to improve the dehazing performance of the whole network. The proposed architecture consists of four fusion networks and one refinement network. To reduce the number of parameters, the three fusion networks in the first fusion stage share the same parameters. Extensive experiments demonstrate that the proposed video dehazing method achieves outstanding performance against state-of-the-art methods.
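The abstract does not give implementation details, but the overall structure (a shared-weight first fusion stage, a second fusion stage, and a final refinement network) can be sketched. The following is a minimal PyTorch sketch under stated assumptions: the number of input frames (four), the pairwise fusion inputs, the layer widths, and the residual refinement are illustrative guesses, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class FusionNet(nn.Module):
    """Small convolutional block that fuses several frames (hypothetical design)."""

    def __init__(self, in_frames, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 * in_frames, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, frames):
        # Fuse frames by channel concatenation; no optical flow or other
        # explicit alignment is estimated.
        return self.body(torch.cat(frames, dim=1))


class ProgressiveDehazer(nn.Module):
    """Two-stage progressive fusion followed by refinement (illustrative sketch)."""

    def __init__(self, channels=64):
        super().__init__()
        # One set of weights reused for all three first-stage fusions
        # (parameter sharing, as described in the abstract).
        self.stage1 = FusionNet(in_frames=2, channels=channels)
        # Fourth fusion network: merges the three first-stage outputs.
        self.stage2 = FusionNet(in_frames=3, channels=channels)
        # Refinement network polishing the coarse dehazed reference frame.
        self.refine = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, f0, f1, f2, f3):
        # Stage 1: fuse adjacent frame pairs with shared weights.
        g01 = self.stage1([f0, f1])
        g12 = self.stage1([f1, f2])
        g23 = self.stage1([f2, f3])
        # Stage 2: fuse the intermediate results toward the reference frame.
        coarse = self.stage2([g01, g12, g23])
        # Residual refinement (assumption about how refinement is applied).
        return coarse + self.refine(coarse)


if __name__ == "__main__":
    frames = [torch.randn(1, 3, 128, 128) for _ in range(4)]
    out = ProgressiveDehazer()(*frames)
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```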
