Progressive Deep Video Dehazing without Explicit Alignment Estimation

16 Jul 2021  ·  Runde Li ·

Video dehazing involves two main tasks: aligning adjacent frames with the reference frame, and restoring the reference frame. Some papers adopt explicit approaches (e.g., Markov random fields, optical flow, deformable convolution, 3D convolution) to align neighboring frames with the reference frame in feature space or image space, and then apply various restoration methods to obtain the final dehazing results. In this paper, we propose a progressive alignment and restoration method for video dehazing. The alignment process aligns consecutive neighboring frames stage by stage without using optical flow estimation. The restoration process is carried out jointly with the alignment process and further employs a refinement network to improve the dehazing performance of the whole network. The proposed architecture consists of four fusion networks and one refinement network. To reduce the number of parameters, the three fusion networks in the first fusion stage share the same parameters. Extensive experiments demonstrate that the proposed video dehazing method performs favorably against state-of-the-art methods.
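The abstract describes a two-stage fusion scheme (three weight-shared fusion networks, then a fourth fusion network) followed by a refinement network. Below is a minimal PyTorch sketch of that progressive structure. It assumes a five-frame input window centred on the reference frame and uses plain convolutional blocks as placeholders for the fusion and refinement networks; the actual network internals, channel widths, and frame window size are not specified in this abstract and are assumptions here, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FusionNet(nn.Module):
    """Placeholder fusion block: concatenates its input frames and
    maps them back to a 3-channel image (internals are assumed)."""
    def __init__(self, in_frames, feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_frames * 3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, frames):
        # frames: list of (B, 3, H, W) tensors -> fused (B, 3, H, W) output
        return self.body(torch.cat(frames, dim=1))


class ProgressiveDehazer(nn.Module):
    """Sketch of the progressive alignment/fusion idea: three
    weight-shared fusion networks in stage 1, one fusion network in
    stage 2, and a refinement network on the coarse result."""
    def __init__(self):
        super().__init__()
        self.stage1 = FusionNet(in_frames=3)   # reused three times => shared parameters
        self.stage2 = FusionNet(in_frames=3)
        self.refine = FusionNet(in_frames=1)   # placeholder refinement network

    def forward(self, f1, f2, f3, f4, f5):
        # Stage 1: fuse overlapping triples of consecutive frames,
        # calling the same module three times (parameter sharing).
        a = self.stage1([f1, f2, f3])
        b = self.stage1([f2, f3, f4])
        c = self.stage1([f3, f4, f5])
        # Stage 2: fuse the three intermediate results toward the reference frame f3.
        coarse = self.stage2([a, b, c])
        # Refinement: improve the coarse dehazed reference frame.
        return self.refine([coarse])


if __name__ == "__main__":
    model = ProgressiveDehazer()
    frames = [torch.randn(1, 3, 128, 128) for _ in range(5)]
    out = model(*frames)  # dehazed reference frame, shape (1, 3, 128, 128)
    print(out.shape)
```

Because no explicit alignment module (optical flow, deformable convolution, etc.) appears, alignment here is implicit in how each fusion stage merges progressively larger temporal neighborhoods around the reference frame, which is the behaviour the abstract attributes to the method.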

