MFQE 2.0: A New Approach for Multi-frame Quality Enhancement on Compressed Video

26 Feb 2019 · Qunliang Xing, Zhenyu Guan, Mai Xu, Ren Yang, Tie Liu, Zulin Wang

The past few years have witnessed great success in applying deep learning to enhance the quality of compressed images and video. Existing approaches mainly focus on enhancing the quality of a single frame, ignoring the similarity between consecutive frames. Since, as investigated in this paper, quality fluctuates heavily across compressed video frames, frame similarity can be exploited to enhance low-quality frames using their neighboring high-quality frames. We refer to this task as Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach for compressed video, as the first attempt in this direction. In our approach, we first develop a Bidirectional Long Short-Term Memory (BiLSTM) based detector to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the quality of compressed video, taking a non-PQF and its two nearest PQFs as input. In MF-CNN, motion between the non-PQF and the PQFs is compensated by a motion compensation subnet. A quality enhancement subnet then fuses the non-PQF with the compensated PQFs and reduces the compression artifacts of the non-PQF. PQFs themselves are enhanced in the same way. Finally, experiments validate the effectiveness and generalization ability of our MFQE approach in advancing the state of the art in quality enhancement of compressed video. The code is available at https://github.com/RyanXingQL/MFQEv2.0.git.
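The pipeline described in the abstract (a BiLSTM-based PQF detector, followed by an MF-CNN that motion-compensates the two nearest PQFs and fuses them with the non-PQF) can be illustrated with a minimal sketch. All class names, layer widths, the per-frame feature dimension, and the flow-based warping below are illustrative assumptions rather than the released implementation; see the linked repository for the authors' code.

```python
# Minimal, hypothetical sketch of the MFQE 2.0 pipeline: a BiLSTM PQF detector
# plus an MF-CNN with a motion-compensation (MC) subnet and a quality-enhancement
# (QE) subnet. Architecture details here are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PQFDetector(nn.Module):
    """BiLSTM over per-frame quality features; outputs a PQF probability per frame."""
    def __init__(self, feat_dim=36, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, feats):                      # feats: (B, T, feat_dim)
        h, _ = self.lstm(feats)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # (B, T) PQF probabilities


def warp(frame, flow):
    """Backward-warp `frame` (B,1,H,W) with a dense flow field (B,2,H,W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)    # (2,H,W), (x,y)
    coords = grid.unsqueeze(0) + flow                                # (B,2,H,W)
    # Normalise sampling coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_n = torch.stack((coords_x, coords_y), dim=-1)               # (B,H,W,2)
    return F.grid_sample(frame, grid_n, align_corners=True)


class MFCNN(nn.Module):
    """MC subnet + QE subnet (heavily simplified)."""
    def __init__(self):
        super().__init__()
        # MC subnet: estimates a flow field from a (PQF, non-PQF) pair.
        self.mc = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )
        # QE subnet: fuses the non-PQF with the two compensated PQFs, predicts a residual.
        self.qe = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, non_pqf, pqf_prev, pqf_next):    # each (B,1,H,W), luma channel
        comp_prev = warp(pqf_prev, self.mc(torch.cat((pqf_prev, non_pqf), 1)))
        comp_next = warp(pqf_next, self.mc(torch.cat((pqf_next, non_pqf), 1)))
        residual = self.qe(torch.cat((non_pqf, comp_prev, comp_next), 1))
        return non_pqf + residual                       # enhanced non-PQF


if __name__ == "__main__":
    probs = PQFDetector()(torch.rand(1, 3, 36))          # PQF probability per frame
    frames = torch.rand(1, 3, 1, 64, 64)                 # toy clip: 3 frames
    enhanced = MFCNN()(frames[:, 1], frames[:, 0], frames[:, 2])
    print(probs.shape, enhanced.shape)                   # (1, 3) and (1, 1, 64, 64)
```

In practice the detector would run over the whole sequence first, and each non-PQF would then be paired with its nearest preceding and succeeding PQFs before being passed to the MF-CNN.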

Datasets


Introduced in the Paper: MFQE v2

Results from the Paper


Task               Dataset   Model     Metric Name        Metric Value   Global Rank
Video Enhancement  MFQE v2   MFQE 2.0  Incremental PSNR   0.56           #5
Video Enhancement  MFQE v2   MFQE 2.0  Parameters (M)     0.25           #1

Methods


No methods listed for this paper.