A Coding Framework and Benchmark towards Compressed Video Understanding

Most video understanding methods are trained on high-quality videos. In real-world scenarios, however, videos are compressed before transmission and then decompressed for understanding, and the decompressed videos may have lost information critical to downstream tasks. To address this issue, we propose the first coding framework for compressed video understanding, in which an additional learnable analytic bitstream is transmitted alongside the original video bitstream. With a dedicated self-supervised optimization target and dynamic network architectures, this new stream substantially boosts downstream tasks at a small bit cost. After only one-time training, our framework can be deployed for multiple downstream tasks. It also enjoys the best of both worlds: (1) the high efficiency of industrial video codecs and (2) the flexible coding capability of neural networks (NNs). Finally, we build a rigorous benchmark for compressed video understanding covering three popular tasks, seven large-scale datasets, and four compression levels. The proposed Understanding-oriented Video Coding framework (UVC) consistently and significantly outperforms the baseline industrial codec.
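The abstract describes a dual-bitstream design: a standard video bitstream plus a small learned analytic stream that a decoder combines for downstream understanding. The following is a minimal conceptual sketch of that idea only; all names (`encode`, `decode_for_understanding`, the toy quantization "codec") are hypothetical stand-ins, not the paper's actual codec or networks.

```python
# Conceptual sketch of a dual-bitstream pipeline: an "industrial codec"
# stream carries the coarse video, and a small side "analytic" stream
# carries detail the codec discarded that downstream tasks may need.
# Both the codec and the analytic encoder are mocked for illustration.

from dataclasses import dataclass


@dataclass
class Bitstreams:
    video: bytes      # main stream from the (mocked) industrial codec
    analytic: bytes   # small learned side stream (mocked here)


def encode(frame: list[int]) -> Bitstreams:
    # Mock "industrial codec": coarse 8-level quantization loses detail.
    video = bytes(v // 32 for v in frame)
    # Mock "analytic stream": keep the residual the codec discarded,
    # standing in for task-relevant features a learned encoder would select.
    analytic = bytes(v % 32 for v in frame)
    return Bitstreams(video, analytic)


def decode_for_understanding(bs: Bitstreams) -> list[int]:
    # A downstream model would consume the decompressed video enriched by
    # the analytic stream; in this toy case the two streams combine to
    # recover the frame exactly.
    return [q * 32 + r for q, r in zip(bs.video, bs.analytic)]


frame = [7, 64, 100, 255]
bs = encode(frame)
assert decode_for_understanding(bs) == frame
```

The point of the sketch is the division of labor: the main stream stays compatible with standard codecs, while the side stream adds only the extra bits needed by understanding tasks, mirroring the "small bit cost" claim.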
