Task-Aware Quantization Network for JPEG Image Compression

ECCV 2020 · Jinyoung Choi, Bohyung Han

We propose to learn a deep neural network for JPEG image compression that predicts image-specific, optimized quantization tables fully compatible with the standard JPEG encoder and decoder. Moreover, our approach can learn task-specific quantization tables in a principled way by adjusting the objective function of the network. The main challenge in realizing this idea is that the encoder contains non-differentiable components, such as run-length encoding and Huffman coding, and it is not straightforward to predict the probability distribution of the quantized image representations. We address these issues by learning a differentiable loss function that approximates bitrates using simple network blocks: two MLPs and an LSTM. We evaluate the proposed algorithm with multiple task-specific losses, two for semantic image understanding and two for conventional image compression, and demonstrate the effectiveness of our approach on the individual tasks.
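
The abstract does not specify the exact architecture, so the following is only a minimal PyTorch sketch of the two ingredients it names: a network that predicts an image-specific 8x8 quantization table, and a differentiable bitrate proxy built from two MLPs and an LSTM. All layer sizes, the CNN backbone, and the module names (`QuantTablePredictor`, `BitrateEstimator`) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class QuantTablePredictor(nn.Module):
    """Predicts an image-specific 8x8 JPEG quantization table.

    Hypothetical design: a small CNN pooled to a global feature,
    followed by a linear head producing 64 values mapped into the
    valid JPEG quantization range [1, 255].
    """
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(feat_dim, 64)

    def forward(self, image):
        f = self.backbone(image).flatten(1)             # (B, feat_dim)
        q = 1.0 + 254.0 * torch.sigmoid(self.head(f))   # values in (1, 255)
        return q.view(-1, 8, 8)


class BitrateEstimator(nn.Module):
    """Differentiable proxy for the JPEG bitrate of quantized DCT blocks.

    Loosely follows the abstract's description (two MLPs and an LSTM);
    the exact wiring is an assumption. Each 8x8 block of quantized
    coefficients is embedded by an MLP, the sequence of block embeddings
    is summarized by an LSTM, and a second MLP regresses a scalar
    bit estimate per image.
    """
    def __init__(self, hidden=128):
        super().__init__()
        self.block_mlp = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.rate_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, quantized_blocks):
        # quantized_blocks: (B, N, 64) quantized DCT coefficients per block
        e = self.block_mlp(quantized_blocks)      # (B, N, hidden)
        _, (h, _) = self.lstm(e)                  # h: (1, B, hidden)
        return self.rate_mlp(h[-1]).squeeze(-1)   # (B,) estimated bits
```

In a training loop, the estimated bitrate would be added to the task loss, e.g. `loss = task_loss + lam * BitrateEstimator()(q_blocks)`, so the table predictor is pushed toward tables that trade task accuracy against rate. The rounding step inside quantization is itself non-differentiable, so training would additionally need a soft surrogate such as straight-through estimation or additive uniform noise; this detail is an assumption, as the abstract does not state which relaxation is used.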
