no code implementations • 25 Nov 2021 • Fei Yang, Yaxing Wang, Luis Herranz, Yongmei Cheng, Mikhail Mozerov
Thus, we further propose a unified framework that supports both translation and autoencoding in a single codec.
1 code implementation • 11 Dec 2019 • Fei Yang, Luis Herranz, Joost Van de Weijer, José A. Iglesias Guitián, Antonio López, Mikhail Mozerov
Addressing these limitations, we formulate the problem of variable rate-distortion optimization for deep image compression, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific rate-distortion tradeoff via a modulation network.
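The idea can be sketched in a few lines: a small modulation network maps the rate-distortion tradeoff (a scalar lambda) to per-channel scaling factors that modulate the shared latent before quantization and demodulate it before decoding. The toy dimensions, the linear stand-ins for the convolutional encoder/decoder, and the fixed random MLP weights below are all hypothetical illustration choices — in the paper the whole system is learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 8  # latent channels (toy size, not from the paper)

# Linear stand-ins for the shared convolutional encoder and decoder.
W_enc = rng.normal(size=(C, C)) / np.sqrt(C)
W_dec = rng.normal(size=(C, C)) / np.sqrt(C)

# Tiny modulation MLP: maps lambda to positive per-channel
# modulation (encoder side) and demodulation (decoder side) scales.
H = 4
w1, w2 = rng.normal(size=(1, H)), rng.normal(size=(H, C))
v1, v2 = rng.normal(size=(1, H)), rng.normal(size=(H, C))

def softplus(x):
    return np.log1p(np.exp(x))

def mod_scales(lmbda):
    """Per-channel scales m(lambda), d(lambda) for one tradeoff point."""
    x = np.array([[lmbda]])
    m = softplus(np.tanh(x @ w1) @ w2).ravel()  # modulation
    d = softplus(np.tanh(x @ v1) @ v2).ravel()  # demodulation
    return m, d

def mae_forward(x, lmbda):
    """One pass through the shared autoencoder at a chosen tradeoff."""
    m, d = mod_scales(lmbda)
    y = (x @ W_enc) * m      # modulated latent representation
    y_hat = np.round(y)      # scalar quantization
    return (y_hat * d) @ W_dec

x = rng.normal(size=(1, C))
for lam in (0.1, 1.0, 10.0):
    print(lam, float(np.abs(x - mae_forward(x, lam)).mean()))
```

Because only the small modulation MLP depends on lambda, a single set of autoencoder weights serves every rate-distortion operating point, which is the memory saving over training one autoencoder per tradeoff.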
no code implementations • 30 Aug 2019 • Javad Zolfaghari Bengar, Abel Gonzalez-Garcia, Gabriel Villalonga, Bogdan Raducanu, Hamed H. Aghdam, Mikhail Mozerov, Antonio M. Lopez, Joost Van de Weijer
Our active learning criterion is based on the estimated number of errors in terms of false positives and false negatives.
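One simple proxy for such a criterion — a hypothetical stand-in, not the paper's exact estimator — treats each detection with confidence p as a false positive with probability 1 - p, adds a constant per-image prior for false negatives, and labels the images with the highest expected error count first:

```python
def estimated_errors(confidences, fn_prior=0.5):
    """Expected detection errors in one image.

    Hypothetical proxy: each detection with confidence p contributes
    1 - p expected false positives; missed objects (false negatives)
    are approximated by a constant per-image prior.
    """
    est_fp = sum(1.0 - p for p in confidences)
    return est_fp + fn_prior

def select_for_labeling(pool, budget):
    """Rank unlabeled images by estimated errors; pick the worst offenders."""
    ranked = sorted(pool.items(),
                    key=lambda kv: estimated_errors(kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:budget]]

# Toy unlabeled pool: image name -> detection confidences.
pool = {
    "img_a": [0.95, 0.91],        # confident detections, few expected errors
    "img_b": [0.55, 0.48, 0.60],  # uncertain detections, many expected errors
    "img_c": [0.99],
}
print(select_for_labeling(pool, budget=1))  # prints ['img_b']
```

Ranking by expected errors rather than raw uncertainty ties the selection directly to the quantity the detector is evaluated on.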