no code implementations • 30 Oct 2023 • Szilvia Ujváry, Gergely Flamich, Vincent Fortuin, José Miguel Hernández-Lobato
An important yet underexplored question in the PAC-Bayes literature is how much tightness we lose by restricting the posterior family to factorized Gaussian distributions when optimizing a PAC-Bayes bound.
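For context, a minimal sketch (not the paper's exact bound) of how such a bound is typically evaluated: a McAllester-style PAC-Bayes bound combines the empirical risk with the KL divergence between the posterior $Q$ and prior $P$, and restricting $Q$ to factorized Gaussians makes that KL term available in closed form.

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(Q || P) for factorized (diagonal) Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def mcallester_bound(emp_risk, kl, n, delta=0.05):
    """Illustrative McAllester-style PAC-Bayes bound on the expected risk under Q."""
    return emp_risk + np.sqrt((kl + np.log(2.0 * np.sqrt(n) / delta)) / (2.0 * n))
```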
1 code implementation • 29 Sep 2023 • Jiajun He, Gergely Flamich, Zongyu Guo, José Miguel Hernández-Lobato
COMpression with Bayesian Implicit NEural Representations (COMBINER) is a recent data compression method that addresses a key inefficiency of previous Implicit Neural Representation (INR)-based approaches: it avoids quantization and enables direct optimization of the rate-distortion performance.
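As a rough sketch of the kind of objective such methods optimize (notation and details here are illustrative, not taken from the paper): the INR weights get a variational posterior $Q_{\mathbf{w}}$, and the loss trades distortion off against the coding cost $D_{\mathrm{KL}}[Q_{\mathbf{w}} \Vert P_{\mathbf{w}}]$, with no quantization step.

```python
import torch

def rate_distortion_loss(recon, target, kl_qw_pw, beta):
    """Illustrative objective: distortion + beta * KL(Q_w || P_w).

    `recon`/`target` are signal values (e.g. RGB), `kl_qw_pw` is the KL between
    the variational weight posterior and the prior, and `beta` trades off the
    rate (coding cost) against the distortion.
    """
    distortion = torch.mean((recon - target) ** 2)
    return distortion + beta * kl_qw_pw
```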
no code implementations • 15 Jul 2023 • Jihao Andreas Lin, Gergely Flamich, José Miguel Hernández-Lobato
To achieve the desired compression rate, $D_{\mathrm{KL}}[Q_{\mathbf{w}} \Vert P_{\mathbf{w}}]$ must be constrained, which requires a computationally expensive annealing procedure under the conventional mean-variance (Mean-Var) parameterization for $Q_{\mathbf{w}}$.
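Under the Mean-Var parameterization, $Q_{\mathbf{w}}$ is a diagonal Gaussian $\mathcal{N}(\boldsymbol{\mu}, \mathrm{diag}(\boldsymbol{\sigma}^2))$, so the KL term has a closed form; hitting a target rate then requires tuning the KL penalty weight, for example with a simple adjustment loop. The sketch below is a generic illustration of that idea, not the paper's annealing procedure.

```python
import torch

def kl_mean_var(mu_q, var_q, mu_p, var_p):
    """KL(Q_w || P_w) for diagonal Gaussians under the Mean-Var parameterization."""
    return 0.5 * torch.sum(
        torch.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def adjust_beta(beta, kl, target_kl, step=1.05):
    """Generic multiplicative update pushing the KL towards a target coding budget."""
    return beta * step if kl > target_kl else beta / step
```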
1 code implementation • NeurIPS 2023 • Zongyu Guo, Gergely Flamich, Jiajun He, Zhibo Chen, José Miguel Hernández-Lobato
Many common types of data can be represented as functions that map coordinates to signal values, such as pixel locations to RGB values in the case of an image.
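For instance, an implicit neural representation of an image is just a small network mapping $(x, y)$ coordinates to RGB values; a minimal, hypothetical PyTorch sketch:

```python
import torch
import torch.nn as nn

class ImageINR(nn.Module):
    """Minimal implicit neural representation: (x, y) coordinates -> RGB values."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB values in [0, 1]
        )

    def forward(self, coords):
        # coords: (N, 2) pixel locations, e.g. normalized to [0, 1]^2
        return self.net(coords)
```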
1 code implementation • NeurIPS 2020 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato
Variational Autoencoders (VAEs) have seen widespread use in learned image compression.
no code implementations • 25 Sep 2019 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato
Standard compression algorithms work by using an encoder to map an image to a discrete code, from which a decoder can reconstruct the original image.
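As a toy illustration of that encoder/decoder pattern (not the paper's method), an image can be mapped to a discrete code simply by quantizing its pixel intensities, with the decoder inverting the quantization:

```python
import numpy as np

def encode(image, num_levels=16):
    """Toy encoder: map pixel intensities in [0, 1] to a discrete code."""
    return np.clip(np.round(image * (num_levels - 1)), 0, num_levels - 1).astype(np.int64)

def decode(code, num_levels=16):
    """Toy decoder: reconstruct an approximate image from the discrete code."""
    return code.astype(np.float64) / (num_levels - 1)
```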