no code implementations • 18 Jan 2024 • Ioana Bica, Anastasija Ilić, Matthias Bauer, Goker Erdogan, Matko Bošnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrović
We introduce SPARse Fine-grained Contrastive Alignment (SPARC), a simple method for pretraining more fine-grained multimodal representations from image-text pairs.
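SPARC's fine-grained token-to-patch objective is not reproduced here; as context, a minimal sketch of the standard global contrastive alignment loss (symmetric InfoNCE over image-text pairs) that such fine-grained methods build on. All names and shapes are hypothetical:

```python
import numpy as np
from scipy.special import logsumexp

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired, L2-normalised
    image/text embeddings of shape (batch, dim)."""
    logits = img_emb @ txt_emb.T / temperature           # (batch, batch) similarities
    idx = np.arange(len(logits))                         # true pairs on the diagonal
    i2t = logits[idx, idx] - logsumexp(logits, axis=1)   # image -> text
    t2i = logits[idx, idx] - logsumexp(logits, axis=0)   # text -> image
    return -0.5 * (i2t.mean() + t2i.mean())

rng = np.random.default_rng(0)
v, t = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
v /= np.linalg.norm(v, axis=1, keepdims=True)
t /= np.linalg.norm(t, axis=1, keepdims=True)
print(contrastive_alignment_loss(v, t))
```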
no code implementations • 5 Dec 2023 • Hyunjik Kim, Matthias Bauer, Lucas Theis, Jonathan Richard Schwarz, Emilien Dupont
On the UVG video benchmark, we match the rate-distortion (RD) performance of the Video Compression Transformer (Mentzer et al.), a well-established neural video codec, with fewer than 5k MACs/pixel for decoding.
no code implementations • 6 Feb 2023 • Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, Hyunjik Kim
Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities.
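A minimal sketch of the idea, not the paper's method: a neural field is just a small coordinate MLP whose weights, after fitting, encode a signal. Here a SIREN-style one-hidden-layer network with sine activations is fit to a 1-D signal by plain gradient descent; all sizes and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 256)[:, None]           # 1-D coordinates
target = np.sin(4 * np.pi * x)                 # the signal to represent

H = 64                                         # hidden width
W1 = 6.0 * rng.normal(size=(1, H)); b1 = rng.normal(size=H)
W2 = rng.normal(size=(H, 1)) / np.sqrt(H); b2 = np.zeros(1)

for step in range(3000):                       # fit by gradient descent
    h = np.sin(x @ W1 + b1)                    # (256, H) sine features
    y = h @ W2 + b2                            # predicted signal values
    gy = 2 * (y - target) / len(x)             # grad of mean squared error
    gW2, gb2 = h.T @ gy, gy.sum(0)
    gh = (gy @ W2.T) * np.cos(x @ W1 + b1)     # backprop through sin
    gW1, gb1 = x.T @ gh, gh.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.05 * g

print(np.mean((y - target) ** 2))              # shrinks as the weights encode the signal
```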
no code implementations • 7 Mar 2022 • Aleksander Botev, Matthias Bauer, Soham De
Data augmentation is used in machine learning to make the classifier invariant to label-preserving transformations.
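A minimal illustration of that premise, with hypothetical shapes and horizontal flips as the label-preserving transformation:

```python
import numpy as np

def augment(images, labels, rng):
    """Label-preserving augmentation: each (H, W) image in the batch is
    randomly flipped left-right while its label is unchanged, pushing the
    classifier towards horizontal-flip invariance."""
    flip = rng.random(len(images)) < 0.5
    out = images.copy()
    out[flip] = out[flip, :, ::-1]             # flip the width axis
    return out, labels
```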
3 code implementations • NeurIPS 2021 • Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.
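The paper's library applies this at deep-network scale; as a toy illustration of the underlying construction (a Laplace approximation), here is the full recipe for 2-parameter logistic regression in NumPy, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
w_true = np.array([2.0, -1.0])
y = (rng.random(100) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
prior_prec = 1.0                               # Gaussian prior N(0, I / prior_prec)

# 1) MAP estimate by gradient descent on the negative log-joint
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.01 * (X.T @ (p - y) + prior_prec * w)

# 2) Laplace: Gaussian posterior N(w_MAP, H^{-1}), H the Hessian at the MAP
p = 1 / (1 + np.exp(-X @ w))
H = X.T @ (X * (p * (1 - p))[:, None]) + prior_prec * np.eye(2)
cov = np.linalg.inv(H)

# 3) predictive uncertainty by sampling weights from the Laplace posterior
ws = rng.multivariate_normal(w, cov, size=1000)
x_new = np.array([0.5, 0.5])
print((1 / (1 + np.exp(-ws @ x_new))).mean())  # posterior-averaged probability
```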
1 code implementation • 11 Apr 2021 • Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
Marginal-likelihood based model-selection, even though promising, is rarely used in deep learning due to estimation difficulties.
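The paper makes this estimation tractable for deep networks; the quantity itself is exact in the linear-Gaussian case, which gives a compact illustration of evidence-based model selection (the hyperparameters alpha and beta below are assumptions):

```python
import numpy as np

def log_evidence(X, y, alpha=1.0, beta=25.0):
    """Exact log marginal likelihood of Bayesian linear regression with
    weights ~ N(0, I/alpha) and noise precision beta, so that
    y ~ N(0, X X^T / alpha + I / beta)."""
    N = len(y)
    C = X @ X.T / alpha + np.eye(N) / beta
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(2 * x) + 0.2 * rng.normal(size=40)   # data from a smooth function

# the evidence penalises complexity that merely improves the fit,
# so it peaks at a moderate polynomial degree rather than the largest
for degree in range(1, 8):
    X = np.vander(x, degree + 1, increasing=True)
    print(degree, log_evidence(X, y))
```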
no code implementations • 26 Jan 2021 • Matthias Bauer, Andriy Mnih
Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders.
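A minimal sketch of the pathwise (reparameterized) estimator the abstract refers to: writing z = mu + sigma * eps makes the sample a deterministic, differentiable function of the parameters, so gradients of an expectation can be estimated by differentiating through the sample. The toy objective below has a known answer for checking:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda z: z ** 2                       # toy objective; E[f(z)] = mu^2 + sigma^2
mu, log_sigma = 0.5, np.log(0.3)

eps = rng.normal(size=10_000)              # noise drawn independently of parameters
sigma = np.exp(log_sigma)
z = mu + sigma * eps                       # reparameterization: z ~ N(mu, sigma^2)

df_dz = 2 * z                              # f'(z) for this toy f
grad_mu = df_dz.mean()                     # estimates d/dmu E[f] = 2*mu = 1.0
grad_log_sigma = (df_dz * sigma * eps).mean()  # d/dlog(sigma) E[f] = 2*sigma^2 = 0.18
print(grad_mu, grad_log_sigma)
```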
1 code implementation • 19 Aug 2020 • Alexander Immer, Maciej Korzepa, Matthias Bauer
The generalized Gauss-Newton (GGN) approximation is often used to make practical Bayesian deep learning approaches scalable by replacing a second-order derivative with a product of first-order derivatives.
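Concretely, the GGN is J^T H_L J, where J is the Jacobian of the model output with respect to the parameters and H_L the Hessian of the loss with respect to the output; it drops the term involving the model's own second derivative. A toy two-parameter check of this identity (not the paper's method):

```python
import numpy as np

# toy model f(theta) = tanh(theta0 * x + theta1) with squared loss 0.5*(f - t)^2
x, t = 0.7, 0.2
theta = np.array([0.5, -0.3])

pre = theta[0] * x + theta[1]
s = np.tanh(pre)                       # model output f(theta)
J = (1 - s**2) * np.array([x, 1.0])    # first-order: df/dtheta

# GGN = J^T (d^2 loss / d f^2) J; for squared loss the middle factor is 1,
# so only first-order derivatives of the model are needed
GGN = np.outer(J, J)

# the exact Hessian adds a residual-weighted second-derivative term
r = s - t                                                 # residual f - t
d2f = -2 * s * (1 - s**2) * np.outer([x, 1.0], [x, 1.0])  # d^2 f / dtheta^2
H = GGN + r * d2f

print(GGN)
print(H)    # GGN equals H minus the residual-weighted curvature term
```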
1 code implementation • 5 Jun 2019 • Frederik Harder, Matthias Bauer, Mijung Park
Interpretable predictions, where it is clear why a machine learning model has made a particular decision, can compromise privacy by revealing the characteristics of individual data points.
no code implementations • 26 Oct 2018 • Matthias Bauer, Andriy Mnih
We propose Learned Accept/Reject Sampling (LARS), a method for constructing richer priors using rejection sampling with a learned acceptance function.
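A sketch of the mechanism, with a fixed acceptance function standing in for the one LARS learns: accept/reject against a simple proposal yields samples from a density proportional to proposal times acceptance probability, i.e. a richer, reshaped prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_prob(z):
    """Stand-in for the learned acceptance function a(z) in [0, 1];
    LARS trains this, here it is fixed for illustration."""
    return np.tanh(2 * np.abs(z))      # suppress mass near z = 0

def lars_sample(n):
    """Draw from p(z) proportional to N(z; 0, 1) * a(z) by accept/reject
    against a standard-normal proposal."""
    out = np.empty(0)
    while len(out) < n:
        z = rng.normal(size=n)
        out = np.concatenate([out, z[rng.random(n) < accept_prob(z)]])
    return out[:n]

samples = lars_sample(10_000)          # bimodal: the proposal reshaped by a(z)
```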
no code implementations • NeurIPS 2018 • Mark van der Wilk, Matthias Bauer, ST John, James Hensman
Generalising well in supervised learning tasks relies on correctly extrapolating the training data to a large region of the input space.
1 code implementation • ICLR 2019 • Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, Richard E. Turner
We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass.
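A schematic of such an amortization network, not VERSA's actual architecture: per-example encoding followed by permutation-invariant pooling handles any number of shots, and two heads emit the mean and log-variance of a Gaussian over task-specific weights. All weights and sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 8                                   # feature dim, amortizer width

# hypothetical amortizer weights (learned end-to-end in practice)
W_enc = rng.normal(0, 0.1, (D, H))
W_mu, W_logvar = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, D))

def amortize(support_feats):
    """Map one class's support features (any number of shots, shape (k, D))
    to a distribution over that class's weights in one forward pass."""
    h = np.tanh(support_feats @ W_enc)         # per-example encoding
    pooled = h.mean(axis=0)                    # permutation-invariant over shots
    return pooled @ W_mu, pooled @ W_logvar    # mean, log-variance of weights

# e.g. a 5-way, 3-shot task: one weight distribution per class
task = [rng.normal(size=(3, D)) for _ in range(5)]
dists = [amortize(c) for c in task]
```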
1 code implementation • 4 May 2018 • Matthias Bauer, Valentin Volchkov, Michael Hirsch, Bernhard Schölkopf
The modulation transfer function (MTF) is widely used to characterise the performance of optical systems.
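For concreteness, the MTF is the normalised modulus of the Fourier transform of the system's spread function; a minimal 1-D sketch with a Gaussian blur standing in for a measured line-spread function (units hypothetical):

```python
import numpy as np

x = np.linspace(-1, 1, 1024)                 # position in mm (hypothetical)
sigma = 0.05
lsf = np.exp(-x**2 / (2 * sigma**2))         # Gaussian line-spread function

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                # normalise so MTF(0) = 1
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])   # spatial frequency, cycles/mm

print(np.interp(5.0, freqs, mtf))            # contrast transfer at 5 cycles/mm
```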
no code implementations • ICLR 2018 • Matthias Bauer, Mateo Rojas-Carulla, Jakub Bartłomiej Świątkowski, Bernhard Schölkopf, Richard E. Turner
The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples.
no code implementations • NeurIPS 2016 • Matthias Bauer, Mark van der Wilk, Carl Edward Rasmussen
Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets.
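The sparse approximations the paper analyses build on inducing points; a minimal sketch of the Nystrom building block, which replaces the full n-by-n kernel matrix with a low-rank surrogate at O(n m^2) cost (inducing locations Z are a modelling choice):

```python
import numpy as np

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between row-wise input sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (500, 1))            # n = 500 training inputs
Z = np.linspace(-2, 2, 15)[:, None]         # m = 15 inducing inputs

Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))     # jitter for numerical stability
Knm = rbf(X, Z)

# Nystrom approximation: K_nn ~ K_nm K_mm^{-1} K_mn
Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
print(np.abs(rbf(X, X) - Qnn).max())        # small when Z covers the input space
```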