no code implementations • 17 Mar 2022 • Charlie Nash, João Carreira, Jacob Walker, Iain Barr, Andrew Jaegle, Mateusz Malinowski, Peter Battaglia
We present a general-purpose framework for image modelling and vision tasks based on probabilistic frame prediction.
3 code implementations • 15 Feb 2022 • Curtis Hawthorne, Andrew Jaegle, Cătălina Cangea, Sebastian Borgeaud, Charlie Nash, Mateusz Malinowski, Sander Dieleman, Oriol Vinyals, Matthew Botvinick, Ian Simon, Hannah Sheahan, Neil Zeghidour, Jean-Baptiste Alayrac, João Carreira, Jesse Engel
Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression.
Ranked #35 on Language Modelling on WikiText-103
no code implementations • CVPR 2021 • Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov
To address this issue, we introduce a new and challenging task: generating HD maps.
no code implementations • 10 Mar 2021 • Sander Dieleman, Charlie Nash, Jesse Engel, Karen Simonyan
Semantically meaningful information content in perceptual signals is usually unevenly distributed.
1 code implementation • 5 Mar 2021 • Charlie Nash, Jacob Menick, Sander Dieleman, Peter W. Battaglia
The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models.
1 code implementation • ICML 2020 • Charlie Nash, Yaroslav Ganin, S. M. Ali Eslami, Peter W. Battaglia
Polygon meshes are an efficient representation of 3D geometry, and are of central importance in computer graphics, robotics and games development.
2 code implementations • NeurIPS 2019 • Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Charlie Nash, William L. Hamilton, David Duvenaud, Raquel Urtasun, Richard S. Zemel
Our model generates graphs one block of nodes and associated edges at a time.
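As a rough illustration (not the authors' implementation), blockwise autoregressive graph generation can be sketched as follows; `block_model` is a hypothetical callable standing in for the learned per-block edge distribution:

```python
import numpy as np

def sample_graph(block_model, max_nodes, block_size):
    """Schematic blockwise autoregressive graph sampling.

    block_model(adj, b) is a hypothetical callable that, given the adjacency
    matrix of the partial graph, samples edges for b new nodes and returns a
    (b, n_old + b) 0/1 array of connections from the new nodes to all nodes
    (the existing ones plus the new block itself).
    """
    adj = np.zeros((0, 0), dtype=np.int8)  # start from the empty graph
    while adj.shape[0] < max_nodes:
        b = min(block_size, max_nodes - adj.shape[0])
        n_old, n_new = adj.shape[0], adj.shape[0] + b
        grown = np.zeros((n_new, n_new), dtype=np.int8)
        grown[:n_old, :n_old] = adj              # keep what is already generated
        grown[n_old:, :] = block_model(adj, b)   # sample edges for the new block
        grown = np.maximum(grown, grown.T)       # symmetrise: the graph stays undirected
        adj = grown
    return adj
```

Generating a block at a time, rather than one node at a time, is what lets the model trade off sample quality against the number of sequential sampling steps.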
1 code implementation • 11 Apr 2019 • Charlie Nash, Conor Durkan
We propose the Autoregressive Energy Machine, an energy-based model which simultaneously learns an unnormalized density and computes an importance-sampling estimate of the normalizing constant for each conditional in an autoregressive decomposition.
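In our notation (not necessarily the paper's), the importance-sampling estimate of each conditional's normalizing constant has roughly this form: draw K samples from a learned proposal \(q_\phi\) and average the energy/proposal ratios,

\[
\hat{Z}(x_{<d}) = \frac{1}{K}\sum_{k=1}^{K} \frac{\exp\!\big\{-E_\theta\big(x^{(k)};\, x_{<d}\big)\big\}}{q_\phi\big(x^{(k)} \mid x_{<d}\big)},
\qquad x^{(k)} \sim q_\phi(\cdot \mid x_{<d}),
\]

so that \(\log p_\theta(x_d \mid x_{<d}) \approx -E_\theta(x_d;\, x_{<d}) - \log \hat{Z}(x_{<d})\).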
no code implementations • 1 Jun 2018 • Charlie Nash, Nate Kushman, Christopher K. I. Williams
In addition, we can use these inversion models to estimate the mutual information between a model's inputs and its intermediate representations, thus quantifying the amount of information preserved by the network at different stages.
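Concretely (our notation), an inversion model \(q_\theta(x \mid z)\) fit to a deterministic representation \(z = f(x)\) yields a lower bound on the mutual information between inputs and that representation:

\[
I(X; Z) = H(X) - H(X \mid Z) \;\ge\; H(X) + \mathbb{E}_{x \sim p(x),\, z = f(x)}\big[\log q_\theta(x \mid z)\big],
\]

which tightens as \(q_\theta\) approaches the true conditional \(p(x \mid z)\); since \(H(X)\) is fixed by the data, differences in the bound across layers reflect how much information each stage of the network preserves.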
1 code implementation • 11 Jan 2018 • Christopher K. I. Williams, Charlie Nash, Alfredo Nazábal
We show how to compute the latent posterior distribution exactly for the factor analysis (FA) model in the presence of missing data, and note that this solution implies that a different encoder network is required for each pattern of missingness.
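For reference, using standard FA algebra in our own notation: with \(x = W z + \mu + \varepsilon\), \(\varepsilon \sim \mathcal{N}(0, \Psi)\) (diagonal \(\Psi\)) and \(z \sim \mathcal{N}(0, I)\), observing only the coordinates indexed by \(o\) gives the exact posterior

\[
p(z \mid x_o) = \mathcal{N}\!\big(z;\ \Sigma_o W_o^{\top} \Psi_o^{-1} (x_o - \mu_o),\ \Sigma_o\big),
\qquad \Sigma_o = \big(I + W_o^{\top} \Psi_o^{-1} W_o\big)^{-1},
\]

where \(W_o\), \(\mu_o\), \(\Psi_o\) keep only the rows (and diagonal entries) corresponding to the observed dimensions. Because both \(\Sigma_o\) and the gain matrix depend on \(o\), a single fixed encoder cannot reproduce the posterior for every missingness pattern.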
no code implementations • ICLR 2018 • Charlie Nash, Sebastian Nowozin, Nate Kushman
Using the ShapeWorld dataset, we show that our representation both enables a better generative model of images, leading to higher-quality image samples, and creates more semantically useful representations that improve performance over purely discriminative models on a simple natural-language yes/no question-answering task.