1 code implementation • 14 Jul 2023 • Kaylene C. Stocking, Zak Murez, Vijay Badrinarayanan, Jamie Shotton, Alex Kendall, Claire Tomlin, Christopher P. Burgess
Object-centric representations enable autonomous driving algorithms to reason about interactions between many independent agents and scene features.
no code implementations • 23 Jul 2021 • James C. R. Whittington, Rishabh Kabra, Loic Matthey, Christopher P. Burgess, Alexander Lerchner
Learning structured representations of visual scenes is currently a major bottleneck to bridging perception with reasoning.
1 code implementation • NeurIPS 2021 • Rishabh Kabra, Daniel Zoran, Goker Erdogan, Loic Matthey, Antonia Creswell, Matthew Botvinick, Alexander Lerchner, Christopher P. Burgess
Leveraging the shared structure that exists across different scenes, our model learns to infer two sets of latent representations from RGB video input alone: "object" latents, corresponding to the time-invariant, object-level contents of the scene, and "frame" latents, corresponding to global time-varying elements such as viewpoint.
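The object/frame factorisation described above can be illustrated with a minimal sketch: time-invariant object latents are paired with time-varying frame latents so that every object/timestep combination gets a joint code. The shapes, dimensionalities, and concatenation scheme here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

# Illustrative sizes (assumptions): K objects, T video frames,
# and the per-object / per-frame latent dimensionalities.
K, T, D_OBJ, D_FRAME = 4, 8, 16, 8

rng = np.random.default_rng(0)
object_latents = rng.standard_normal((K, D_OBJ))    # one per object, shared across time
frame_latents = rng.standard_normal((T, D_FRAME))   # one per timestep, shared across objects

# Broadcast both sets onto a (K, T, D_OBJ + D_FRAME) grid so each
# object/timestep pair has a combined code a decoder could consume.
obj_tiled = np.repeat(object_latents[:, None, :], T, axis=1)    # (K, T, D_OBJ)
frame_tiled = np.repeat(frame_latents[None, :, :], K, axis=0)   # (K, T, D_FRAME)
combined = np.concatenate([obj_tiled, frame_tiled], axis=-1)    # (K, T, D_OBJ + D_FRAME)
```

By construction, the first `D_OBJ` channels of `combined` are constant over time for a given object, while the last `D_FRAME` channels are shared by all objects at a given timestep — mirroring the time-invariant vs. time-varying split the abstract describes.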
no code implementations • ICLR 2020 • Sunny Duan, Loic Matthey, Andre Saraiva, Nicholas Watters, Christopher P. Burgess, Alexander Lerchner, Irina Higgins
Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks.
1 code implementation • 22 May 2019 • Nicholas Watters, Loic Matthey, Matko Bosnjak, Christopher P. Burgess, Alexander Lerchner
Data efficiency and robustness to task-irrelevant perturbations are long-standing challenges for deep reinforcement learning algorithms.
5 code implementations • 22 Jan 2019 • Christopher P. Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, Alexander Lerchner
The ability to decompose scenes in terms of abstract building blocks is crucial for general intelligence.
2 code implementations • 21 Jan 2019 • Nicholas Watters, Loic Matthey, Christopher P. Burgess, Alexander Lerchner
We present a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations.
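One simple neural rendering scheme of this kind tiles the latent vector across a spatial grid and appends fixed coordinate channels before convolutional decoding. The function below is a hedged sketch of that idea (the grid size, coordinate range, and latent size are assumptions for illustration).

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile latent vector z over an H x W grid and append x/y coordinate channels."""
    zb = np.tile(z[None, None, :], (height, width, 1))           # (H, W, D)
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, height),
                         np.linspace(-1.0, 1.0, width),
                         indexing="ij")
    coords = np.stack([xs, ys], axis=-1)                         # (H, W, 2)
    return np.concatenate([zb, coords], axis=-1)                 # (H, W, D + 2)

# Example: a 10-dim latent broadcast to a 64x64 grid; in a full model this
# tensor would be fed to a convolutional decoder.
z = np.random.default_rng(0).standard_normal(10)
grid = spatial_broadcast(z, 64, 64)
```

Because every spatial location sees the same latent code plus its own coordinates, the decoder can render position-dependent content without the latent having to encode raw pixel layout, which is one intuition for why such architectures aid disentangling.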
1 code implementation • NeurIPS 2018 • Alessandro Achille, Tom Eccles, Loic Matthey, Christopher P. Burgess, Nick Watters, Alexander Lerchner, Irina Higgins
Intelligent behaviour in the real world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge.
23 code implementations • 10 Apr 2018 • Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, Alexander Lerchner
We present new intuitions and theoretical assessments of the emergence of disentangled representations in variational autoencoders.
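Work in this line is often summarised by a β-VAE-style objective in which the KL divergence between the approximate posterior and the prior is encouraged to stay near a target capacity. Below is a minimal numerical sketch of such an objective for a diagonal-Gaussian posterior; the `gamma` and `capacity` values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def capacity_objective(recon_nll, mu, logvar, gamma=100.0, capacity=5.0):
    """Reconstruction loss plus a penalty keeping the KL term near a target capacity.

    Gradually raising `capacity` during training lets the model encode more
    information per latent, one channel at a time -- an intuition associated
    with the emergence of disentangled factors.
    """
    kl = kl_diag_gaussian(mu, logvar)
    return recon_nll + gamma * abs(kl - capacity)
```

For example, with `mu = 0` and `logvar = 0` the KL term is exactly zero, so the objective reduces to the reconstruction loss plus `gamma * capacity`.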
1 code implementation • ICML 2017 • Irina Higgins, Arka Pal, Andrei A. Rusu, Loic Matthey, Christopher P. Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, Alexander Lerchner
Domain adaptation is an important open problem in deep reinforcement learning (RL).
no code implementations • ICLR 2018 • Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner
SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.