Search Results for author: Christopher P. Burgess

Found 11 papers, 8 papers with code

Linking vision and motion for self-supervised object-centric perception

1 code implementation • 14 Jul 2023 • Kaylene C. Stocking, Zak Murez, Vijay Badrinarayanan, Jamie Shotton, Alex Kendall, Claire Tomlin, Christopher P. Burgess

Object-centric representations enable autonomous driving algorithms to reason about interactions between many independent agents and scene features.

Autonomous Driving • Object

SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition

1 code implementation • NeurIPS 2021 • Rishabh Kabra, Daniel Zoran, Goker Erdogan, Loic Matthey, Antonia Creswell, Matthew Botvinick, Alexander Lerchner, Christopher P. Burgess

Leveraging the shared structure that exists across different scenes, our model learns to infer two sets of latent representations from RGB video input alone: a set of "object" latents, corresponding to the time-invariant, object-level contents of the scene, as well as a set of "frame" latents, corresponding to global time-varying elements such as viewpoint.

Instance Segmentation • Object • +1

Unsupervised Model Selection for Variational Disentangled Representation Learning

no code implementations • ICLR 2020 • Sunny Duan, Loic Matthey, Andre Saraiva, Nicholas Watters, Christopher P. Burgess, Alexander Lerchner, Irina Higgins

Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks.

Attribute • Disentanglement • +2

Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs

2 code implementations • 21 Jan 2019 • Nicholas Watters, Loic Matthey, Christopher P. Burgess, Alexander Lerchner

We present a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations.
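The core idea of the Spatial Broadcast Decoder is to tile ("broadcast") the latent vector across a spatial grid and append fixed coordinate channels before applying a convolutional stack. A minimal sketch of that broadcast step is below; the function name, latent size, and grid size are illustrative, not taken from the paper's code.

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a latent vector across an (height, width) grid and append
    fixed x/y coordinate channels, giving the input to the decoder's
    convolutional layers."""
    latent_dim = z.shape[0]
    # Broadcast z to every spatial location: (height, width, latent_dim).
    tiled = np.tile(z, (height, width, 1))
    # Fixed coordinate channels spanning [-1, 1].
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, height),
                         np.linspace(-1.0, 1.0, width), indexing="ij")
    # Concatenate along channels: (height, width, latent_dim + 2).
    return np.concatenate([tiled, ys[..., None], xs[..., None]], axis=-1)

grid = spatial_broadcast(np.zeros(8), 64, 64)
print(grid.shape)  # (64, 64, 10)
```

Because every spatial position sees the same latent plus its own coordinates, the convolutions that follow need no upsampling, which is what keeps the architecture simple.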

Neural Rendering

Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies

1 code implementation • NeurIPS 2018 • Alessandro Achille, Tom Eccles, Loic Matthey, Christopher P. Burgess, Nick Watters, Alexander Lerchner, Irina Higgins

Intelligent behaviour in the real world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge.

Representation Learning

Understanding disentangling in β-VAE

23 code implementations • 10 Apr 2018 • Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, Alexander Lerchner

We present new intuitions and theoretical assessments of the emergence of disentangled representation in variational autoencoders.

SCAN: Learning Hierarchical Compositional Visual Concepts

no code implementations • ICLR 2018 • Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner

SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.
