Search Results for author: Irina Higgins

Found 21 papers, 12 papers with code

Solving math word problems with process- and outcome-based feedback

no code implementations · 25 Nov 2022 · Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, Irina Higgins

Recent work has shown that asking language models to generate reasoning steps improves performance on many reasoning tasks.

Ranked #30 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning · GSM8K · +1
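
As a rough sketch of the distinction in the title (not the paper's actual training setup): outcome-based feedback scores only the final answer, while process-based feedback scores every intermediate reasoning step. The function names and label format below are illustrative assumptions.

```python
def outcome_reward(final_answer: str, gold_answer: str) -> float:
    # Outcome-based feedback: supervise only the final result;
    # how the model got there is never checked.
    return 1.0 if final_answer.strip() == gold_answer.strip() else 0.0

def process_reward(step_labels: list[bool]) -> float:
    # Process-based feedback: each reasoning step carries its own
    # correctness label (e.g. from human annotators), and the reward
    # reflects the fraction of valid steps.
    return sum(step_labels) / len(step_labels) if step_labels else 0.0
```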

Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning

no code implementations · 19 May 2022 · Antonia Creswell, Murray Shanahan, Irina Higgins

Large language models (LLMs) have been shown to be capable of impressive few-shot generalisation to new tasks.

Logical Reasoning
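
The selection-inference idea alternates two LLM calls: select the facts relevant to the question, then infer one new fact from that selection only, repeating until an answer can be read off. A minimal sketch, assuming a hypothetical `llm(prompt) -> str` helper and illustrative prompt wording:

```python
def selection_inference(llm, facts: list[str], question: str, n_steps: int = 3) -> str:
    # Alternate selection and inference, growing the fact set each round.
    for _ in range(n_steps):
        # Selection step: ask the model which facts bear on the question.
        selection = llm(
            "Facts:\n" + "\n".join(facts) +
            f"\nQuestion: {question}\nSelect the facts needed for one step of reasoning:"
        )
        # Inference step: derive a single new fact from the selection only,
        # keeping each reasoning step causal and inspectable.
        new_fact = llm(f"Given only:\n{selection}\nWhat follows?")
        facts.append(new_fact)
    # The final answer is read off from the accumulated reasoning trace.
    return llm("Facts:\n" + "\n".join(facts) + f"\nQuestion: {question}\nAnswer:")
```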

Symmetry-Based Representations for Artificial and Biological General Intelligence

no code implementations · 17 Mar 2022 · Irina Higgins, Sébastien Racanière, Danilo Rezende

In this review article we argue that symmetry transformations are a fundamental principle that can guide our search for what makes a good representation.

Representation Learning

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

2 code implementations · NA 2021 · Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, Geoffrey Irving

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Abstract Algebra · Anachronisms · +133

SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision

1 code implementation · NeurIPS 2021 · Irina Higgins, Peter Wirnsberger, Andrew Jaegle, Aleksandar Botev

Using SyMetric, we identify a set of architectural choices that significantly improve the performance of a previously proposed model for inferring latent dynamics from pixels, the Hamiltonian Generative Network (HGN).

Autonomous Driving · Image Reconstruction

Which priors matter? Benchmarking models for learning latent dynamics

2 code implementations · 9 Nov 2021 · Aleksandar Botev, Andrew Jaegle, Peter Wirnsberger, Daniel Hennes, Irina Higgins

Learning dynamics is at the heart of many important applications of machine learning (ML), such as robotics and autonomous driving.

Autonomous Driving · Benchmarking

Disentangling by Subspace Diffusion

1 code implementation · NeurIPS 2020 · David Pfau, Irina Higgins, Aleksandar Botev, Sébastien Racanière

We present a novel nonparametric algorithm for symmetry-based disentangling of data manifolds, the Geometric Manifold Component Estimator (GEOMANCER).

Metric Learning · Representation Learning

Disentangled Cumulants Help Successor Representations Transfer to New Tasks

no code implementations · 25 Nov 2019 · Christopher Grimm, Irina Higgins, Andre Barreto, Denis Teplyashin, Markus Wulfmeier, Tim Hertweck, Raia Hadsell, Satinder Singh

This is in contrast to state-of-the-art reinforcement learning agents, which typically start learning each new task from scratch and struggle with knowledge transfer.

Transfer Learning

Equivariant Hamiltonian Flows

no code implementations · 30 Sep 2019 · Danilo Jimenez Rezende, Sébastien Racanière, Irina Higgins, Peter Toth

This paper introduces Equivariant Hamiltonian Flows, a method for learning expressive densities that are invariant with respect to a known Lie algebra of local symmetry transformations while providing an equivariant representation of the data.

Representation Learning
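
For context, Hamiltonian flows are built on volume-preserving (symplectic) integration of Hamiltonian dynamics; the sketch below shows the generic leapfrog building block such models rest on, not the paper's equivariant construction.

```python
import numpy as np

def leapfrog(q, p, grad_potential, step=0.01, n=100):
    # Symplectic leapfrog integration of Hamiltonian dynamics with
    # H(q, p) = potential(q) + 0.5 * |p|^2: a volume-preserving flow,
    # which is what makes Hamiltonian-flow density models tractable.
    p = p - 0.5 * step * grad_potential(q)   # initial half kick
    for _ in range(n - 1):
        q = q + step * p                     # drift
        p = p - step * grad_potential(q)     # full kick
    q = q + step * p                         # final drift
    p = p - 0.5 * step * grad_potential(q)   # final half kick
    return q, p
```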

Unsupervised Model Selection for Variational Disentangled Representation Learning

no code implementations · ICLR 2020 · Sunny Duan, Loic Matthey, Andre Saraiva, Nicholas Watters, Christopher P. Burgess, Alexander Lerchner, Irina Higgins

Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks.

Attribute · Disentanglement · +2

Towards a Definition of Disentangled Representations

1 code implementation · 5 Dec 2018 · Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loic Matthey, Danilo Rezende, Alexander Lerchner

Here we propose that a principled solution to characterising disentangled representations can be found by focusing on the transformation properties of the world.

Representation Learning

Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies

1 code implementation · NeurIPS 2018 · Alessandro Achille, Tom Eccles, Loic Matthey, Christopher P. Burgess, Nick Watters, Alexander Lerchner, Irina Higgins

Intelligent behaviour in the real world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge.

Representation Learning

Understanding disentangling in β-VAE

23 code implementations · 10 Apr 2018 · Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, Alexander Lerchner

We present new intuitions and theoretical assessments of the emergence of disentangled representation in variational autoencoders.
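
The training modification the paper proposes, pressuring the KL term toward a target capacity C that is annealed upwards during training, can be sketched as below; the hyperparameter values are illustrative, not the paper's exact settings.

```python
import torch

def capacity_annealed_loss(recon_loss, kl, step, gamma=1000.0,
                           c_max=25.0, anneal_steps=100_000):
    # Controlled capacity increase: instead of a fixed beta * KL penalty,
    # push the KL toward a capacity C that grows linearly over training,
    # letting latent dimensions come online one factor at a time.
    c = min(c_max, c_max * step / anneal_steps)
    return recon_loss + gamma * torch.abs(kl - c)
```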

SCAN: Learning Hierarchical Compositional Visual Concepts

no code implementations · ICLR 2018 · Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner

SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

6 code implementations · ICLR 2017 · Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.

Disentanglement
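
The β-VAE objective is the standard VAE evidence lower bound with the KL term scaled by a coefficient β > 1. A minimal PyTorch sketch, in which the Bernoulli likelihood and β = 4 are illustrative choices rather than fixed by the paper:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term, E_q[log p(x|z)], with a Bernoulli decoder
    # (binary cross-entropy); other likelihoods work the same way.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 strengthens the pull toward the factorised isotropic
    # prior, which is what encourages disentangled latents.
    return recon + beta * kl
```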

Early Visual Concept Learning with Unsupervised Deep Learning

1 code implementation · 17 Jun 2016 · Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, Alexander Lerchner

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research.
