Search Results for author: David Ding

Found 7 papers, 3 papers with code

Know your audience: specializing grounded language models with listener subtraction

no code implementations16 Jun 2022 Aaditya K. Singh, David Ding, Andrew Saxe, Felix Hill, Andrew K. Lampinen

Through controlled experiments, we show that training a speaker with two listeners that perceive differently, using our method, allows the speaker to adapt to the idiosyncrasies of the listeners.

Language Modelling, Large Language Model

Neural spatio-temporal reasoning with object-centric self-supervised learning

no code implementations1 Jan 2021 David Ding, Felix Hill, Adam Santoro, Matthew Botvinick

Transformer-based language models have proved capable of rudimentary symbolic reasoning, underlining the effectiveness of applying self-attention computations to sets of discrete entities.

Language Modelling, Self-Supervised Learning

Attention over learned object embeddings enables complex visual reasoning

1 code implementation NeurIPS 2021 David Ding, Felix Hill, Adam Santoro, Malcolm Reynolds, Matt Botvinick

Neural networks have achieved success in a wide array of perceptual tasks but often fail at tasks involving both perception and higher-level reasoning.

Object, Video Object Tracking +1

Adapting Behaviour for Learning Progress

no code implementations14 Dec 2019 Tom Schaul, Diana Borsa, David Ding, David Szepesvari, Georg Ostrovski, Will Dabney, Simon Osindero

Determining what experience to generate to best facilitate learning (i.e., exploration) is one of the distinguishing features and open challenges in reinforcement learning.

Atari Games

Deep Lattice Networks and Partial Monotonic Functions

no code implementations NeurIPS 2017 Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya Gupta

We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network.

General Classification, Regression
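One building block named in the abstract above, the calibrator (a piecewise-linear function constrained to be monotonic), can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the softplus parameterization, and the keypoint layout are assumptions chosen to make monotonicity hold by construction.

```python
import numpy as np

class MonotonicCalibrator:
    """Hypothetical sketch of a monotonic piecewise-linear calibrator.

    Monotonicity is enforced by construction: keypoint outputs are
    cumulative sums of non-negative increments, so the interpolated
    function can never decrease.
    """

    def __init__(self, keypoints_x, raw_deltas):
        # keypoints_x: sorted input locations (len = len(raw_deltas) + 1)
        self.kx = np.asarray(keypoints_x, dtype=float)
        # Softplus maps unconstrained parameters to non-negative increments.
        deltas = np.log1p(np.exp(np.asarray(raw_deltas, dtype=float)))
        # Cumulative sum of non-negative increments -> non-decreasing outputs.
        self.ky = np.concatenate([[0.0], np.cumsum(deltas)])

    def __call__(self, x):
        # Linear interpolation between keypoints; clamps outside the range.
        return np.interp(x, self.kx, self.ky)

# Usage: any choice of raw (even negative) parameters yields a monotone map.
cal = MonotonicCalibrator(keypoints_x=[0.0, 1.0, 2.0, 3.0],
                          raw_deltas=[0.5, -1.0, 2.0])
ys = cal(np.linspace(0.0, 3.0, 20))
```

In the full architecture described above, such calibrators would be interleaved with linear embeddings and lattice ensembles and trained jointly, with monotonicity constraints applied only to the user-specified inputs.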
