Search Results for author: Peter E. Latham

Found 7 papers, 4 papers with code

A Theory of Unimodal Bias in Multimodal Learning

no code implementations • 1 Dec 2023 • Yedi Zhang, Peter E. Latham, Andrew Saxe

A long unimodal phase can lead to a generalization deficit and permanent unimodal bias in the overparametrized regime.
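
A minimal numerical sketch of that claim, assuming only the setup the abstract describes (a late-fusion deep linear network; this is a toy construction, not the authors' code): with small initialisation, the pathway for the stronger modality escapes the saddle long before the weaker one, producing an extended unimodal phase.

```python
# Toy sketch, not the authors' model: one multiplicative weight pair per
# modality (a two-layer linear pathway), late fusion, small initialisation.
# The stronger modality (larger input variance) escapes the saddle first,
# giving a long unimodal phase before the weaker modality is learned.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x_a = rng.normal(0, 2.0, n)    # stronger modality (larger variance)
x_b = rng.normal(0, 0.5, n)    # weaker modality
y = x_a + x_b                  # the target needs both modalities

u_a = v_a = u_b = v_b = 1e-3   # small init -> saddle-point dynamics
lr = 1e-3
for step in range(40001):
    err = u_a * v_a * x_a + u_b * v_b * x_b - y
    du_a, dv_a = np.mean(err * v_a * x_a), np.mean(err * u_a * x_a)
    du_b, dv_b = np.mean(err * v_b * x_b), np.mean(err * u_b * x_b)
    u_a -= lr * du_a; v_a -= lr * dv_a
    u_b -= lr * du_b; v_b -= lr * dv_b
    if step % 5000 == 0:
        # w_a saturates within a few thousand steps; w_b lags far behind
        print(f"step {step:5d}  w_a={u_a*v_a:+.3f}  w_b={u_b*v_b:+.3f}")
```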

Actionable Neural Representations: Grid Cells from Minimal Constraints

1 code implementation • 30 Sep 2022 • William Dorrell, Peter E. Latham, Timothy E. J. Behrens, James C. R. Whittington

We suggest the brain must represent the consistent meaning of actions across space, as doing so allows you to find new short-cuts and navigate in unfamiliar settings.

Tasks: Navigate
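
A toy sketch of the constraint in question, under an encoding of my own choosing (not the paper's model): if position is coded as phases on a torus, each action corresponds to one fixed operator on the code, the same at every location, which is precisely the consistency that supports short-cuts.

```python
# Toy encoding (mine, not the paper's): position -> phases on a torus.
# A given action (displacement) is one *fixed* operator on the code,
# identical at every location, so the code can integrate displacements
# along routes it has never taken before.
import numpy as np

B = np.array([[1.0, 0.3],
              [0.2, 1.0]])                  # projection from space to phase

def code(pos):
    return np.exp(1j * (B @ pos))           # complex grid-module-like code

def action_operator(a):
    return np.diag(np.exp(1j * (B @ a)))    # fixed operator for action a

a = np.array([0.5, 0.1])
for p in (np.array([0.4, -1.2]), np.array([3.0, 2.5])):
    # the same operator implements action `a` at every position:
    print(np.allclose(code(p + a), action_operator(a) @ code(p)))  # True
```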

Powerpropagation: A sparsity inducing weight reparameterisation

2 code implementations • NeurIPS 2021 • Jonathan Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, Peter E. Latham, Yee Whye Teh

The training of sparse neural networks is becoming an increasingly important tool for reducing the computational footprint of models at both training and evaluation time, as well as enabling the effective scaling up of models.
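
The paper's reparameterisation is w = φ·|φ|^(α−1) with α > 1. A hedged sketch on plain linear regression (a toy setup, not the released code): the chain rule multiplies each gradient by α·|φ|^(α−1), so low-magnitude weights receive ever smaller updates and drift toward zero, from which sparsity follows.

```python
# Hedged sketch of the Powerpropagation reparameterisation on linear
# regression; not the authors' code. d/dphi [phi*|phi|^(a-1)] = a*|phi|^(a-1),
# so gradients are scaled by each parameter's own magnitude: a
# "rich get richer" dynamic that drives irrelevant weights toward zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=200)

alpha = 2.0
phi = rng.normal(scale=0.3, size=10)                   # underlying parameters
lr = 0.05
for _ in range(2000):
    w = phi * np.abs(phi) ** (alpha - 1)               # effective weights
    grad_w = X.T @ (X @ w - y) / len(y)                # dL/dw
    grad_phi = grad_w * alpha * np.abs(phi) ** (alpha - 1)   # chain rule
    phi -= lr * grad_phi

w = phi * np.abs(phi) ** (alpha - 1)
print(np.round(w, 3))   # irrelevant coordinates sit very close to zero
```

After training, pruning the near-zero coordinates costs almost nothing in loss, which is the property the reparameterisation is designed to buy.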

Towards Biologically Plausible Convolutional Networks

1 code implementation • NeurIPS 2021 • Roman Pogodin, Yash Mehta, Timothy P. Lillicrap, Peter E. Latham

Their approach requires the network to pause occasionally for a sleep-like phase of "weight sharing".
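
A toy rendering of that idea, heavily simplified (the paper implements dynamic weight sharing in locally connected networks; here "sleep" is just a pull toward the mean filter, and the wake-phase updates are random stand-ins for task-driven learning):

```python
# Simplified sketch, not the authors' code: a locally connected layer keeps
# a separate filter at every spatial position, trained independently during
# "wake"; occasional "sleep" steps pull the filters toward their common
# mean, approximating a convolution's shared weights.
import numpy as np

rng = np.random.default_rng(0)
n_loc, k = 16, 3                         # spatial locations, filter size
filters = rng.normal(scale=0.5, size=(n_loc, k))

def wake_step(filters, lr=0.1):
    # stand-in for task-driven local updates (random here for brevity)
    return filters - lr * rng.normal(scale=0.1, size=filters.shape)

def sleep_step(filters, share=0.9):
    mean_filter = filters.mean(axis=0, keepdims=True)
    return (1 - share) * filters + share * mean_filter

for t in range(1, 101):
    filters = wake_step(filters)
    if t % 10 == 0:                      # pause occasionally to "sleep"
        filters = sleep_step(filters)
        spread = filters.std(axis=0).mean()
        print(f"after sleep {t//10}: filter spread = {spread:.3f}")
```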

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

1 code implementation • NeurIPS 2020 • Roman Pogodin, Peter E. Latham

The state-of-the-art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a bottom-up forward pass (computation) and a top-down backward pass (learning); and the algorithm often needs precise labels for many data points.

Tasks: Image Classification
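
By contrast, the rule the paper derives is local: a 3-factor Hebbian update. A schematic sketch of the generic form (a simplification; in the paper the third, global factor comes from a kernelized information-bottleneck (HSIC) objective rather than from backpropagated errors):

```python
# Schematic 3-factor Hebbian update (a simplification of the paper's rule):
# every synapse combines its presynaptic input, its postsynaptic activity,
# and a single globally broadcast scalar. No weight transport is needed,
# since no synapse ever reads its outgoing weights.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(20, 50))    # 50 inputs -> 20 neurons

def three_factor_update(W, pre, global_signal, lr=1e-2):
    post = np.tanh(W @ pre)                     # factor 2: postsynaptic
    hebbian = np.outer(post, pre)               # factors 1 x 2: Hebbian term
    return W + lr * global_signal * hebbian     # factor 3 gates the update

x = rng.normal(size=50)     # presynaptic activity (factor 1)
m = 0.7                     # hypothetical globally broadcast modulator
W = three_factor_update(W, x, m)
print(W.shape)
```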

Synaptic plasticity as Bayesian inference

no code implementations • 4 Oct 2014 • Laurence Aitchison, Jannes Jegminat, Jorge Aurelio Menendez, Jean-Pascal Pfister, Alex Pouget, Peter E. Latham

Synapses then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates.

Tasks: Bayesian Inference
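
A scalar sketch of that idea (my construction; the paper's derivation is more general): track a posterior mean and variance per weight and let the effective learning rate be the Kalman gain, which grows with uncertainty.

```python
# Toy sketch of a Bayesian synapse: a scalar Kalman-style update in which
# the learning rate (the gain) scales with posterior variance, so uncertain
# weights move fast and confident weights barely move.
import numpy as np

rng = np.random.default_rng(0)
mu, var = 0.0, 1.0          # prior belief about the weight
obs_noise = 0.5             # assumed variance of each noisy "observation"
w_true = 0.8

for t in range(10):
    obs = w_true + rng.normal(0, np.sqrt(obs_noise))
    gain = var / (var + obs_noise)   # learning rate grows with uncertainty
    mu = mu + gain * (obs - mu)      # update toward the observation
    var = (1 - gain) * var           # confidence increases after each update
    print(f"t={t}: mu={mu:+.3f}, var={var:.3f}, lr={gain:.3f}")
```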

How biased are maximum entropy models?

no code implementations • NeurIPS 2011 • Jakob H. Macke, Iain Murray, Peter E. Latham

However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated.

Tasks: Small Data Image Classification
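
A quick numerical illustration of the underlying effect, in a setting simpler than the paper's (my construction): the plug-in entropy of a small sample is systematically biased downward, here for a uniform distribution over 8 states whose true entropy is 3 bits.

```python
# Sampling bias in entropy estimation: the plug-in (maximum likelihood)
# entropy of a small sample underestimates the true entropy, and the bias
# shrinks as the sample grows. True distribution: uniform over 8 states.
import numpy as np

rng = np.random.default_rng(0)
K = 8                                    # number of states; true entropy 3 bits

def plugin_entropy_bits(sample, K):
    counts = np.bincount(sample, minlength=K)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

for n in (10, 30, 100, 1000):
    est = np.mean([plugin_entropy_bits(rng.integers(0, K, n), K)
                   for _ in range(500)])
    print(f"n={n:5d}: mean estimated entropy = {est:.3f} bits (true 3.000)")
```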
