Search Results for author: Jeffrey S. Bowers

Found 13 papers, 4 papers with code

Adapting to time: why nature evolved a diverse set of neurons

no code implementations · 22 Apr 2024 · Karim G. Habashy, Benjamin D. Evans, Dan F. M. Goodman, Jeffrey S. Bowers

Evolution has yielded a diverse set of neurons with varying morphologies and physiological properties that impact their processing of temporal information.

Visual Reasoning in Object-Centric Deep Neural Networks: A Comparative Cognition Approach

1 code implementation · 20 Feb 2024 · Guillermo Puebla, Jeffrey S. Bowers

To this end, these models use several kinds of attention mechanisms to segregate the individual objects in a scene from the background and from other objects.

Object · Relational Reasoning +2
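As a loose illustration of the kind of attention-based segregation such object-centric models rely on, the hypothetical sketch below (not the paper's code, and with made-up scores) has two toy "slots" compete for each pixel via a per-pixel softmax, yielding soft masks that separate one object from the other and the background:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Toy scene: per-pixel "objectness" scores from two hypothetical slots.
# Slots compete for every pixel; a softmax across slots turns the raw
# scores into soft attention masks that segregate the two objects.
slot_scores = [
    [5.0, 5.0, -2.0, -2.0],   # slot 0 claims the left pixels
    [-2.0, -2.0, 5.0, 5.0],   # slot 1 claims the right pixels
]

# One mask value per slot at each pixel (softmax across slots).
masks = [softmax(col) for col in zip(*slot_scores)]

# Each pixel's mask sums to 1, and each slot dominates "its" pixels.
assert all(abs(sum(m) - 1.0) < 1e-9 for m in masks)
assert masks[0][0] > 0.99 and masks[3][1] > 0.99
```

The per-pixel competition (softmax across slots rather than across pixels) is the normalisation trick that forces slots to divide the scene up rather than all attend to the same region.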

Successes and critical failures of neural networks in capturing human-like speech recognition

no code implementations · 6 Apr 2022 · Federico Adolfi, Jeffrey S. Bowers, David Poeppel

The constraints of the task, however, can nudge the cognitive science and engineering of audition to qualitatively converge, suggesting that a closer mutual examination would potentially enrich artificial hearing systems and process models of the mind and brain.

Speech Recognition

Lost in Latent Space: Disentangled Models and the Challenge of Combinatorial Generalisation

no code implementations · 5 Apr 2022 · Milton L. Montero, Jeffrey S. Bowers, Rui Ponte Costa, Casimir J. H. Ludwig, Gaurav Malhotra

Recent research has shown that generative models with highly disentangled representations fail to generalise to unseen combinations of generative-factor values.
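A combinatorial-generalisation split of this kind can be sketched as follows; the factor names and the held-out combination are illustrative assumptions, not taken from the paper. Every individual factor value appears in training, but one specific *combination* is withheld, so succeeding at test time requires recombining familiar factors:

```python
from itertools import product

# Two hypothetical generative factors (illustrative, not the paper's).
shapes = ["square", "circle", "triangle"]
colours = ["red", "green", "blue"]

# Hold out one combination: "red" and "square" each appear in training,
# but never together. Generalising to (square, red) is combinatorial.
held_out = {("square", "red")}
all_combos = set(product(shapes, colours))
train_set = all_combos - held_out
test_set = held_out

# Sanity checks: the split is disjoint, and every factor value in the
# test set was individually seen during training.
assert not (train_set & test_set)
assert all(s in {c[0] for c in train_set} for s, _ in test_set)
assert all(col in {c[1] for c in train_set} for _, col in test_set)
```

The failure mode the abstract describes is a model that scores well on every training combination yet degrades sharply on exactly this held-out cell of the factor grid.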

Mixed Evidence for Gestalt Grouping in Deep Neural Networks

1 code implementation · 14 Mar 2022 · Valerio Biscione, Jeffrey S. Bowers

Here we test a total of 16 networks covering a variety of architectures and learning paradigms (convolutional, attention-based, supervised and self-supervised, feed-forward and recurrent) on dot stimuli (Experiment 1) and more complex shape stimuli (Experiment 2) that produce strong Gestalt effects in humans.

Object Recognition

Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be

no code implementations · 12 Oct 2021 · Valerio Biscione, Jeffrey S. Bowers

When seeing a new object, humans can immediately recognize it across different retinal locations: the internal object representation is invariant to translation.

Translation
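A minimal, purely illustrative sketch of why some readouts are translation-invariant and others are not: global average pooling over a feature map is exactly invariant to (circular) translation, whereas a flattened readout changes with the object's position. This toy example is an assumption-laden simplification, not the paper's experiment:

```python
def shift(image, dx):
    """Circularly shift a 2D image (list of rows) right by dx pixels."""
    return [row[-dx:] + row[:-dx] for row in image]

def flatten(image):
    """Position-sensitive readout: concatenate all pixel values."""
    return [v for row in image for v in row]

def global_avg_pool(image):
    """Position-insensitive readout: average over all spatial locations."""
    vals = flatten(image)
    return sum(vals) / len(vals)

# A small "object" embedded in an otherwise blank image.
image = [[0, 0, 0, 0],
         [0, 9, 1, 0],
         [0, 2, 8, 0],
         [0, 0, 0, 0]]

shifted = shift(image, 1)

# The flattened representation changes under translation...
assert flatten(image) != flatten(shifted)
# ...but the pooled representation is exactly invariant.
assert global_avg_pool(image) == global_avg_pool(shifted)
```

A CNN's convolutions are translation-*equivariant* (features move with the object); whether the network's output is translation-*invariant* depends on what sits on top of them, which is why, as the title suggests, invariance is something a network can learn rather than something it gets for free.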

Learning Online Visual Invariances for Novel Objects via Supervised and Self-Supervised Training

no code implementations · 4 Oct 2021 · Valerio Biscione, Jeffrey S. Bowers

Through the analysis of models' internal representations, we show that standard supervised CNNs trained on transformed objects can acquire strong invariances on novel classes even when trained with as few as 50 objects taken from 10 classes.

Data Augmentation · Translation

Generalisation in Neural Networks Does not Require Feature Overlap

no code implementations · 4 Jul 2021 · Jeff Mitchell, Jeffrey S. Bowers

That shared features between train and test data are required for generalisation in artificial neural networks has been a common assumption of both proponents and critics of these models.

The human visual system and CNNs can both support robust online translation tolerance following extreme displacements

no code implementations · 27 Sep 2020 · Ryan Blything, Valerio Biscione, Ivan I. Vankov, Casimir J. H. Ludwig, Jeffrey S. Bowers

Although translation is perhaps the simplest spatial transform that the visual system needs to cope with, the extent to which the human visual system can identify objects at previously unseen locations is unclear, with some studies reporting near-complete invariance over 10° and others reporting zero invariance at 4° of visual angle.

Translation

Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes?

1 code implementation · 2 Jul 2020 · Ella M. Gale, Nicholas Martin, Ryan Blything, Anh Nguyen, Jeffrey S. Bowers

We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates.

General Classification · Image Classification +1

When and where do feed-forward neural networks learn localist representations?

no code implementations · ICLR 2018 · Ella M. Gale, Nicolas Martin, Jeffrey S. Bowers

We find that the number of local codes that emerge from an NN follows a well-defined distribution across the number of hidden-layer neurons, with a peak determined by the size of the input data, the number of examples presented, and the sparsity of the input data.
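One common way to operationalise a "local code" is a unit whose lowest activation on items of one class exceeds its highest activation on items of every other class, so the class can be read off from that single unit. The sketch below is a hypothetical illustration of that criterion with invented activations, not the paper's actual measure or data:

```python
def is_local_code(acts_by_class, target):
    """A unit is a local code for `target` if its minimum activation on
    target-class items exceeds its maximum activation on all other items.

    acts_by_class maps class name -> list of this unit's activations.
    """
    lo = min(acts_by_class[target])
    hi = max(a for cls, acts in acts_by_class.items()
             if cls != target for a in acts)
    return lo > hi

# One hidden unit's (made-up) activations across items of three classes.
unit = {"cat": [0.9, 0.8, 0.95],
        "dog": [0.1, 0.2],
        "car": [0.05, 0.3]}

assert is_local_code(unit, "cat")       # perfectly separates cats
assert not is_local_code(unit, "dog")   # overlaps with other classes
```

Counting how many hidden units pass this test, as a function of layer width and input statistics, is the kind of measurement that would yield the distribution the abstract describes.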
