no code implementations • 22 Apr 2024 • Karim G. Habashy, Benjamin D. Evans, Dan F. M. Goodman, Jeffrey S. Bowers
Evolution has yielded a diverse set of neurons with varying morphologies and physiological properties that impact their processing of temporal information.
2 code implementations • 8 Apr 2024 • Valerio Biscione, Dong Yin, Gaurav Malhotra, Marin Dujmovic, Milton L. Montero, Guillermo Puebla, Federico Adolfi, Rachel F. Heaton, John E. Hummel, Benjamin D. Evans, Karim Habashy, Jeffrey S. Bowers
Multiple benchmarks have been developed to assess the alignment between deep neural networks (DNNs) and human vision.
1 code implementation • 20 Feb 2024 • Guillermo Puebla, Jeffrey S. Bowers
To this end, these models use several kinds of attention mechanisms to segregate the individual objects in a scene from the background and from other objects.
no code implementations • 14 Apr 2023 • Guillermo Puebla, Jeffrey S. Bowers
Visual reasoning is a long-term goal of vision research.
no code implementations • 6 Apr 2022 • Federico Adolfi, Jeffrey S. Bowers, David Poeppel
The constraints of the task, however, can nudge the cognitive science and engineering of audition to qualitatively converge, suggesting that a closer mutual examination would potentially enrich artificial hearing systems and process models of the mind and brain.
no code implementations • 5 Apr 2022 • Milton L. Montero, Jeffrey S. Bowers, Rui Ponte Costa, Casimir J. H. Ludwig, Gaurav Malhotra
Recent research has shown that generative models with highly disentangled representations fail to generalise to unseen combinations of generative-factor values.
1 code implementation • 14 Mar 2022 • Valerio Biscione, Jeffrey S. Bowers
Here we test a total of 16 networks covering a variety of architectures and learning paradigms (convolutional, attention-based, supervised and self-supervised, feed-forward and recurrent) on dot (Experiment 1) and more complex shape (Experiment 2) stimuli that produce strong Gestalt effects in humans.
no code implementations • 12 Oct 2021 • Valerio Biscione, Jeffrey S. Bowers
When seeing a new object, humans can immediately recognize it across different retinal locations: the internal object representation is invariant to translation.
no code implementations • 4 Oct 2021 • Valerio Biscione, Jeffrey S. Bowers
Through the analysis of models' internal representations, we show that standard supervised CNNs trained on transformed objects can acquire strong invariances on novel classes even when trained with as few as 50 objects taken from 10 classes.
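The translated-object training regime described above can be illustrated with a minimal sketch. This is not the paper's code: the helper name, canvas size, and the use of plain NumPy arrays are all assumptions, chosen only to show the idea of presenting the same object at many retinal locations.

```python
import numpy as np

def random_translate(obj, canvas_size, rng):
    """Place a small object image at a random location on a blank canvas.

    A toy stand-in for translation-augmented training: each call shows
    the same object at a different location. `obj` is a 2-D array,
    `canvas_size` a (height, width) tuple at least as large as `obj`.
    """
    h, w = obj.shape
    H, W = canvas_size
    top = rng.integers(0, H - h + 1)    # random vertical offset
    left = rng.integers(0, W - w + 1)   # random horizontal offset
    canvas = np.zeros((H, W), dtype=obj.dtype)
    canvas[top:top + h, left:left + w] = obj
    return canvas

# Usage: generate many translated views of a single toy "object"
rng = np.random.default_rng(0)
obj = np.ones((5, 5))
views = [random_translate(obj, (28, 28), rng) for _ in range(50)]
```

Feeding such views to a standard supervised classifier is the kind of pretraining the abstract reports as sufficient to induce translation invariance on novel classes.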
no code implementations • 4 Jul 2021 • Jeff Mitchell, Jeffrey S. Bowers
The assumption that shared features between training and test data are required for generalisation in artificial neural networks has been common to both proponents and critics of these models.
no code implementations • 27 Sep 2020 • Ryan Blything, Valerio Biscione, Ivan I. Vankov, Casimir J. H. Ludwig, Jeffrey S. Bowers
Although translation is perhaps the simplest spatial transform that the visual system needs to cope with, the extent to which the human visual system can identify objects at previously unseen locations is unclear, with some studies reporting near-complete invariance over 10° and others reporting zero invariance at 4° of visual angle.
1 code implementation • 2 Jul 2020 • Ella M. Gale, Nicholas Martin, Ryan Blything, Anh Nguyen, Jeffrey S. Bowers
We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates.
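The "precision" selectivity measure mentioned above can be sketched roughly as follows: rank all inputs by how strongly they activate a unit, then ask what fraction of the top-n belong to a single class. This is a simplified reading, not the paper's implementation; the function name, `top_n` default, and data layout are assumptions.

```python
from collections import Counter
import numpy as np

def precision_selectivity(activations, labels, top_n=100):
    """Toy version of a 'precision' object-selectivity measure.

    Takes a unit's activation for every input, finds the top_n most
    activating inputs, and returns the fraction belonging to the single
    most common class among them (1.0 = perfectly class-selective).
    """
    order = np.argsort(activations)[::-1][:top_n]   # indices, strongest first
    top_labels = [labels[i] for i in order]
    _, count = Counter(top_labels).most_common(1)[0]
    return count / len(top_labels)

# Usage: 9 of the unit's top 10 inputs are class 'a'
acts = np.arange(20, 0, -1).astype(float)           # monotonically decreasing
labels = ['a'] * 9 + ['b'] + ['c'] * 10
p = precision_selectivity(acts, labels, top_n=10)
```

One way such a measure can mislead, as the abstract notes, is that a unit may look highly selective over its top few inputs while responding substantially to many other classes further down the ranking.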
no code implementations • ICLR 2018 • Ella M. Gale, Nicolas Martin, Jeffrey S. Bowers
We find that the number of local codes that emerge from an NN follows a well-defined distribution across the number of hidden-layer neurons, with a peak determined by the size of the input data, the number of examples presented, and the sparsity of the input data.
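A "local code" is commonly operationalised as a unit whose activations for one class are entirely separated from its activations for everything else. Counting such units can be sketched as below; this is a generic operationalisation under that assumption, not the paper's procedure, and the strict min-over-max criterion is one of several plausible thresholds.

```python
import numpy as np

def count_local_codes(acts, labels):
    """Count units whose activations for some class sit strictly above
    their activations for all other inputs (a simple 'local code' test).

    `acts` is an (n_inputs, n_units) array; `labels` gives the class of
    each input row.
    """
    labels = np.asarray(labels)
    n_local = 0
    for u in range(acts.shape[1]):
        a = acts[:, u]
        for c in np.unique(labels):
            # Unit u is a local code for class c if every in-class
            # activation exceeds every out-of-class activation.
            if a[labels == c].min() > a[labels != c].max():
                n_local += 1
                break
    return n_local

# Usage: unit 0 cleanly separates class 0; unit 1 does not
acts = np.array([[1.0, 0.2],
                 [0.9, 0.5],
                 [0.1, 0.4],
                 [0.2, 0.3]])
n = count_local_codes(acts, [0, 0, 1, 1])
```

Sweeping the number of hidden-layer units and histogramming this count is the kind of analysis that would reveal the peaked distribution the abstract describes.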