2 code implementations • 8 Apr 2024 • Valerio Biscione, Dong Yin, Gaurav Malhotra, Marin Dujmovic, Milton L. Montero, Guillermo Puebla, Federico Adolfi, Rachel F. Heaton, John E. Hummel, Benjamin D. Evans, Karim Habashy, Jeffrey S. Bowers
Multiple benchmarks have been developed to assess the alignment between deep neural networks (DNNs) and human vision.
no code implementations • 8 Feb 2023 • Dong Yin, Valerio Biscione, Jeffrey Bowers
A wide variety of orthographic coding schemes and models of visual word identification have been developed to account for masked priming data that provide a measure of orthographic similarity between letter strings.
1 code implementation • 14 Mar 2022 • Valerio Biscione, Jeffrey S. Bowers
Here we test a total of 16 networks covering a variety of architectures and learning paradigms (convolutional, attention-based, supervised and self-supervised, feed-forward and recurrent) on dot stimuli (Experiment 1) and more complex shapes (Experiment 2) that produce strong Gestalt effects in humans.
no code implementations • 12 Oct 2021 • Valerio Biscione, Jeffrey S. Bowers
When seeing a new object, humans can immediately recognize it across different retinal locations: the internal object representation is invariant to translation.
no code implementations • 4 Oct 2021 • Valerio Biscione, Jeffrey S. Bowers
Through the analysis of models' internal representations, we show that standard supervised CNNs trained on transformed objects can acquire strong invariances on novel classes even when trained with as few as 50 objects taken from 10 classes.
no code implementations • 10 Dec 2020 • Ryan Blything, Valerio Biscione, Jeffrey Bowers
Han et al. (2020) reported a behavioral experiment that assessed the extent to which the human visual system can identify novel images at unseen retinal locations (what the authors call "intrinsic translation invariance") and developed a novel convolutional neural network model (an Eccentricity Dependent Network or ENN) to capture key aspects of the behavioral results.
no code implementations • NeurIPS Workshop SVRHM 2020 • Valerio Biscione, Jeffrey Bowers
In this work we show how, even though CNNs are not 'architecturally invariant' to translation, they can indeed 'learn' to be invariant to translation.
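The distinction the abstract draws can be illustrated with a toy example: convolution by itself is translation-*equivariant* (the feature map shifts along with the input), and invariance only arises once position information is discarded, e.g. by global pooling. A minimal NumPy sketch (all names and values illustrative, not taken from the paper):

```python
import numpy as np

# Toy 1-D "image" containing the same pattern at two different positions.
kernel = np.array([1.0, -1.0, 1.0])
pattern = np.array([0.0, 1.0, 0.0])

def conv1d(x, k):
    """Valid-mode cross-correlation, the core op of a CNN layer."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

img_a = np.zeros(10); img_a[2:5] = pattern
img_b = np.zeros(10); img_b[6:9] = pattern   # same pattern, translated

# Convolution is equivariant: the responses differ position-wise.
fa, fb = conv1d(img_a, kernel), conv1d(img_b, kernel)
print(np.allclose(fa, fb))             # False

# Global max pooling discards position, giving translation invariance.
print(np.isclose(fa.max(), fb.max()))  # True
```

A network whose later layers flatten the feature map instead of pooling it is not architecturally invariant in this sense, which is why invariance there must be learned from the training data.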
no code implementations • 27 Sep 2020 • Ryan Blything, Valerio Biscione, Ivan I. Vankov, Casimir J. H. Ludwig, Jeffrey S. Bowers
Although translation is perhaps the simplest spatial transform that the visual system needs to cope with, the extent to which the human visual system can identify objects at previously unseen locations is unclear, with some studies reporting near-complete invariance over 10° and others reporting zero invariance at 4° of visual angle.