Search Results for author: Valerio Biscione

Found 8 papers, 2 papers with code

Convolutional Neural Networks Trained to Identify Words Provide a Surprisingly Good Account of Visual Form Priming Effects

no code implementations • 8 Feb 2023 • Dong Yin, Valerio Biscione, Jeffrey Bowers

A wide variety of orthographic coding schemes and models of visual word identification have been developed to account for masked priming data that provide a measure of orthographic similarity between letter strings.

Object Recognition

Mixed Evidence for Gestalt Grouping in Deep Neural Networks

1 code implementation • 14 Mar 2022 • Valerio Biscione, Jeffrey S. Bowers

Here we test a total of 16 networks covering a variety of architectures and learning paradigms (convolutional, attention-based, supervised and self-supervised, feed-forward and recurrent) on stimuli composed of dots (Experiment 1) and more complex shapes (Experiment 2) that produce strong Gestalt effects in humans.

Object Recognition

Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be

no code implementations • 12 Oct 2021 • Valerio Biscione, Jeffrey S. Bowers

When seeing a new object, humans can immediately recognize it across different retinal locations: the internal object representation is invariant to translation.

Translation

Learning Online Visual Invariances for Novel Objects via Supervised and Self-Supervised Training

no code implementations • 4 Oct 2021 • Valerio Biscione, Jeffrey S. Bowers

Through the analysis of models' internal representations, we show that standard supervised CNNs trained on transformed objects can acquire strong invariances on novel classes even when trained with as few as 50 objects taken from 10 classes.

Data Augmentation • Translation

A case for robust translation tolerance in humans and CNNs. A commentary on Han et al

no code implementations • 10 Dec 2020 • Ryan Blything, Valerio Biscione, Jeffrey Bowers

Han et al. (2020) reported a behavioral experiment that assessed the extent to which the human visual system can identify novel images at unseen retinal locations (what the authors call "intrinsic translation invariance") and developed a novel convolutional neural network model (an Eccentricity Dependent Network or ENN) to capture key aspects of the behavioral results.

Translation

Learning Translation Invariance in CNNs

no code implementations • NeurIPS Workshop SVRHM 2020 • Valerio Biscione, Jeffrey Bowers

In this work we show how, even though CNNs are not 'architecturally invariant' to translation, they can indeed 'learn' to be invariant to translation.

Translation

The human visual system and CNNs can both support robust online translation tolerance following extreme displacements

no code implementations • 27 Sep 2020 • Ryan Blything, Valerio Biscione, Ivan I. Vankov, Casimir J. H. Ludwig, Jeffrey S. Bowers

Although translation is perhaps the simplest spatial transform that the visual system needs to cope with, the extent to which the human visual system can identify objects at previously unseen locations is unclear, with some studies reporting near complete invariance over 10° and others reporting zero invariance at 4° of visual angle.

Translation
