Search Results for author: Francesco Alemanno

Found 10 papers, 0 papers with code

Regularization, early-stopping and dreaming: a Hopfield-like setup to address generalization and overfitting

no code implementations • 1 Aug 2023 • Elena Agliari, Francesco Alemanno, Miriam Aquaro, Alberto Fachechi

In this work we approach attractor neural networks from a machine learning perspective: we look for optimal network parameters by applying a gradient descent over a regularized loss function.
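
As a rough illustration of the approach described above (not the paper's actual setup), the sketch below runs gradient descent on an L2-regularized reconstruction loss for a symmetric Hopfield-like coupling matrix; the loss function, regularizer strength and stopping rule are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 10                              # neurons and stored patterns (placeholder sizes)
xi = rng.choice([-1.0, 1.0], size=(K, N))   # random binary patterns

J = np.zeros((N, N))                        # symmetric couplings to be learned
lam, lr = 1e-3, 0.05                        # L2 penalty and learning rate (illustrative values)

def loss(J):
    # mean squared reconstruction error + L2 penalty; a generic regularized loss,
    # not necessarily the one used in the paper
    err = xi - xi @ J.T
    return 0.5 * np.mean(err ** 2) + 0.5 * lam * np.sum(J ** 2)

for epoch in range(500):
    err = xi - xi @ J.T                     # residual of each pattern under one synaptic pass
    grad = -(err.T @ xi) / (K * N) + lam * J
    grad = 0.5 * (grad + grad.T)            # keep the couplings symmetric
    J -= lr * grad
    np.fill_diagonal(J, 0.0)                # no self-interactions

print("final regularized loss:", loss(J))
```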

Hopfield model with planted patterns: a teacher-student self-supervised learning model

no code implementations • 26 Apr 2023 • Francesco Alemanno, Luca Camanzi, Gianluca Manzan, Daniele Tantari

While Hopfield networks are known as paradigmatic models for memory storage and retrieval, modern artificial intelligence systems mainly stand on the machine learning paradigm.

Memorization Retrieval +1

Dense Hebbian neural networks: a replica symmetric picture of supervised learning

no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi

We consider dense associative neural networks trained by a teacher (i.e., with supervision) and investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations.

Retrieval
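
As a generic illustration of the Monte Carlo side mentioned in the entry above, here is a minimal Metropolis simulation of a dense (P-wise) Hebbian network; the interaction order, temperature and pattern statistics are placeholders rather than the paper's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, P, beta = 60, 3, 4, 2.0              # neurons, patterns, interaction order, inverse temperature
xi = rng.choice([-1, 1], size=(K, N))      # random binary patterns

def energy(sigma):
    # dense Hebbian energy: -N * sum_mu (overlap_mu)^P, a standard dense-network form
    m = xi @ sigma / N
    return -N * np.sum(m ** P)

# start from a corrupted copy of pattern 0 and run Metropolis sweeps
sigma = xi[0].copy()
sigma[rng.random(N) < 0.2] *= -1

for sweep in range(50):
    for i in rng.permutation(N):
        e_old = energy(sigma)
        sigma[i] *= -1                      # propose flipping spin i
        dE = energy(sigma) - e_old
        if dE > 0 and rng.random() >= np.exp(-beta * dE):
            sigma[i] *= -1                  # reject the move

print("overlap with pattern 0:", xi[0] @ sigma / N)
```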

Dense Hebbian neural networks: a replica symmetric picture of unsupervised learning

no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi

We consider dense associative neural networks trained without supervision and investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations.


Recurrent neural networks that generalize from examples and optimize by dreaming

no code implementations • 17 Apr 2022 • Miriam Aquaro, Francesco Alemanno, Ido Kanter, Fabrizio Durante, Elena Agliari, Adriano Barra

The gap between the huge volumes of data needed to train artificial neural networks and the relatively small amount of data needed by their biological counterparts is a central puzzle in machine learning.

Supervised Hebbian Learning

no code implementations • 2 Mar 2022 • Francesco Alemanno, Miriam Aquaro, Ido Kanter, Adriano Barra, Elena Agliari

In the neural-network literature, Hebbian learning traditionally refers to the procedure by which the Hopfield model and its generalizations store archetypes (i.e., definite patterns that are experienced just once to form the synaptic matrix).

Disentanglement
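
For reference, the traditional Hebbian storage recalled in the entry above can be sketched as follows; the supervised extension studied in the paper (where the network sees noisy labelled examples rather than the archetypes themselves) is only hinted at by the class-averaging step, and the sizes, noise level and averaging rule are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, M = 200, 5, 50                        # neurons, archetypes, labelled examples per archetype
r = 0.1                                     # probability of flipping each bit in an example

archetypes = rng.choice([-1, 1], size=(K, N))

# classical Hebbian storage: each archetype is "experienced once" to build the synaptic matrix
J_hebb = (archetypes.T @ archetypes) / N
np.fill_diagonal(J_hebb, 0.0)

# a supervised-flavoured variant (illustrative only): average noisy labelled examples
# per class and store the averages instead of the unavailable archetypes
examples = archetypes[:, None, :] * np.where(rng.random((K, M, N)) < r, -1, 1)
class_means = np.where(examples.mean(axis=1) >= 0, 1, -1)
J_sup = (class_means.T @ class_means) / N
np.fill_diagonal(J_sup, 0.0)

# retrieval check: iterate sign dynamics from a noisy cue of archetype 0
sigma = archetypes[0] * np.where(rng.random(N) < 0.15, -1, 1)
for _ in range(10):
    sigma = np.sign(J_sup @ sigma)
    sigma[sigma == 0] = 1
print("overlap with archetype 0:", archetypes[0] @ sigma / N)
```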

The emergence of a concept in shallow neural networks

no code implementations • 1 Sep 2021 • Elena Agliari, Francesco Alemanno, Adriano Barra, Giordano De Marzo

We consider restricted Boltzmann machines (RBMs) trained on an unstructured dataset made of blurred copies of definite but unavailable "archetypes", and we show that there exists a critical sample size beyond which the RBM can learn the archetypes, i.e., the machine can successfully act as a generative model or as a classifier, depending on the operational routine.
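
A toy version of the data model described above (blurred copies of archetypes that are never shown directly), followed by a bare-bones contrastive-divergence update for a binary RBM; the hidden-layer size, learning rate, number of epochs and the omission of bias terms are all simplifications for illustration, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, M, r = 50, 2, 500, 0.15               # visible units, archetypes, samples, blur level (placeholders)

archetypes = rng.choice([-1, 1], size=(K, N))                          # "definite but unavailable" ground truths
labels = rng.integers(K, size=M)
data = archetypes[labels] * np.where(rng.random((M, N)) < r, -1, 1)    # only blurred copies are observed

# minimal CD-1 training of a {0,1} binary RBM (generic recipe; biases omitted for brevity)
v = (data + 1) / 2
H, lr = 10, 0.05
W = 0.01 * rng.standard_normal((N, H))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for epoch in range(20):
    ph = sigmoid(v @ W)                                # hidden probabilities given the data
    h = (rng.random(ph.shape) < ph).astype(float)      # sampled hidden states
    v1 = sigmoid(h @ W.T)                              # one-step reconstruction of the visibles
    ph1 = sigmoid(v1 @ W)                              # hidden probabilities given the reconstruction
    W += lr * ((v.T @ ph) - (v1.T @ ph1)) / M          # contrastive-divergence weight update
```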

Interpolating between boolean and extremely high noisy patterns through Minimal Dense Associative Memories

no code implementations • 2 Dec 2019 • Francesco Alemanno, Martino Centonze, Alberto Fachechi

Recently, Hopfield and Krotov introduced the concept of dense associative memories (DAMs), close to spin glasses with P-wise interactions in disordered statistical-mechanics jargon: they proved a number of remarkable features of these networks and suggested their use to (partially) explain the success of the new generation of Artificial Intelligence.
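
For orientation, the dense-associative-memory energy is usually written, up to normalization, as $E(\sigma) = -\sum_{\mu} F(\xi^{\mu} \cdot \sigma)$ with a polynomial $F(x) \propto x^{P}$, so that $P=2$ recovers the standard pairwise Hopfield model; the specific interpolating construction introduced in this paper is not reproduced here.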

Neural networks with redundant representation: detecting the undetectable

no code implementations • 28 Nov 2019 • Elena Agliari, Francesco Alemanno, Adriano Barra, Martino Centonze, Alberto Fachechi

We consider a three-layer Sejnowski machine and show that features learnt via contrastive divergence have a dual representation as patterns in a dense associative memory of order P=4.

Dreaming neural networks: rigorous results

no code implementations • 21 Dec 2018 • Elena Agliari, Francesco Alemanno, Adriano Barra, Alberto Fachechi

Recently, a daily routine for associative neural networks has been proposed: the network learns in a Hebbian fashion during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, it optimizes information storage by consolidating pure patterns and removing spurious ones, which forces the synaptic matrix to collapse to the projector matrix (ultimately approaching the Kanter-Sompolinsky model).

Retrieval
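
As a minimal illustration of the "projector" end point mentioned in the entry above (the Kanter-Sompolinsky, or pseudo-inverse, rule), the sketch below contrasts it with the plain Hebbian matrix and checks that the stored patterns are exact fixed points of the projector couplings; the sizes are placeholders and the dreaming dynamics itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 100, 20
xi = rng.choice([-1.0, 1.0], size=(K, N))    # stored patterns

# Hebbian couplings (awake state) vs projector couplings (the sleep-state end point)
J_hebb = (xi.T @ xi) / N
C = (xi @ xi.T) / N                          # pattern correlation matrix
J_proj = xi.T @ np.linalg.inv(C) @ xi / N    # Kanter-Sompolinsky / pseudo-inverse rule

# with the projector couplings every stored pattern is an exact fixed point of sign dynamics
for mu in range(K):
    assert np.array_equal(np.sign(J_proj @ xi[mu]), xi[mu])

print("all", K, "patterns are fixed points of the projector couplings")
```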
