Search Results for author: Adriano Barra

Found 18 papers, 0 papers with code

Hebbian Learning from First Principles

no code implementations • 13 Jan 2024 • Linda Albanese, Adriano Barra, Pierluigi Bianco, Fabrizio Durante, Diego Pallara

Recently, the original storage prescription for the Hopfield model of neural networks -- as well as for its dense generalizations -- has been turned into a genuine Hebbian learning rule by postulating the expression of its Hamiltonian for both the supervised and unsupervised protocols.
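For orientation, here is a minimal numpy sketch of the storage prescriptions at stake: the classical Hebbian rule built from the archetypes themselves, and an unsupervised variant built only from noisy examples of them. The sizes N, K, M and the example/archetype correlation r are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 10                         # neurons, archetypes (illustrative)
xi = rng.choice([-1, 1], size=(K, N))  # binary archetypes

# Classical Hebbian storage: J_ij = (1/N) sum_mu xi_i^mu xi_j^mu.
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)               # no self-couplings

# Unsupervised protocol: the archetypes are never seen directly; the
# couplings are built from M noisy examples per archetype, each of which
# is the archetype times i.i.d. +-1 noise with correlation r.
M, r = 50, 0.8
chi = rng.choice([1, -1], p=[(1 + r) / 2, (1 - r) / 2], size=(K, M, N))
eta = (xi[:, None, :] * chi).reshape(K * M, N)   # noisy examples
J_unsup = (eta.T @ eta) / (N * K * M)
np.fill_diagonal(J_unsup, 0.0)
```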

Unsupervised and Supervised learning by Dense Associative Memory under replica symmetry breaking

no code implementations • 15 Dec 2023 • Linda Albanese, Andrea Alessandrelli, Alessia Annibale, Adriano Barra

Statistical mechanics of spin glasses is one of the main approaches toward understanding information processing in neural networks and learning machines.

Parallel Learning by Multitasking Neural Networks

no code implementations • 8 Aug 2023 • Elena Agliari, Andrea Alessandrelli, Adriano Barra, Federico Ricci-Tersenghi

A modern challenge of Artificial Intelligence is learning multiple patterns at once (i.e., parallel learning).

Statistical Mechanics of Learning via Reverberation in Bidirectional Associative Memories

no code implementations • 17 Jul 2023 • Martino Salomone Centonze, Ido Kanter, Adriano Barra

We study bidirectional associative neural networks that, exposed to noisy examples of an extensive number of random archetypes, learn the latter (with or without the presence of a teacher) when the supplied information is sufficient: in this setting, learning is heteroassociative -- involving pairs of patterns -- and it is achieved by reverberating the information extracted from the examples through the layers of the network.
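A hedged sketch of the heteroassociative (two-layer) Hebbian construction just described, with archetype pairs (xi, eta) learnt only through noisy example pairs; all names, sizes, and the normalization are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, K, M, r = 100, 80, 5, 40, 0.8      # illustrative sizes

xi = rng.choice([-1, 1], size=(K, Nx))     # archetypes on layer x
eta = rng.choice([-1, 1], size=(K, Ny))    # paired archetypes on layer y

# Heteroassociative couplings accumulated from noisy example *pairs*:
# each example equals its archetype multiplied by i.i.d. +-1 noise.
W = np.zeros((Nx, Ny))
for mu in range(K):
    ex_x = xi[mu] * rng.choice([1, -1], p=[(1 + r) / 2, (1 - r) / 2], size=(M, Nx))
    ex_y = eta[mu] * rng.choice([1, -1], p=[(1 + r) / 2, (1 - r) / 2], size=(M, Ny))
    W += ex_x.T @ ex_y / (M * np.sqrt(Nx * Ny))
```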

Dense Hebbian neural networks: a replica symmetric picture of supervised learning

no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi

We consider dense associative neural networks trained by a teacher (i.e., with supervision) and we investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations.

Retrieval
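On the numerical side, "Monte Carlo simulations" here typically means Glauber-type spin dynamics on the trained couplings; a minimal sketch, where the function name, parameters, and retrieval test are illustrative choices rather than the paper's protocol:

```python
import numpy as np

def glauber_overlap(J, xi_target, beta=2.0, sweeps=50, noise=0.2, seed=0):
    """Relax a corrupted copy of xi_target under Glauber dynamics at inverse
    temperature beta; return the Mattis overlap with the target pattern."""
    rng = np.random.default_rng(seed)
    N = len(xi_target)
    sigma = xi_target * np.where(rng.random(N) < noise, -1, 1)
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = J[i] @ sigma                              # local field
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # Glauber rule
            sigma[i] = 1 if rng.random() < p_up else -1
    return float(sigma @ xi_target) / N                   # m in [-1, 1]
```

An overlap m close to 1 signals successful retrieval; sweeping beta and the storage load traces out the phase diagram.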

Dense Hebbian neural networks: a replica symmetric picture of unsupervised learning

no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi

We consider dense associative neural networks trained with no supervision and we investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations.


Thermodynamics of bidirectional associative memories

no code implementations • 17 Nov 2022 • Adriano Barra, Giovanni Catania, Aurélien Decelle, Beatriz Seoane

Introduced by Kosko in 1988 as a generalization of the Hopfield model to a bipartite structure, the bidirectional associative memory in its simplest architecture consists of two layers of neurons, with synaptic connections only between units of different layers: even without internal connections within each layer, information storage and retrieval are still possible through the reverberation of neural activity passing from one layer to the other.

Retrieval
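A minimal sketch of the reverberating retrieval dynamics on such a bipartite architecture; the names and the deterministic zero-temperature update are illustrative choices:

```python
import numpy as np

def bam_reverberate(W, x0, steps=20):
    """Alternate zero-temperature updates between the two layers of a BAM.
    W is the (Nx, Ny) inter-layer coupling matrix; there are no couplings
    within a layer, mirroring Kosko's architecture."""
    x = x0.copy()
    for _ in range(steps):
        y = np.where(W.T @ x >= 0, 1, -1)   # activity reverberates x -> y ...
        x = np.where(W @ y >= 0, 1, -1)     # ... and back, y -> x
    return x, y                             # a retrieved pattern pair
```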

Pavlov Learning Machines

no code implementations • 2 Jul 2022 • Elena Agliari, Miriam Aquaro, Adriano Barra, Alberto Fachechi, Chiara Marullo

As is well known, Hebbian learning traces its origin to Pavlov's classical conditioning; however, while the former has been extensively modelled over the past decades (e.g., by the Hopfield model and countless variations on the theme), modelling of the latter has remained largely unaddressed so far; further, a bridge between these two pillars is entirely lacking.

Recurrent neural networks that generalize from examples and optimize by dreaming

no code implementations • 17 Apr 2022 • Miriam Aquaro, Francesco Alemanno, Ido Kanter, Fabrizio Durante, Elena Agliari, Adriano Barra

The gap between the huge volumes of data needed to train artificial neural networks and the relatively small amount of data needed by their biological counterparts is a central puzzle in machine learning.

Supervised Hebbian Learning

no code implementations • 2 Mar 2022 • Francesco Alemanno, Miriam Aquaro, Ido Kanter, Adriano Barra, Elena Agliari

In the neural-network literature, Hebbian learning traditionally refers to the procedure by which the Hopfield model and its generalizations store archetypes (i.e., definite patterns that are experienced just once to form the synaptic matrix).

Disentanglement
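A hedged sketch of the supervised protocol: the teacher groups the noisy examples by archetype, so each group can be averaged before taking the Hebbian product. Normalizations vary across these papers; this one is an illustrative choice:

```python
import numpy as np

def supervised_hebb(examples_by_class):
    """Supervised Hebbian couplings from labelled noisy examples.
    examples_by_class: list of (M, N) +-1 arrays, one per archetype."""
    N = examples_by_class[0].shape[1]
    J = np.zeros((N, N))
    for ex in examples_by_class:
        mean = ex.mean(axis=0)        # empirical proxy for the archetype
        J += np.outer(mean, mean) / N
    np.fill_diagonal(J, 0.0)          # no self-couplings
    return J
```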

The emergence of a concept in shallow neural networks

no code implementations • 1 Sep 2021 • Elena Agliari, Francesco Alemanno, Adriano Barra, Giordano De Marzo

We consider restricted Boltzmann machines (RBMs) trained over an unstructured dataset made of blurred copies of definite but unavailable ``archetypes'' and we show that there exists a critical sample size beyond which the RBM can learn the archetypes, namely the machine can successfully act as a generative model or as a classifier, according to the operational routine.
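For context, here is a minimal contrastive-divergence (CD-1) update for a binary RBM, the standard training routine for such machines; the paper's actual protocol and unit conventions may differ, and all names here are illustrative:

```python
import numpy as np

def cd1_step(W, a, b, v0, lr=0.01, rng=None):
    """One CD-1 update. W: (Nv, Nh) weights; a, b: visible/hidden biases;
    v0: (batch, Nv) binary data batch. Updates W, a, b in place."""
    rng = rng or np.random.default_rng()
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    ph0 = sigmoid(v0 @ W + b)                        # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                      # one Gibbs step back
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)                        # negative phase

    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b
```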

A statistical-inference approach to reconstruct inter-cellular interactions in cell-migration experiments

no code implementations • 4 Dec 2019 • Elena Agliari, Pablo J. Sáez, Adriano Barra, Matthieu Piel, Pablo Vargas, Michele Castellana

In the first experiment, cells migrate in a wound-healing model: when applied to this experiment, the inference method predicts the existence of cell-cell interactions, correctly mirroring the strong intercellular contacts present in the experiment.

Neural networks with redundant representation: detecting the undetectable

no code implementations • 28 Nov 2019 • Elena Agliari, Francesco Alemanno, Adriano Barra, Martino Centonze, Alberto Fachechi

We consider a three-layer Sejnowski machine and show that features learnt via contrastive divergence have a dual representation as patterns in a dense associative memory of order P=4.
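For reference, a dense associative memory of order $P$ replaces the pairwise Hopfield energy with a $P$-body one; a common (convention-dependent) writing is

$$ E_P(\sigma) = -\frac{1}{N^{P-1}} \sum_{\mu=1}^{K} \Big( \sum_{i=1}^{N} \xi_i^{\mu} \sigma_i \Big)^{P}, $$

so the $P=4$ duality above means that the learnt features effectively induce four-body interactions among the visible neurons.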

Dreaming neural networks: rigorous results

no code implementations • 21 Dec 2018 • Elena Agliari, Francesco Alemanno, Adriano Barra, Alberto Fachechi

Recently, a daily routine for associative neural networks has been proposed: the network learns in a Hebbian fashion during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, it optimizes information storage by consolidating pure patterns and removing spurious ones: this forces the synaptic matrix to collapse to the projector matrix (ultimately approaching the Kanter-Sompolinsky model).

Retrieval
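A sketch of the end point of that sleep dynamics: the Hebbian matrix versus the projector onto the span of the stored patterns (the Kanter-Sompolinsky coupling), computed here directly via the pattern correlation matrix; names are illustrative:

```python
import numpy as np

def hebb_and_projector(xi):
    """xi: (K, N) array of +-1 patterns. Returns the Hebbian couplings and
    the projector couplings onto span{xi^1, ..., xi^K}."""
    K, N = xi.shape
    J_hebb = xi.T @ xi / N
    C = xi @ xi.T / N                          # (K, K) pattern correlations
    J_proj = xi.T @ np.linalg.inv(C) @ xi / N  # projector (pseudo-inverse) rule
    for J in (J_hebb, J_proj):
        np.fill_diagonal(J, 0.0)
    return J_hebb, J_proj
```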

Dreaming neural networks: forgetting spurious memories and reinforcing pure ones

no code implementations • 29 Oct 2018 • Alberto Fachechi, Elena Agliari, Adriano Barra

The standard Hopfield model for associative neural networks accounts for biological Hebbian learning and acts as the harmonic oscillator of pattern recognition; however, its maximal storage capacity is $\alpha \sim 0.14$, far from the theoretical bound for symmetric networks, i.e. $\alpha = 1$.

A relativistic extension of Hopfield neural networks via the mechanical analogy

no code implementations • 5 Jan 2018 • Adriano Barra, Matteo Beccaria, Alberto Fachechi

We propose a modification of the cost function of the Hopfield model whose salient features shine in its Taylor expansion, resulting in more-than-pairwise interactions with alternating signs and suggesting a unified framework for dealing with both deep learning and network pruning.

Network Pruning
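The alternating signs can be seen from the expansion itself; writing $m_\mu = \frac{1}{N}\sum_i \xi_i^\mu \sigma_i$ for the Mattis overlaps and $x^2 = \sum_\mu m_\mu^2$, a schematic (normalizations omitted) form of the relativistic cost function reads

$$ H_{\mathrm{rel}} \propto -\sqrt{1 + x^2} = -1 - \frac{x^2}{2} + \frac{x^4}{8} - \frac{x^6}{16} + \cdots, $$

where the quadratic term recovers the pairwise Hopfield interactions and the higher-order terms enter with alternating signs.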

Phase Diagram of Restricted Boltzmann Machines and Generalised Hopfield Networks with Arbitrary Priors

no code implementations • 20 Feb 2017 • Adriano Barra, Giuseppe Genovese, Peter Sollich, Daniele Tantari

Restricted Boltzmann Machines are described by the Gibbs measure of a bipartite spin glass, which in turn corresponds to that of a generalised Hopfield network.

Retrieval
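The correspondence is most transparent for Gaussian hidden units, which can be integrated out exactly:

$$ \int \prod_{\mu} \frac{dz_\mu\, e^{-z_\mu^2/2}}{\sqrt{2\pi}} \exp\Big( \frac{\beta}{\sqrt{N}} \sum_{i,\mu} \xi_i^\mu \sigma_i z_\mu \Big) = \exp\Big( \frac{\beta^2}{2N} \sum_{\mu} \Big( \sum_i \xi_i^\mu \sigma_i \Big)^2 \Big), $$

i.e. the bipartite Gibbs weight marginalizes to the Hopfield one; generic (non-Gaussian) priors deform the right-hand side accordingly.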

Phase transitions in Restricted Boltzmann Machines with generic priors

no code implementations • 9 Dec 2016 • Adriano Barra, Giuseppe Genovese, Peter Sollich, Daniele Tantari

We study Generalised Restricted Boltzmann Machines with generic priors for units and weights, interpolating between Boolean and Gaussian variables.

Retrieval
