Search Results for author: Julie Grollier

Found 12 papers, 5 papers with code

Unsupervised End-to-End Training with a Self-Defined Bio-Inspired Target

no code implementations • 18 Mar 2024 • Dongshu Liu, Jérémie Laydevant, Adrien Pontlevy, Damien Querlioz, Julie Grollier

Current unsupervised learning methods either depend on end-to-end training via deep learning techniques such as self-supervised learning, which carry high computational requirements, or employ layer-by-layer training via bio-inspired approaches such as Hebbian learning, whose local learning rules are incompatible with supervised learning.
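The Hebbian rule mentioned above is local: each synapse's update depends only on the activity of the two neurons it connects. A minimal illustrative sketch (not the paper's method; the learning rate and Oja-style normalization are assumptions):

```python
import numpy as np

def hebbian_step(W, x, lr=0.01):
    """One local Hebbian update: each weight change uses only pre- and post-synaptic activity."""
    y = W @ x                                       # post-synaptic activations
    W = W + lr * np.outer(y, x)                     # Hebb: strengthen co-active connections
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # Oja-style normalization keeps weights bounded
    return W
```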

Self-Supervised Learning

Training an Ising Machine with Equilibrium Propagation

1 code implementation • 22 May 2023 • Jérémie Laydevant, Danijela Marković, Julie Grollier

Ising machines, which are hardware implementations of the Ising model of coupled spins, have been influential in the development of unsupervised learning algorithms at the origins of Artificial Intelligence (AI).
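For reference, the Ising model assigns a spin configuration s ∈ {−1, +1}^N the energy E(s) = −½ sᵀJs − hᵀs, and an Ising machine physically searches for low-energy configurations. A minimal sketch of this energy (the couplings J and fields h are illustrative placeholders):

```python
import numpy as np

def ising_energy(s, J, h):
    """Energy of a spin configuration s in {-1, +1}^N.

    J: symmetric coupling matrix with zero diagonal; h: local fields.
    """
    return -0.5 * s @ J @ s - h @ s   # factor 0.5: the quadratic form counts each pair twice
```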

Classification of multi-frequency RF signals by extreme learning, using magnetic tunnel junctions as neurons and synapses

no code implementations • 2 Nov 2022 • Nathan Leroux, Danijela Marković, Dédalo Sanz-Hernández, Juan Trastoy, Paolo Bortolotti, Alejandro Schulman, Luana Benetti, Alex Jenkins, Ricardo Ferreira, Julie Grollier, Alice Mizrahi

Extracting information from radiofrequency (RF) signals using artificial neural networks at low energy cost is a critical need for a wide range of applications from radars to health.
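The extreme-learning setup named in the title trains only a linear readout on top of fixed nonlinear features, here provided by the magnetic tunnel junctions. A minimal software analogue with a random projection standing in for the devices (the dimensions and ridge parameter are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 512
W_in = rng.standard_normal((n_hidden, n_in))    # fixed, untrained random projection ("neurons")

def features(X):
    return np.tanh(X @ W_in.T)                  # fixed nonlinear feature map

def fit_readout(X, Y, reg=1e-3):
    """Extreme learning: solve a ridge regression for the linear readout only."""
    H = features(X)
    return np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
```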

Forecasting the outcome of spintronic experiments with Neural Ordinary Differential Equations

1 code implementation • 23 Jul 2021 • Xing Chen, Flavio Abreu Araujo, Mathieu Riou, Jacob Torrejon, Dafiné Ravelosona, Wang Kang, Weisheng Zhao, Julie Grollier, Damien Querlioz

Here we show that a dynamical neural network, trained on a minimal amount of data, can predict the behavior of spintronic devices with high accuracy and an extremely efficient simulation time, compared to the micromagnetic simulations that are usually employed to model them.
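Such a dynamical network can be cast as a Neural ODE: the device state y(t) evolves as dy/dt = f_θ(t, y), with f_θ a small trained network. A minimal sketch using PyTorch with the torchdiffeq package (state size and network width are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solver

class DeviceDynamics(nn.Module):
    """Learned right-hand side f_theta(t, y) of the device's equation of motion."""
    def __init__(self, dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, width), nn.Tanh(), nn.Linear(width, dim))

    def forward(self, t, y):
        return self.net(y)

f = DeviceDynamics()
y0 = torch.zeros(1, 2)                # initial device state (illustrative)
t = torch.linspace(0.0, 1.0, 100)
trajectory = odeint(f, y0, t)         # integrate dy/dt = f(t, y); gradients flow through the solve
```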

Training Dynamical Binary Neural Networks with Equilibrium Propagation

1 code implementation • CVPR Workshop Binary Vision 2021 • Jérémie Laydevant, Maxence Ernoult, Damien Querlioz, Julie Grollier

We first train systems with binary weights and full-precision activations, achieving an accuracy equivalent to that of full-precision models trained by standard EP on MNIST, and losing only 1.9% accuracy on CIFAR-10 with equal architecture.

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

no code implementations • 14 Jan 2021 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz

Equilibrium Propagation (EP) is a biologically inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning.
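Concretely, EP estimates gradients from two equilibria of an energy function E: a free phase and a phase weakly nudged toward the target with strength β. In the standard formulation (notation follows the original EP papers, not necessarily this one),

\Delta\theta \propto -\frac{1}{\beta}\left(\frac{\partial E}{\partial \theta}\Big|_{s_*^{\beta}} - \frac{\partial E}{\partial \theta}\Big|_{s_*^{0}}\right),

where s_*^0 and s_*^β are the free and nudged steady states; each weight's update involves only quantities available at that synapse, which is the spatial locality referred to above.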

EqSpike: Spike-driven Equilibrium Propagation for Neuromorphic Implementations

no code implementations • 15 Oct 2020 • Erwann Martin, Maxence Ernoult, Jérémie Laydevant, Shuai Li, Damien Querlioz, Teodora Petrisor, Julie Grollier

Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge.

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

1 code implementation • 6 Jun 2020 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz

In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP.
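The bias in question comes from one-sided nudging with finite β. The remedy is a symmetric estimate taken between two oppositely nudged equilibria, which cancels the leading-order bias (Φ denotes the primitive function of the dynamics in the usual EP notation):

\widehat{\nabla}_{\theta} = \frac{1}{2\beta}\left(\frac{\partial \Phi}{\partial \theta}\Big|_{s_*^{+\beta}} - \frac{\partial \Phi}{\partial \theta}\Big|_{s_*^{-\beta}}\right) = \nabla_{\theta}\mathcal{L} + O(\beta^{2}).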

Equilibrium Propagation with Continual Weight Updates

no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available.
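Making the rule local in time means updating each weight throughout the second phase instead of once at its end; schematically, at each step t of the nudged phase,

\Delta\theta_t \propto \frac{1}{\beta}\left(\frac{\partial \Phi}{\partial \theta}(s_{t+1}) - \frac{\partial \Phi}{\partial \theta}(s_t)\right),

so each increment uses only the current and previous states, and, up to the effect of the weights changing during the phase, the increments telescope to the standard end-of-phase EP update.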

Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation

no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available.

Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

2 code implementations • NeurIPS 2019 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e. RNNs that are fed by a static input x and settle to a steady state.
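A convergent RNN in this sense iterates its dynamics under a fixed input until the state stops changing. A minimal sketch of this free-phase relaxation (the update map is illustrative; EP additionally requires the dynamics to derive from an energy, e.g. via symmetric weights):

```python
import numpy as np

def relax(W, U, x, steps=200, tol=1e-6):
    """Iterate s <- tanh(W s + U x) with static input x until a steady state is reached."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        s_new = np.tanh(W @ s + U @ x)       # one step of the recurrent dynamics
        if np.linalg.norm(s_new - s) < tol:
            return s_new                     # converged: the free-phase fixed point
        s = s_new
    return s
```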
