no code implementations • 16 Aug 2020 • Brian Gardner, André Grüning
The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, yet remains stable by relying on first-spike responses generated by a deterministic output layer.
no code implementations • 2 Oct 2019 • Hyeryung Jang, Osvaldo Simeone, Brian Gardner, André Grüning
The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by energy-efficient hardware implementations, which can offer significant energy reductions as compared to conventional artificial neural networks (ANNs).
no code implementations • 10 Dec 2018 • Hyeryung Jang, Osvaldo Simeone, Brian Gardner, André Grüning
This paper aims to provide an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules leveraging the unique time encoding capabilities of SNNs.
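The probabilistic methodology referred to above treats a neuron's spiking as a random process, so learning rules follow directly from likelihood gradients. As a minimal sketch (not the paper's actual model), one can take a discrete-time neuron that spikes with Bernoulli probability given by a sigmoid of its membrane potential; the function and parameter names below are hypothetical:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def ml_update(w, pre_spikes, target_spikes, lr=0.1, bias=-2.0):
    """One maximum-likelihood pass for a discrete-time probabilistic neuron.

    pre_spikes: list over time bins of binary input vectors.
    target_spikes: binary target spike train of the same length.
    The neuron spikes with probability sigmoid(u_t), where u_t is the
    bias plus the weighted sum of current presynaptic spikes.  The
    Bernoulli log-likelihood gradient per bin is (s_t - p_t) * x_t.
    """
    for x, s in zip(pre_spikes, target_spikes):
        u = bias + sum(wi * xi for wi, xi in zip(w, x))
        p = sigmoid(u)
        for i, xi in enumerate(x):
            w[i] += lr * (s - p) * xi
    return w
```

Repeating this update strengthens weights from inputs that predict target spikes and weakens those that predict silence, which is the basic shape such derived rules take.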
no code implementations • 20 Feb 2017 • Joseph Chrol-Cannon, Yaochu Jin, André Grüning
This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation.
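Capturing a polychronous pattern amounts to detecting a precise, repeating sequence of spikes with fixed inter-spike lags across neurons. A minimal detector under that reading (data layout and tolerance are illustrative assumptions, not the paper's model) might look like:

```python
def matches_pattern(spikes, pattern, tol=1.0):
    """Check whether a spike raster contains a precise spike sequence.

    spikes: dict mapping neuron id -> sorted list of spike times (ms).
    pattern: list of (neuron_id, lag) pairs, lags relative to the
             first (anchor) spike of the pattern.
    Returns True if some anchor time t0 exists such that every other
    neuron in the pattern fired within `tol` ms of t0 + lag.
    """
    anchor_id, _ = pattern[0]
    for t0 in spikes.get(anchor_id, []):
        if all(any(abs(t - (t0 + lag)) <= tol
                   for t in spikes.get(nid, []))
               for nid, lag in pattern[1:]):
            return True
    return False
```

Scanning anchor spikes keeps the check linear in the anchor neuron's spike count, at the cost of re-searching each partner train per anchor.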
no code implementations • 14 Jan 2016 • Brian Gardner, André Grüning
We also find FILT to be the most efficient at memorising input patterns, most notably when patterns are identified using spikes with sub-millisecond temporal precision.
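Rules in the FILT family compare exponentially filtered versions of spike trains rather than raw spike times. A minimal sketch of such a filtered comparison, written here as a van Rossum-style distance (parameters and normalisation are illustrative, not the paper's exact definition):

```python
import math

def filtered_trace(spike_times, T=100.0, dt=0.1, tau=10.0):
    """Convolve a spike train with a causal exponential kernel exp(-t/tau)."""
    n = int(T / dt)
    trace = [0.0] * n
    for k in range(n):
        t = k * dt
        for s in spike_times:
            if s <= t:
                trace[k] += math.exp(-(t - s) / tau)
    return trace

def van_rossum_distance(a, b, T=100.0, dt=0.1, tau=10.0):
    """Squared-error distance between two exponentially filtered spike trains."""
    fa = filtered_trace(a, T, dt, tau)
    fb = filtered_trace(b, T, dt, tau)
    return dt / tau * sum((x - y) ** 2 for x, y in zip(fa, fb))
```

Because the kernel smears each spike in time, small timing errors produce small distances, which is what makes gradient-style learning on spike timings tractable.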
no code implementations • 31 Mar 2015 • Brian Gardner, Ioana Sporea, André Grüning
Information encoding in the nervous system is supported through the precise spike-timings of neurons; however, the underlying processes by which such representations are formed in the first place remain poorly understood.
no code implementations • 10 Feb 2012 • Ioana Sporea, André Grüning
The current article introduces a supervised learning algorithm for multilayer spiking neural networks.
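Supervised spike-timing rules of the kind extended here to multiple layers typically drive each weight with the difference between target and actual output spike trains, gated by a decaying trace of recent presynaptic activity. A ReSuMe-flavoured single-weight sketch under that assumption (all names and constants hypothetical, not the article's algorithm):

```python
import math

def supervised_update(w, pre, target, actual, lr=0.01, tau=5.0, dt=1.0):
    """Weight change driven by the target-vs-actual output spike error.

    pre, target, actual: binary spike trains over discrete time bins.
    An exponentially decaying eligibility trace remembers recent
    presynaptic spikes; the output error (s_tgt - s_act) converts that
    memory into a signed weight change.
    """
    trace = 0.0
    for x, s_tgt, s_act in zip(pre, target, actual):
        trace = trace * math.exp(-dt / tau) + x
        w += lr * (s_tgt - s_act) * trace
    return w
```

When the output already matches the target the error term vanishes and the weight is left untouched, which is what makes such rules converge once the desired spike train is produced.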