Search Results for author: Hava T. Siegelmann

Found 13 papers, 4 papers with code

Hidden Traveling Waves bind Working Memory Variables in Recurrent Neural Networks

no code implementations • 15 Feb 2024 • Arjun Karuvally, Terrence J. Sejnowski, Hava T. Siegelmann

Traveling waves are a fundamental phenomenon in the brain, playing a crucial role in short-term information storage.

Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks

no code implementations • 3 Oct 2023 • Arjun Karuvally, Peter DelMastro, Hava T. Siegelmann

Utilizing the EMT, we formulate a mathematically rigorous circuit that facilitates variable binding in these tasks.

On the Dynamics of Learning Time-Aware Behavior with Recurrent Neural Networks

no code implementations • 12 Jun 2023 • Peter DelMastro, Rushiv Arora, Edward Rietman, Hava T. Siegelmann

In this way, we demonstrate how dynamical systems theory can provide insights into not only the learned representations of these models, but also the dynamics of the learning process itself.

Energy-based General Sequential Episodic Memory Networks at the Adiabatic Limit

no code implementations • 11 Dec 2022 • Arjun Karuvally, Terry J. Sejnowski, Hava T. Siegelmann

We introduce a new class of General Sequential Episodic Memory Models (GSEMM) that, in the adiabatic limit, exhibit a temporally changing energy surface, leading to a series of metastable states that act as sequential episodic memories.
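The paper's energy-based model is not reproduced here, but the basic idea of sequential memory recall has a classical precursor worth sketching: an asymmetric Hebbian weight matrix in which each stored pattern drives the network toward the next one. Everything below (the variable names `xi`, `W`, the network size) is illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                                   # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N)).astype(float)

# Asymmetric Hebbian weights: pattern mu drives pattern mu + 1,
# so the stored memories are visited as a sequence rather than fixed points.
W = sum(np.outer(xi[mu + 1], xi[mu]) for mu in range(P - 1)) / N

# One synchronous update from pattern 0 should land close to pattern 1.
s = np.sign(W @ xi[0])
overlap = (s @ xi[1]) / N                       # 1.0 means perfect recall
```

With random, nearly orthogonal patterns the crosstalk term is O(1/sqrt(N)), so the overlap with the next pattern stays close to 1.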

Signal Propagation: A Framework for Learning and Inference In a Forward Pass

1 code implementation • 4 Apr 2022 • Adam Kohan, Edward A. Rietman, Hava T. Siegelmann

To further support relevance to biological and hardware learning, we use sigprop to train continuous time neural networks with Hebbian updates, and train spiking neural networks with only the voltage or with biologically and hardware compatible surrogate functions.
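As a rough illustration of the kind of Hebbian update mentioned above (not sigprop's actual learning rule), a forward-only local update changes each weight in proportion to the product of pre- and post-synaptic activity, with no backpropagated error signal. The function name `hebbian_step` and all sizes are assumptions for the sketch.

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.01):
    """Forward-only local update: outer product of post- and pre-synaptic
    activity; no gradient is propagated backward through the network."""
    return w + lr * np.outer(post, pre)

rng = np.random.default_rng(1)
w0 = rng.normal(scale=0.1, size=(4, 8))   # 8 inputs -> 4 output units
x = rng.normal(size=8)                     # pre-synaptic activity
y = np.tanh(w0 @ x)                        # post-synaptic activity (forward pass)
w1 = hebbian_step(w0, x, y)                # strengthen co-active connections
```

The update adds `lr * y * (x . x)` to each unit's pre-activation, i.e. it pushes responses further in the direction the unit already fired, which is the defining property of a plain Hebbian rule.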

Replay in Deep Learning: Current Approaches and Missing Biological Elements

no code implementations • 1 Apr 2021 • Tyler L. Hayes, Giri P. Krishnan, Maxim Bazhenov, Hava T. Siegelmann, Terrence J. Sejnowski, Christopher Kanan

Replay is the reactivation of one or more neural patterns, which are similar to the activation patterns experienced during past waking experiences.

Tasks: Retrieval
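The deep-learning counterpart of this reactivation is the experience replay buffer: store past experiences and periodically re-present random minibatches of them to the learner. A minimal sketch (the class name and capacity are illustrative):

```python
import random
from collections import deque

class ReplayBuffer:
    """Store past experiences and 'reactivate' random minibatches of them,
    loosely analogous to the neural replay described above."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest entries evicted first

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for step in range(150):            # beyond capacity, early experiences drop out
    buf.add(("state", step))
batch = buf.sample(8)              # a random minibatch of stored experiences
```

Uniform random sampling is the simplest policy; the survey's point is precisely that biological replay is far more selective than this.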

Minibatch Processing in Spiking Neural Networks

1 code implementation • 5 Sep 2019 • Daniel J. Saunders, Cooper Sigrist, Kenneth Chaney, Robert Kozma, Hava T. Siegelmann

To our knowledge, this is the first general-purpose implementation of minibatch processing in a spiking neural network simulator that works with arbitrary neuron and synapse models.
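The core trick of minibatch SNN simulation is to carry a leading batch dimension through the neuron state update, so one simulation step advances all examples at once. A sketch with leaky integrate-and-fire (LIF) neurons; all parameter values and names here are illustrative, not the paper's implementation:

```python
import numpy as np

def lif_step(v, spikes_in, w, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """One Euler step of LIF neurons for a whole minibatch at once.
    v: (batch, n) membrane voltages; spikes_in: (batch, n_in) binary spikes."""
    i_syn = spikes_in @ w.T                        # synaptic input current
    v = v + dt * (-(v - v_reset) / tau) + i_syn    # leak toward rest, add input
    spikes_out = (v >= v_thresh).astype(float)     # threshold crossing -> spike
    v = np.where(spikes_out > 0, v_reset, v)       # reset neurons that fired
    return v, spikes_out

batch, n_in, n = 32, 10, 5
rng = np.random.default_rng(2)
w = rng.uniform(0, 0.3, size=(n, n_in))
v = np.zeros((batch, n))
for t in range(50):
    spikes_in = (rng.random((batch, n_in)) < 0.1).astype(float)  # random input
    v, spikes = lif_step(v, spikes_in, w)
```

Because the update is elementwise plus one matrix product, the batch dimension is essentially free on vectorized hardware, which is what makes minibatching worthwhile.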

Locally Connected Spiking Neural Networks for Unsupervised Feature Learning

no code implementations • 12 Apr 2019 • Daniel J. Saunders, Devdhar Patel, Hananel Hazan, Hava T. Siegelmann, Robert Kozma

In recent years, spiking neural networks (SNNs) have demonstrated great success on various machine learning tasks.

Tasks: General Classification

STDP Learning of Image Patches with Convolutional Spiking Neural Networks

no code implementations • 24 Aug 2018 • Daniel J. Saunders, Hava T. Siegelmann, Robert Kozma, Miklós Ruszinkó

Spiking neural networks are motivated by principles of neural systems and may possess unexplored advantages in the context of machine learning.

Tasks: BIG-bench Machine Learning

Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning

no code implementations • 9 Aug 2018 • Adam A. Kohan, Edward A. Rietman, Hava T. Siegelmann

This mechanism, Error Forward-Propagation, is a plausible basis for how error feedback occurs deep in the brain independently of, and yet in support of, the functionality underlying intricate network architectures.

Unsupervised Learning with Self-Organizing Spiking Neural Networks

no code implementations • 24 Jul 2018 • Hananel Hazan, Daniel J. Saunders, Darpan T. Sanghavi, Hava T. Siegelmann, Robert Kozma

We present a system that hybridizes self-organizing map (SOM) properties with spiking neural networks (SNNs), retaining many of the features of SOMs.

Tasks: General Classification
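The SOM half of that hybrid is easy to sketch. A classic self-organizing map step finds the best-matching unit (BMU) for an input and then pulls the BMU and its map neighbours toward that input, weighted by a neighbourhood kernel. This is the standard SOM rule, not the paper's spiking variant; the map size and parameters are illustrative.

```python
import numpy as np

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One SOM step on a 1-D map: find the best-matching unit, then pull
    it and its neighbours toward the input, weighted by map distance."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    dist = np.abs(np.arange(len(weights)) - bmu)     # distance along the map
    h = np.exp(-dist**2 / (2 * sigma**2))            # neighbourhood kernel
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(3)
weights = rng.normal(size=(10, 2))    # 10 map units, 2-D inputs
x = np.array([0.5, -0.5])
new_w = som_update(weights, x)
```

The neighbourhood kernel is what gives SOMs their topology-preserving character: nearby map units learn similar prototypes.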

BindsNET: A machine learning-oriented spiking neural networks library in Python

1 code implementation • 4 Jun 2018 • Hananel Hazan, Daniel J. Saunders, Hassaan Khan, Darpan T. Sanghavi, Hava T. Siegelmann, Robert Kozma

In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared towards machine learning and reinforcement learning.

Tasks: BIG-bench Machine Learning, Neural Network simulation, +3

Support Vector Clustering

1 code implementation • JMLR 2001 • Asa Ben-Hur, David Horn, Hava T. Siegelmann, Vladimir Vapnik

We present a novel clustering method using the approach of support vector machines.

Tasks: Clustering
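In support vector clustering, data points are mapped to a kernel feature space and enclosed in a minimal sphere; points on the sphere boundary delimit cluster contours back in input space. The sketch below illustrates only the kernel-distance computation, and for simplicity measures distance to the feature-space centroid with uniform coefficients, whereas the actual method solves a quadratic program for the minimal enclosing sphere. All names and the kernel width `q` are assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, q=1.0):
    """K(a, b) = exp(-q * ||a - b||^2), broadcasting over leading axes."""
    return np.exp(-q * np.sum((a - b) ** 2, axis=-1))

def feature_space_distance(x, X, q=1.0):
    """Squared distance from phi(x) to the feature-space centroid of X.
    (Full SVC replaces the uniform mean with QP-derived coefficients.)"""
    k_xx = 1.0                                              # K(x, x) is 1 here
    k_xX = gaussian_kernel(X, x, q).mean()                  # mean K(x_j, x)
    k_XX = gaussian_kernel(X[:, None, :], X[None, :, :], q).mean()
    return k_xx - 2 * k_xX + k_XX

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 2))          # one Gaussian blob of data
inside = feature_space_distance(np.zeros(2), X)       # point near the data
outside = feature_space_distance(np.array([5.0, 5.0]), X)  # point far away
```

Points far from the data map far from the enclosing region in feature space, which is the signal SVC thresholds to trace cluster boundaries.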
