Search Results for author: Fathi M. Salem

Found 13 papers, 2 papers with code

Slim LSTM networks: LSTM_6 and LSTM_C6

no code implementations · 18 Jan 2019 · Atra Akandeh, Fathi M. Salem

We have shown previously that our parameter-reduced variants of Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN) are comparable in performance to the standard LSTM RNN on the MNIST dataset.

Performance of Three Slim Variants of The Long Short-Term Memory (LSTM) Layer

no code implementations · 2 Jan 2019 · Daniel Kent, Fathi M. Salem

The Long Short-Term Memory (LSTM) layer is an important advancement in the field of neural networks and machine learning, allowing for effective training and impressive inference performance.

Translation

SLIM LSTMs

no code implementations · 29 Dec 2018 · Fathi M. Salem

Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) rely on gating signals, each driven by a function of a weighted sum of at least three components: (i) one adaptive weight matrix multiplied by the incoming external input vector sequence, (ii) one adaptive weight matrix multiplied by the previous memory/state vector, and (iii) one adaptive bias vector.
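
For concreteness, the gating form described above can be written out for, say, the forget gate; the notation below is a common LSTM convention and is not quoted from the paper:

```latex
% Forget gate of a standard LSTM in a common notation (not quoted from the paper):
f_t = \sigma\left( W_f\, x_t + U_f\, h_{t-1} + b_f \right)
% W_f and U_f are the adaptive weight matrices acting on the external input x_t
% and the previous memory/state vector h_{t-1}, and b_f is the adaptive bias vector;
% the SLIM variants drop one or more of these three terms from each gating signal.
```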

Simplified Long Short-term Memory Recurrent Neural Networks: part III

no code implementations · 14 Jul 2017 · Atra Akandeh, Fathi M. Salem

In this part III paper, we present and evaluate two new LSTM model variants which dramatically reduce the computational load while retaining comparable performance to the base (standard) LSTM RNNs.

Simplified Long Short-term Memory Recurrent Neural Networks: part I

no code implementations · 14 Jul 2017 · Atra Akandeh, Fathi M. Salem

We present five variants of the standard Long Short-term Memory (LSTM) recurrent neural networks by uniformly reducing blocks of adaptive parameters in the gating mechanisms.

Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks

1 code implementation · 20 Jan 2017 · Rahul Dey, Fathi M. Salem

The paper evaluates three variants of the Gated Recurrent Unit (GRU) in recurrent neural networks (RNN) by reducing parameters in the update and reset gates.

Model Optimization
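
As a rough illustration of what removing gate parameters looks like, below is a minimal NumPy sketch of a single GRU step whose update and reset gates keep only the state-driven weight and the bias. This particular reduction, and all variable names, are illustrative assumptions, not the paper's released code (which is linked above).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reduced_gru_step(x_t, h_prev, p):
    """One GRU step with the input-driven weight block removed from both gates."""
    z = sigmoid(p["Uz"] @ h_prev + p["bz"])            # update gate: state + bias only
    r = sigmoid(p["Ur"] @ h_prev + p["br"])            # reset gate: state + bias only
    h_tilde = np.tanh(p["Wh"] @ x_t + p["Uh"] @ (r * h_prev) + p["bh"])  # full candidate
    return (1.0 - z) * h_prev + z * h_tilde            # blend old state and candidate

# Hypothetical sizes: input dim 8, hidden dim 16.
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
p = {"Uz": 0.1 * rng.standard_normal((d_h, d_h)), "bz": np.zeros(d_h),
     "Ur": 0.1 * rng.standard_normal((d_h, d_h)), "br": np.zeros(d_h),
     "Wh": 0.1 * rng.standard_normal((d_h, d_in)),
     "Uh": 0.1 * rng.standard_normal((d_h, d_h)), "bh": np.zeros(d_h)}
h_next = reduced_gru_step(rng.standard_normal(d_in), np.zeros(d_h), p)
print(h_next.shape)  # (16,)
```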

Simplified Minimal Gated Unit Variations for Recurrent Neural Networks

no code implementations · 12 Jan 2017 · Joel Heck, Fathi M. Salem

Recurrent neural networks with various types of hidden units have been used to solve a diverse range of problems involving sequence data.

Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks

1 code implementation · 12 Jan 2017 · Yuzhen Lu, Fathi M. Salem

The standard LSTM recurrent neural networks, while very powerful in long-range dependency sequence applications, have a highly complex structure and relatively large (adaptive) parameters.

A Basic Recurrent Neural Network Model

no code implementations · 29 Dec 2016 · Fathi M. Salem

We present a model of a basic recurrent neural network (or bRNN) that includes a separate linear term with a slightly "stable" fixed matrix to guarantee bounded solutions and fast dynamic response.

Time Series · Time Series Analysis
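
A hedged sketch of a state update consistent with that description follows; the exact equation and notation are assumptions, not taken from the paper:

```latex
% Hypothetical bRNN-style update consistent with the abstract (form assumed, not quoted):
h_t = A\, h_{t-1} + \tanh\left( W x_t + U h_{t-1} + b \right)
% A is the separate, fixed, slightly "stable" matrix (e.g. spectral radius below 1),
% which keeps the state bounded because the tanh term is itself bounded;
% W, U, and b are the adaptive parameters trained as usual.
```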

A Blind Adaptive CDMA Receiver Based on State Space Structures

no code implementations · 1 Aug 2014 · Zaid Albataineh, Fathi M. Salem

Code Division Multiple Access (CDMA) is a channel access method, based on spread-spectrum technology, used by various radio technologies world-wide.

A RobustICA Based Algorithm for Blind Separation of Convolutive Mixtures

no code implementations · 1 Aug 2014 · Zaid Albataineh, Fathi M. Salem

Furthermore, we study the impact of several parameters on separation performance, e.g., the overlapping ratio and the window type of the frequency-domain method.

blind source separation
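
As a small illustration of where the overlapping ratio and window type mentioned above enter a frequency-domain pipeline, here is a sketch of the STFT front end only; the separation algorithm itself is not shown, and the parameter values are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 16000
x = np.random.randn(2, fs)          # two hypothetical mixture channels, 1 s each
nperseg = 512                        # analysis frame length
overlap_ratio = 0.75                 # overlapping ratio (a parameter studied in the paper)
window = "hann"                      # window type (a parameter studied in the paper)

# STFT of each mixture channel; a frequency-domain BSS method would then separate
# sources per frequency bin and resolve permutation/scaling ambiguities.
f, t, X = stft(x, fs=fs, window=window, nperseg=nperseg,
               noverlap=int(overlap_ratio * nperseg))
print(X.shape)                       # (channels, frequency bins, frames)
```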
