Search Results for author: Nader Bagherzadeh

Found 7 papers, 2 papers with code

Support for Stock Trend Prediction Using Transformers and Sentiment Analysis

no code implementations • 18 May 2023 • Harsimrat Kaeley, Ye Qiao, Nader Bagherzadeh

However, due to the limitations of RNNs, such as vanishing gradients and the loss of long-term dependencies as sequence length increases, in this paper we develop a Transformer-based model that uses technical stock data and sentiment analysis to conduct accurate stock trend prediction over long time windows.

Sentiment Analysis Stock Prediction +3
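A minimal sketch of the kind of encoder-only Transformer the abstract describes, written in PyTorch. The feature layout (technical indicators plus a per-day sentiment score), dimensions, and class count are illustrative assumptions, not the paper's actual configuration; positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn

class TrendTransformer(nn.Module):
    """Encoder-only Transformer for up/down trend classification over a long window."""
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)        # per-day features -> model dim
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)               # trend logits

    def forward(self, x):
        # x: (batch, window_length, n_features) -- e.g. technical indicators with a
        # per-day sentiment score as the last feature column.
        h = self.encoder(self.input_proj(x))
        return self.head(h.mean(dim=1))                         # pool over the time window

logits = TrendTransformer()(torch.randn(4, 120, 8))             # 4 windows of 120 trading days
```

Unlike an RNN, self-attention connects every pair of days in the window directly, which is why long-range dependencies are not lost as the window grows.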

Stock Trend Prediction: A Semantic Segmentation Approach

no code implementations • 9 Mar 2023 • Shima Nabiee, Nader Bagherzadeh

However, semantic segmentation and its well-designed fully convolutional networks have never been applied to time-series dense classification.

Semantic Segmentation Stock Trend Prediction +1
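A minimal sketch of what dense (per-timestep) classification with a fully convolutional network looks like on a time series, by analogy to semantic segmentation where every pixel gets a label. Channel counts, kernel sizes, and the five-channel price-series input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FCN1d(nn.Module):
    """1-D fully convolutional network: one trend label per timestep."""
    def __init__(self, in_channels=5, n_classes=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(64, n_classes, kernel_size=1)   # per-timestep logits

    def forward(self, x):
        # x: (batch, in_channels, time) -- the output keeps the time axis, so every
        # timestep is classified, just as every pixel is in semantic segmentation.
        return self.classifier(self.body(x))

labels = FCN1d()(torch.randn(2, 5, 256)).argmax(dim=1)   # (2, 256) per-timestep trend classes
```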

A Two-Stage Efficient 3-D CNN Framework for EEG Based Emotion Recognition

no code implementations • 26 Jul 2022 • Ye Qiao, Mohammed Alnemari, Nader Bagherzadeh

This paper proposes a novel two-stage framework for emotion recognition using EEG data that outperforms state-of-the-art models while keeping the model small and computationally efficient.

EEG Emotion Recognition
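The abstract above names the technique but not its internals, so the following is illustrative only: a small 3-D convolutional network over EEG arranged as an (electrode-row, electrode-column, time) volume. The paper's two-stage design, tensor layout, and layer sizes may differ.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """3-D CNN over an EEG volume: spatial electrode grid x time."""
    def __init__(self, n_emotions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_emotions)

    def forward(self, x):                                  # x: (batch, 1, rows, cols, time)
        return self.classifier(self.features(x).flatten(1))

logits = Tiny3DCNN()(torch.randn(2, 1, 9, 9, 128))         # 9x9 electrode grid, 128 time samples
```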

PLAM: a Posit Logarithm-Approximate Multiplier

1 code implementation • 18 Feb 2021 • Raul Murillo, Alberto A. Del Barrio, Guillermo Botella, Min Soo Kim, HyunJin Kim, Nader Bagherzadeh

The Posit Number System was introduced in 2017 as a replacement for floating-point numbers.
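The listing only gives the paper's opening line, but the title names a logarithm-approximate multiplier. Below is a sketch of the classic Mitchell-style idea behind such multipliers, applied to plain unsigned integers; it is not the posit-specific PLAM design itself.

```python
def approx_log2(x: int) -> float:
    """log2(x) ~= k + m for x = 2^k * (1 + m); the fraction m replaces the true
    mantissa logarithm. Assumes x > 0."""
    k = x.bit_length() - 1            # characteristic: position of the leading 1
    m = (x - (1 << k)) / (1 << k)     # linear (piecewise) mantissa approximation
    return k + m

def approx_mul(a: int, b: int) -> int:
    """a * b ~= 2^(log2 a + log2 b), with the antilog linearized the same way."""
    s = approx_log2(a) + approx_log2(b)
    k, m = int(s), s - int(s)
    return round((1 << k) * (1 + m))

print(approx_mul(100, 200), 100 * 200)   # 18432 vs 20000 (~8% error, but no multiplier array needed)
```

Replacing the multiplication with additions in the log domain is what lets the hardware drop the costly multiplier array; the price is a bounded relative error.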

The Effects of Approximate Multiplication on Convolutional Neural Networks

1 code implementation • 20 Jul 2020 • Min Soo Kim, Alberto A. Del Barrio, HyunJin Kim, Nader Bagherzadeh

Approximate multiplication can reduce the cost of the underlying circuits so that CNN inferences can be performed more efficiently in hardware accelerators.
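A sketch of the point the abstract makes: an approximate multiplier can be dropped into the multiply-accumulate loop of a convolution. The truncation-based multiplier below is a generic stand-in, not one of the specific designs evaluated in the paper.

```python
def trunc_mul(a: int, b: int, drop_bits: int = 4) -> int:
    """Approximate a*b by discarding low-order bits of one operand first,
    roughly what a truncated hardware multiplier does."""
    return ((a >> drop_bits) * b) << drop_bits

def conv_tap(x, w, t, mul):
    """One output sample of a 1-D convolution, using the supplied multiplier."""
    return sum(mul(x[t + i], w[i]) for i in range(len(w)))

x = [120, 87, 255, 34, 198, 76, 143, 9]      # quantized activations
w = [3, -2, 5, 1]                            # quantized weights
exact  = conv_tap(x, w, 0, lambda a, b: a * b)
approx = conv_tap(x, w, 0, trunc_mul)
print(exact, approx)                         # 1495 vs 1408: small per-MAC error, much cheaper circuit
```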

Reliable and Energy Efficient MLC STT-RAM Buffer for CNN Accelerators

no code implementations • 14 Jan 2020 • Masoomeh Jasemi, Shaahin Hessabi, Nader Bagherzadeh

We propose a lightweight scheme that changes how a data block is formed so that it tolerates soft errors significantly better than the baseline.
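The listing does not spell out the block-formation scheme, so the following is purely illustrative of the general idea: pair up bits so that each value's high-order (most damaging to lose) bits sit in the more reliable bit position of a 2-bit MLC cell, leaving the error-prone position to low-order bits.

```python
def form_block(values, width=8):
    """Pack 'width'-bit values into 2-bit MLC cells, one (reliable_bit, weak_bit) pair per cell."""
    cells = []
    for v in values:
        bits = [(v >> i) & 1 for i in reversed(range(width))]   # MSB first
        half = width // 2
        # critical MSBs -> reliable position, tolerable LSBs -> weak position
        cells.extend(zip(bits[:half], bits[half:]))
    return cells

print(form_block([0b10110010]))
# [(1, 0), (0, 0), (1, 1), (1, 0)] -- each tuple is one MLC cell (reliable, weak)
```

A soft error in the weak position then flips only a low-order bit, which is far less harmful to CNN weights or activations than corrupting a high-order bit.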

Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks

no code implementations • 21 Jan 2019 • Sina Shahhosseini, Ahmad Albaqsami, Masoomeh Jasemi, Nader Bagherzadeh

We evaluated the performance and energy consumption of parallel inference of partitioned models, which showed a 7.72x speedup and a 2.73x reduction in the energy used for computing pruned layers of TinyVGG16, compared to running the unpruned model on a single accelerator.
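A sketch of what parallelization-aware (partition) pruning can look like: split a layer's filters into one partition per accelerator, then prune within each partition so the surviving work stays balanced across devices. The even split and L1-magnitude criterion are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def partition_prune(filters, n_accelerators=2, keep_ratio=0.5):
    """filters: (n_filters, fan_in) weights. Returns the kept filter indices per partition."""
    parts = np.array_split(np.arange(len(filters)), n_accelerators)
    kept = []
    for part in parts:
        scores = np.abs(filters[part]).sum(axis=1)                 # L1 magnitude per filter
        n_keep = max(1, int(len(part) * keep_ratio))
        kept.append(part[np.argsort(scores)[::-1][:n_keep]])       # keep the strongest filters
    return kept                                                    # equal-sized pruned partitions

layer = np.random.randn(8, 16)
print(partition_prune(layer))   # e.g. [array([2, 0]), array([5, 7])] -- 2 filters kept per device
```

Pruning inside each partition, rather than globally, keeps every accelerator's share of the layer the same size, so no device becomes the straggler during parallel inference.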
