no code implementations • 18 May 2023 • Harsimrat Kaeley, Ye Qiao, Nader Bagherzadeh
However, due to the limitations of RNNs, such as vanishing gradients and the loss of long-term dependencies as sequence length increases, in this paper we develop a Transformer-based model that uses technical stock data and sentiment analysis to conduct accurate stock trend prediction over long time windows.
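The Transformer's advantage over an RNN here is that attention compares all time steps at once instead of propagating state step by step. A minimal pure-Python sketch of scaled dot-product attention, the core Transformer operation (illustrative only; the paper's actual architecture is not specified in this snippet):

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention over plain nested lists.
    Q, K, V: lists of vectors (rows). Returns one output row
    per query: a softmax-weighted average of the rows of V."""
    d = len(K[0])
    # similarity of every query against every key, scaled by sqrt(d)
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d)
               for kr in K] for qr in Q]
    out = []
    for row in scores:
        m = max(row)                        # stabilize the softmax
        e = [math.exp(s - m) for s in row]
        z = sum(e)
        w = [x / z for x in e]              # attention weights, sum to 1
        out.append([sum(wi * vr[j] for wi, vr in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

Because every score is independent, the whole score matrix can be computed in parallel over the sequence, which is what lets a Transformer handle long windows without the vanishing-gradient issues of recurrence.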
no code implementations • 9 Mar 2023 • Shima Nabiee, Nader Bagherzadeh
However, semantic segmentation and its well-designed fully convolutional networks have never been studied for time-series dense classification.
no code implementations • 26 Jul 2022 • Ye Qiao, Mohammed Alnemari, Nader Bagherzadeh
This paper proposes a novel two-stage framework for emotion recognition using EEG data that outperforms state-of-the-art models while keeping the model size small and computationally efficient.
1 code implementation • 18 Feb 2021 • Raul Murillo, Alberto A. Del Barrio, Guillermo Botella, Min Soo Kim, HyunJin Kim, Nader Bagherzadeh
The Posit Number System was introduced in 2017 as a replacement for floating-point numbers.
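A posit packs a sign bit, a variable-length regime, up to `es` exponent bits, and a fraction into a fixed width. The decoder below is a sketch of that format as introduced by Gustafson; `n=8, es=2` is an assumed configuration, not necessarily the one evaluated in the paper.

```python
def decode_posit(bits: int, n: int = 8, es: int = 2) -> float:
    """Decode an n-bit posit (sign, regime, exponent, fraction)
    into a Python float. Illustrative sketch of the format."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float('nan')               # NaR ("not a real")
    sign = bits >> (n - 1)
    if sign:                              # posits negate via two's complement
        bits = (-bits) & ((1 << n) - 1)
    body = bits & ((1 << (n - 1)) - 1)    # everything after the sign bit
    # Regime: the run length of identical leading bits encodes a scale k.
    first = (body >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((body >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first else -run
    i -= 1                                # skip the regime terminator bit
    # Exponent: up to es bits (may be truncated at the right edge).
    e = 0
    for _ in range(es):
        e <<= 1
        if i >= 0:
            e |= (body >> i) & 1
            i -= 1
    # Fraction: whatever bits remain, with an implicit leading 1.
    fbits = max(i + 1, 0)
    frac = body & ((1 << fbits) - 1)
    mant = 1.0 + (frac / (1 << fbits) if fbits else 0.0)
    value = mant * 2.0 ** (k * (1 << es) + e)
    return -value if sign else value
```

The variable-length regime is what gives posits tapered precision: values near 1.0 get more fraction bits than very large or very small values.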
1 code implementation • 20 Jul 2020 • Min Soo Kim, Alberto A. Del Barrio, HyunJin Kim, Nader Bagherzadeh
Approximate multiplication can reduce the cost of the underlying circuits so that CNN inference can be performed more efficiently in hardware accelerators.
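One widely studied approximate multiplier is Mitchell's logarithm-based scheme, which replaces the multiply with an add in the log domain. The sketch below is illustrative of that class of designs and is not necessarily the exact multiplier evaluated in the paper.

```python
def mitchell_multiply(a: int, b: int) -> int:
    """Approximate unsigned multiply via Mitchell's method:
    log2(a) is approximated as k + x, where a = 2**k * (1 + x),
    so a*b ~ 2**(ka+kb) * (1 + xa + xb). Worst-case relative
    error is about 11%."""
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1
    xa = a - (1 << ka)                    # fraction xa / 2**ka
    xb = b - (1 << kb)                    # fraction xb / 2**kb
    # Align both fractions to the common scale 2**(ka+kb) and add.
    frac = (xa << kb) + (xb << ka)
    if frac < (1 << (ka + kb)):           # xa + xb < 1
        return (1 << (ka + kb)) + frac    # 2**(ka+kb) * (1 + xa + xb)
    return frac << 1                      # 2**(ka+kb+1) * (xa + xb)
```

For example, `mitchell_multiply(5, 6)` yields 28 instead of 30 (about 6.7% error); in hardware the shift-and-add structure is far cheaper than an exact multiplier array, which is where the accelerator savings come from.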
no code implementations • 14 Jan 2020 • Masoomeh Jasemi, Shaahin Hessabi, Nader Bagherzadeh
We propose a lightweight scheme where the formation of a data block is changed in such a way that it can tolerate soft errors significantly better than the baseline.
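The snippet does not spell out the paper's block-formation scheme; one classic way to reshape a data block for soft-error tolerance is bit interleaving, so that a physically adjacent multi-bit upset corrupts at most one bit per logical word and stays within the reach of per-word ECC. A hedged sketch of that idea:

```python
def bit_transpose(words: list[int], width: int) -> list[int]:
    """Bit-interleave `width` words of `width` bits by
    transposing the bit matrix: output word j collects bit j
    of every input word. Applying it twice restores the
    original layout (the transform is its own inverse)."""
    assert len(words) == width
    out = [0] * width
    for i, w in enumerate(words):
        for j in range(width):
            if (w >> j) & 1:
                out[j] |= 1 << i
    return out
```

After interleaving, a burst that flips several adjacent bits in one stored word maps back to single-bit flips spread across several logical words, each of which a simple SEC code can correct. This is an illustrative technique, not a description of the paper's specific scheme.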
no code implementations • 21 Jan 2019 • Sina Shahhosseini, Ahmad Albaqsami, Masoomeh Jasemi, Nader Bagherzadeh
We evaluated the performance and energy consumption of parallel inference over the partitioned models, which showed a 7.72x speedup and a 2.73x reduction in the energy used for computing the pruned layers of TinyVGG16, compared to running the unpruned model on a single accelerator.
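The reported figures follow the usual definition of speedup as baseline latency divided by parallel latency, where the parallel latency is dominated by the slowest partition plus any communication overhead. The numbers in the sketch below are illustrative placeholders, not the paper's measurements.

```python
def parallel_speedup(t_single: float, partition_times: list[float],
                     t_comm: float = 0.0) -> float:
    """Speedup of running model partitions concurrently on
    separate accelerators: parallel latency = slowest
    partition + communication overhead."""
    t_parallel = max(partition_times) + t_comm
    return t_single / t_parallel

# Illustrative: a 100 ms single-accelerator run split into three
# roughly balanced partitions with a small transfer cost.
example = parallel_speedup(100.0, [12.0, 11.5, 12.5], t_comm=0.45)
```

The formula makes the design pressure explicit: the benefit of partitioning hinges on balancing the partitions, since the slowest one sets the critical path.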