no code implementations • 23 Oct 2022 • Shibani Hamsa, Ismail Shahin, Youssef Iraqi, Ernesto Damiani, Naoufel Werghi
Speech signals are subject to more acoustic interference and emotional variability than other signal types.
no code implementations • 26 Dec 2021 • Ismail Shahin, Ali Bou Nassif, Nawel Nemmour, Ashraf Elnagar, Adi Alhudhaif, Kemal Polat
The test results of these hybrid models demonstrated that the proposed HMM-DNN improved verification performance in emotional and stressful environments.
no code implementations • 26 Dec 2021 • Ismail Shahin, Noor Hindawi, Ali Bou Nassif, Adi Alhudhaif, Kemal Polat
Using the Arabic Emirati-accented corpus, our results demonstrate that the proposed work yields an average emotion recognition accuracy of 89.3%, compared to 84.7%, 82.2%, 69.8%, 69.2%, 53.8%, 42.6%, and 31.9% based on CapsNet, CNN, support vector machine, multi-layer perceptron, k-nearest neighbor, radial basis function, and naive Bayes, respectively.
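The classical baselines named in the abstract (SVM, multi-layer perceptron, k-NN, naive Bayes) can be benchmarked side by side with scikit-learn. This is a minimal sketch, not the paper's pipeline: the synthetic feature matrix below stands in for the utterance-level features extracted from the Emirati-accented corpus, and all names and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in for utterance features (assumption): 600 samples, 40 dims, 6 emotions
X = rng.normal(size=(600, 40))
y = rng.integers(0, 6, size=600)
X[np.arange(600), y] += 2.0  # inject class-dependent signal so learning is possible

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}

results = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)                       # train each baseline on the same split
    results[name] = accuracy_score(y_te, clf.predict(X_te))
```

Evaluating every classifier on an identical train/test split, as above, is what makes a "compared to ... respectively" table of accuracies meaningful.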
no code implementations • 15 Dec 2021 • Noor Ahmad Al Hindawi, Ismail Shahin, Ali Bou Nassif
Thanks to advances in artificial intelligence, speaker identification (SI) technologies have made great strides and are now widely used across a variety of sectors.
no code implementations • 15 Dec 2021 • Ismail Shahin, Ali Bou Nassif, Mohamed Bader Alsabek
In this study, a novel approach is proposed to automatically diagnose COVID-19 from electrocardiogram (ECG) data using deep learning algorithms, specifically convolutional neural network (CNN) models.
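The core CNN operation applied to a signal like an ECG trace is convolution followed by a nonlinearity and pooling. The NumPy sketch below illustrates that building block only; it is an assumption-laden toy (the toy trace, kernel, and function names are invented here), not the paper's architecture, which would stack many learned filters in a deep-learning framework.

```python
import numpy as np

def conv1d(signal, kernel):
    # "valid" sliding dot product (cross-correlation, as CNN layers compute)
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # downsample by keeping the max of each non-overlapping window
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

# Toy ECG-like trace (assumption): a sinusoid with a sharp R-peak-style spike
t = np.linspace(0, 1, 100)
ecg = np.sin(2 * np.pi * 5 * t)
ecg[50] += 3.0

edge_kernel = np.array([-1.0, 0.0, 1.0])  # crude hand-set edge/peak detector
features = max_pool(relu(conv1d(ecg, edge_kernel)))
```

In a real CNN the kernel values are learned from labeled data rather than hand-set, and many such feature maps feed a classifier head.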
no code implementations • 11 Feb 2021 • Ali Bou Nassif, Ismail Shahin, Shibani Hamsa, Nawel Nemmour, Keikichi Hirose
This work aims at improving text-independent speaker identification performance in real application situations such as noisy and emotional talking conditions.
no code implementations • 17 Oct 2020 • Mohamed Bader, Ismail Shahin, Abdelfatah Hassan
Recently, formidable efforts have been made by people working on the front lines, such as hospitals, clinics, and labs, alongside researchers and scientists fighting the COVID-19 pandemic.
no code implementations • 4 Feb 2020 • Ismail Shahin
This research aims at identifying the unknown emotion using speaker cues.
no code implementations • 28 Sep 2019 • Ismail Shahin, Ali Bou Nassif
This research is dedicated to improving text-independent Emirati-accented speaker identification performance in stressful talking conditions using three distinct classifiers: First-Order Hidden Markov Models (HMM1s), Second-Order Hidden Markov Models (HMM2s), and Third-Order Hidden Markov Models (HMM3s).
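A standard way to work with the higher-order HMMs mentioned above (HMM2s, HMM3s) is to fold them into an equivalent first-order model over tuples of states, after which the usual first-order algorithms apply. The sketch below shows this for a second-order model; the toy two-state transition tensor and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

n = 2  # toy number of states (assumption)
rng = np.random.default_rng(1)

# Second-order transitions: A2[i, j, k] = P(s_t = k | s_{t-2} = i, s_{t-1} = j)
A2 = rng.random((n, n, n))
A2 /= A2.sum(axis=2, keepdims=True)  # normalize each conditional distribution

# Equivalent first-order transitions over pair-states: (i, j) -> (j, k)
A1 = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            A1[i * n + j, j * n + k] = A2[i, j, k]
```

Transitions between inconsistent pairs (where the target pair does not start with the source pair's second state) stay at zero, and each row of `A1` remains a valid distribution, so standard forward-backward and Viterbi routines can be reused unchanged.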
no code implementations • 3 Sep 2018 • Ismail Shahin, Ali Bou Nassif
In this work, a three-stage speaker verification architecture has been proposed to enhance speaker verification performance in emotional environments.
no code implementations • 29 Jun 2017 • Ismail Shahin
It is well known that speaker identification performs extremely well in neutral talking environments; however, identification performance declines sharply in shouted talking environments.