Search Results for author: Ismail Shahin

Found 11 papers, 0 papers with code

Novel Hybrid DNN Approaches for Speaker Verification in Emotional and Stressful Talking Environments

no code implementations 26 Dec 2021 Ismail Shahin, Ali Bou Nassif, Nawel Nemmour, Ashraf Elnagar, Adi Alhudhaif, Kemal Polat

The test results of the aforementioned hybrid models demonstrated that the proposed HMM-DNN improved verification performance in emotional and stressful environments.

Text-Independent Speaker Verification
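
A minimal sketch of how such a hybrid HMM-DNN verifier could be wired together, assuming MFCC features are already extracted and using hmmlearn and scikit-learn as stand-ins; the architecture below is illustrative, not the authors' implementation:

```python
# Hedged sketch of a hybrid HMM-DNN speaker verifier: per-speaker GMM-HMM
# log-likelihoods feed a small DNN that makes the accept/reject decision.
import numpy as np
from hmmlearn.hmm import GMMHMM
from sklearn.neural_network import MLPClassifier

def train_speaker_hmm(utterances, n_states=3, n_mix=2):
    """Fit one GMM-HMM on a speaker's enrollment utterances (MFCC matrices)."""
    X = np.vstack(utterances)                    # all frames stacked: (frames, n_mfcc)
    lengths = [u.shape[0] for u in utterances]   # per-utterance frame counts
    hmm = GMMHMM(n_components=n_states, n_mix=n_mix,
                 covariance_type="diag", n_iter=20)
    hmm.fit(X, lengths)
    return hmm

def hmm_scores(utt, claimed_hmm, background_hmm):
    """Length-normalised log-likelihoods under the claimed and background models."""
    return np.array([claimed_hmm.score(utt) / len(utt),
                     background_hmm.score(utt) / len(utt)])

# DNN back-end: maps the two HMM scores to an accept/reject decision.
verifier = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500)
# verifier.fit(score_matrix, labels)   # labels: 1 = genuine trial, 0 = impostor trial
```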

Novel Dual-Channel Long Short-Term Memory Compressed Capsule Networks for Emotion Recognition

no code implementations 26 Dec 2021 Ismail Shahin, Noor Hindawi, Ali Bou Nassif, Adi Alhudhaif, Kemal Polat

Using the Arabic Emirati-accented corpus, our results demonstrate that the proposed work yields an average emotion recognition accuracy of 89.3%, compared to 84.7%, 82.2%, 69.8%, 69.2%, 53.8%, 42.6%, and 31.9% based on CapsNet, CNN, support vector machine, multi-layer perceptron, k-nearest neighbor, radial basis function, and naive Bayes, respectively.

Speech Emotion Recognition
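
A brief sketch of the kind of baseline comparison reported above, using scikit-learn implementations of the SVM, MLP, k-NN, and naive Bayes classifiers on utterance-level MFCC statistics; the feature choice and hyperparameters are illustrative assumptions, not the paper's setup:

```python
# Hedged baseline comparison for speech emotion recognition.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def utterance_features(path, n_mfcc=13):
    """Mean and std of MFCCs over time -> fixed-length utterance vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def compare_baselines(X, y):
    """X: utterance feature matrix, y: emotion labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    models = {
        "SVM": SVC(kernel="rbf"),
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "Naive Bayes": GaussianNB(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, model.predict(X_te)))
```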

The Exploitation of Multiple Feature Extraction Techniques for Speaker Identification in Emotional States under Disguised Voices

no code implementations 15 Dec 2021 Noor Ahmad Al Hindawi, Ismail Shahin, Ali Bou Nassif

Owing to improvements in artificial intelligence, speaker identification (SI) technologies have advanced considerably and are now widely used in a variety of sectors.

Speaker Identification Voice Conversion

COVID-19 Electrocardiograms Classification using CNN Models

no code implementations 15 Dec 2021 Ismail Shahin, Ali Bou Nassif, Mohamed Bader Alsabek

In this study, a novel approach is proposed to automatically diagnose COVID-19 from Electrocardiogram (ECG) data using deep learning algorithms, specifically Convolutional Neural Network (CNN) models.

Classification
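
A minimal PyTorch sketch of a CNN classifier over ECG images in the spirit of the approach above; the toy layer sizes and the binary COVID-19/normal label set are assumptions for illustration and do not reproduce the paper's CNN models:

```python
# Hedged sketch: small CNN for ECG image classification.
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),         # global average pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, H, W) grayscale ECG image
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Usage example with a random batch of 224x224 images:
# logits = ECGConvNet()(torch.randn(4, 1, 224, 224))
```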

CASA-Based Speaker Identification Using Cascaded GMM-CNN Classifier in Noisy and Emotional Talking Conditions

no code implementations 11 Feb 2021 Ali Bou Nassif, Ismail Shahin, Shibani Hamsa, Nawel Nemmour, Keikichi Hirose

This work aims at improving text-independent speaker identification performance in realistic application situations such as noisy and emotional talking conditions.

Emotion Recognition Speaker Identification
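
A hedged sketch of a two-stage GMM-CNN cascade of the kind named in the title: per-speaker GMMs shortlist candidate speakers and a CNN-based scorer makes the final decision. The CASA front-end is omitted, and cnn_score is a hypothetical callable standing in for the CNN stage; none of this is taken from the paper's implementation:

```python
# Hedged sketch of a cascaded GMM-CNN speaker identifier.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(enrollment, n_components=8):
    """enrollment: dict speaker_id -> (frames x n_mfcc) MFCC array."""
    gmms = {}
    for spk, feats in enrollment.items():
        gmms[spk] = GaussianMixture(n_components=n_components,
                                    covariance_type="diag").fit(feats)
    return gmms

def identify(mfcc, spectrogram, gmms, cnn_score, shortlist=3):
    # Stage 1: GMM log-likelihoods select the most likely speakers.
    ll = {spk: gmm.score(mfcc) for spk, gmm in gmms.items()}
    candidates = sorted(ll, key=ll.get, reverse=True)[:shortlist]
    # Stage 2: the CNN scorer (hypothetical callable) decides among them.
    return max(candidates, key=lambda spk: cnn_score(spectrogram, spk))
```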

Studying the Similarity of COVID-19 Sounds based on Correlation Analysis of MFCC

no code implementations 17 Oct 2020 Mohamed Bader, Ismail Shahin, Abdelfatah Hassan

Recently, formidable work has been carried out by people on the front lines, such as in hospitals, clinics, and labs, alongside researchers and scientists putting tremendous effort into the fight against the COVID-19 pandemic.

Automatic Speech Recognition (ASR) +1
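
A small sketch of the correlation analysis suggested by the title: compute MFCCs for two recordings and compare them with a Pearson correlation coefficient. It uses librosa and NumPy; the utterance-level averaging and the file names are assumptions for illustration:

```python
# Hedged sketch: similarity of two recordings via correlation of MFCCs.
import numpy as np
import librosa

def mean_mfcc(path, n_mfcc=13, sr=16000):
    """Average MFCC vector over all frames of one recording."""
    y, rate = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=rate, n_mfcc=n_mfcc).mean(axis=1)

def mfcc_correlation(path_a, path_b):
    """Pearson correlation between utterance-level MFCC vectors."""
    a, b = mean_mfcc(path_a), mean_mfcc(path_b)
    return np.corrcoef(a, b)[0, 1]

# Example (placeholder file names):
# similarity = mfcc_correlation("cough_covid.wav", "cough_healthy.wav")
```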

Emotion Recognition Using Speaker Cues

no code implementations 4 Feb 2020 Ismail Shahin

This research aims at identifying the unknown emotion using speaker cues.

Emotion Recognition Quantization

Emirati-Accented Speaker Identification in Stressful Talking Conditions

no code implementations 28 Sep 2019 Ismail Shahin, Ali Bou Nassif

This research is dedicated to improving text-independent Emirati-accented speaker identification performance in stressful talking conditions using three distinct classifiers: First-Order Hidden Markov Models (HMM1s), Second-Order Hidden Markov Models (HMM2s), and Third-Order Hidden Markov Models (HMM3s).

Speaker Identification

Three-Stage Speaker Verification Architecture in Emotional Talking Environments

no code implementations 3 Sep 2018 Ismail Shahin, Ali Bou Nassif

In this work, a three-stage speaker verification architecture has been proposed to enhance speaker verification performance in emotional environments.

Speaker Verification

Speaker Identification in each of the Neutral and Shouted Talking Environments based on Gender-Dependent Approach Using SPHMMs

no code implementations 29 Jun 2017 Ismail Shahin

It is well known that speaker identification performs extremely well in neutral talking environments; however, identification performance declines sharply in shouted talking environments.

Speaker Identification
