Search Results for author: Khaza Anuarul Hoque

Found 12 papers, 3 papers with code

VR-LENS: Super Learning-based Cybersickness Detection and Explainable AI-Guided Deployment in Virtual Reality

no code implementations3 Feb 2023 Ripan Kumar Kundu, Osama Yahia Elsaid, Prasad Calyam, Khaza Anuarul Hoque

Our proposed method identified eye tracking, player position, and galvanic skin/heart rate response as the most dominant features for the integrated sensor, gameplay, and bio-physiological datasets.

Explainable Artificial Intelligence (XAI)
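The feature ranking reported above is the kind of output a post-hoc explainer produces over a trained detector. Below is a minimal, hypothetical sketch using SHAP on a scikit-learn classifier; the feature names and synthetic data are placeholders, not the VR-LENS datasets or its super-learner pipeline.

```python
# Hypothetical sketch: ranking cybersickness predictors with SHAP.
# Feature names and the random data are illustrative placeholders, not the
# integrated sensor, gameplay, and bio-physiological datasets of VR-LENS.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["eye_tracking", "player_position", "gsr", "heart_rate", "head_velocity"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic cybersickness label

model = GradientBoostingClassifier().fit(X, y)

# Per-sample attributions; their mean absolute value is a common
# global feature-importance score.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```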

RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks

no code implementations25 Jan 2023 Ayesha Siddique, Ripan Kumar Kundu, Gautam Raj Mode, Khaza Anuarul Hoque

We observe that approximate adversarial training can significantly improve the robustness of PdM models (up to 54X) and outperforms the state-of-the-art PdM defense methods by offering 3X more robustness.

Adversarial Defense
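A minimal sketch of the general idea of adversarial training for a predictive maintenance (RUL) regressor, here with a plain FGSM inner step in PyTorch; the architecture, perturbation budget, and synthetic data are assumptions, and RobustPdM's approximate adversarial training differs in detail.

```python
# Minimal sketch of FGSM-based adversarial training for a PdM (RUL) regressor.
# Architecture, epsilon, and synthetic data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
eps = 0.05  # perturbation budget (assumed)

x = torch.randn(256, 24)        # sensor window features (synthetic)
y = torch.rand(256, 1) * 100.0  # remaining useful life targets (synthetic)

for _ in range(10):
    # Craft FGSM examples on the fly.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on a mix of clean and adversarial samples.
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```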

Security-Aware Approximate Spiking Neural Networks

no code implementations12 Jan 2023 Syed Tihaam Ahmad, Ayesha Siddique, Khaza Anuarul Hoque

Therefore, researchers have extensively studied the robustness and defense of DNNs and SNNs under adversarial attacks in recent years.

Quantization

Improving Reliability of Spiking Neural Networks through Fault Aware Threshold Voltage Optimization

no code implementations12 Jan 2023 Ayesha Siddique, Khaza Anuarul Hoque

Our proposed FalVolt mitigation method improves the performance of systolicSNNs by enabling them to operate at fault rates of up to 60%, with a negligible drop in classification accuracy (as low as 0.1%).
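A minimal sketch of the fault-injection side of such an evaluation: zeroing a random fraction of weights to emulate permanent stuck-at faults at a given fault rate. The fault model is an assumption for illustration; FalVolt's threshold voltage optimization on systolic arrays is not reproduced here.

```python
# Hypothetical sketch: inject random stuck-at-zero faults into a weight
# tensor at a given fault rate, as a software stand-in for hardware faults.
import numpy as np

def inject_stuck_at_zero(weights: np.ndarray, fault_rate: float, seed: int = 0) -> np.ndarray:
    """Zero out a random fraction of weights to emulate permanent faults."""
    rng = np.random.default_rng(seed)
    mask = rng.random(weights.shape) >= fault_rate  # keep (1 - fault_rate) of weights
    return weights * mask

w = np.random.randn(128, 128)
w_faulty = inject_stuck_at_zero(w, fault_rate=0.6)
print("fraction of weights zeroed:", 1 - np.count_nonzero(w_faulty) / w.size)
```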

TruVR: Trustworthy Cybersickness Detection using Explainable Machine Learning

no code implementations12 Sep 2022 Ripan Kumar Kundu, Rifatul Islam, Prasad Calyam, Khaza Anuarul Hoque

The results show that the EBM can detect cybersickness with an accuracy of 99.75% and 94.10% for the physiological and gameplay datasets, respectively.

Regression
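A minimal sketch of training an Explainable Boosting Machine (EBM), the glassbox model family the paper builds on, using the interpret library; the synthetic features below stand in for the physiological and gameplay datasets.

```python
# Hypothetical sketch of an EBM cybersickness classifier with the interpret
# library; the synthetic data stands in for TruVR's actual datasets.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                 # e.g. heart rate, GSR, head motion, ...
y = (X[:, 0] + 0.3 * X[:, 3] > 0).astype(int)  # synthetic cybersickness label

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)
print("train accuracy:", ebm.score(X, y))

# The global explanation exposes per-term names and importance scores,
# which is what makes the model's decisions inspectable.
exp = ebm.explain_global()
print(exp.data()["names"], exp.data()["scores"])
```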

Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?

no code implementations2 Dec 2021 Ayesha Siddique, Khaza Anuarul Hoque

Approximate computing is known for its effectiveness in improving the energy efficiency of deep neural network (DNN) accelerators at the cost of slight accuracy loss.

Adversarial Robustness
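A minimal sketch of how approximate arithmetic is often emulated in software: wrapping a layer so its outputs carry a small bounded relative error, which can then be evaluated for accuracy and robustness. The noise model is an assumption; the paper studies concrete approximate multipliers in AxDNN accelerators.

```python
# Hypothetical sketch: emulate an approximate multiplier by perturbing a
# layer's output with small bounded relative error.
import torch
import torch.nn as nn

class ApproxLinear(nn.Module):
    """Linear layer whose output is scaled by (1 + e), |e| <= err, to mimic
    the bounded relative error of an approximate multiplier."""
    def __init__(self, in_features, out_features, err=0.02):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.err = err

    def forward(self, x):
        out = self.linear(x)
        noise = 1 + self.err * (2 * torch.rand_like(out) - 1)
        return out * noise

model = nn.Sequential(ApproxLinear(784, 128), nn.ReLU(), nn.Linear(128, 10))
logits = model(torch.randn(4, 784))
print(logits.shape)  # torch.Size([4, 10])
```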

Exploring Fault-Energy Trade-offs in Approximate DNN Hardware Accelerators

no code implementations8 Jan 2021 Ayesha Siddique, Kanad Basu, Khaza Anuarul Hoque

Our quantitative analysis shows that the permanent faults exacerbate the accuracy loss in AxDNNs when compared to the accurate DNN accelerators.

Adversarial Examples in Deep Learning for Multivariate Time Series Regression

1 code implementation24 Sep 2020 Gautam Raj Mode, Khaza Anuarul Hoque

Due to the tremendous success of deep learning (DL) algorithms in domains such as image recognition and computer vision, researchers have begun adopting these techniques to solve multivariate time series (MTS) data mining problems, many of which target safety-critical and cost-critical applications.

Adversarial Attack, Image Classification +3

Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version)

1 code implementation21 Sep 2020 Gautam Raj Mode, Khaza Anuarul Hoque

The obtained results show that all the evaluated PHM models are vulnerable to adversarial attacks, which can cause serious errors in remaining useful life estimation.

Management
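A minimal sketch of an iterative (BIM/PGD-style) attack against a generic RUL regressor, measuring how far predictions shift under a bounded perturbation; the model and data are illustrative stand-ins rather than the paper's evaluated PHM models.

```python
# Hypothetical sketch: iterative L_inf-bounded attack on a generic RUL
# regressor, reporting the shift in predicted remaining useful life.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(128, 24)  # synthetic sensor features
eps, alpha, steps = 0.1, 0.02, 10

x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    # Push predictions upward (overestimate RUL) by ascending the output.
    model(x_adv).sum().backward()
    with torch.no_grad():
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the L_inf ball
    x_adv = x_adv.detach()

shift = (model(x_adv) - model(x)).abs().mean().item()
print(f"mean absolute shift in predicted RUL: {shift:.3f}")
```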

High-level Modeling of Manufacturing Faults in Deep Neural Network Accelerators

no code implementations5 Jun 2020 Shamik Kundu, Ahmet Soyyiğit, Khaza Anuarul Hoque, Kanad Basu

The advent of data-driven real-time applications requires the implementation of Deep Neural Networks (DNNs) on Machine Learning accelerators.

Vocal Bursts Intensity Prediction
