no code implementations • 27 Jul 2023 • Aminollah Khormali, Jiann-Shiun Yuan
To assess the effectiveness of the proposed framework, several challenging experiments are conducted, covering in-dataset performance, cross-dataset and cross-manipulation generalization, and robustness against common post-production perturbations.
1 code implementation • 6 Jan 2023 • Lin Qiu, Aminollah Khormali, Kai Liu
The integration of multi-modal data, such as pathological images and genomic data, is essential for understanding cancer heterogeneity and complexity, for personalizing treatments, and for enhancing survival predictions.
no code implementations • 30 Jun 2020 • Aminollah Khormali, DaeHun Nyang, David Mohaisen
However, deep learning models are vulnerable to Adversarial Examples (AEs): carefully crafted samples designed to deceive those models.
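The idea behind such adversarial examples can be sketched with the classic fast-gradient-sign approach: perturb an input slightly in the direction that most increases the model's loss. The snippet below is a minimal illustration using a linear classifier as a stand-in for a deep model; it is not the method of any paper listed here, and all names in it are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the gradient of the
    logistic loss w.r.t. the input (an FGSM-style step)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w  # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

# A point the linear model classifies as class 1 (score > 0)...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1          # score: w @ x + b = 1.5
x_adv = fgsm_perturb(x, w, b, y, eps=1.0)
# ...is flipped to class 0 by a perturbation bounded by eps.
```

The same gradient-sign step applied to a deep network's input (e.g. image pixels) yields the visually near-identical but misclassified samples the abstracts refer to.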
no code implementations • 20 Sep 2019 • Aminollah Khormali, Ahmed Abusnaina, Songqing Chen, DaeHun Nyang, Aziz Mohaisen
Therefore, we propose COPYCAT, an approach to generating adversarial examples that is specifically designed for malware detection systems with two main goals: achieving a high misclassification rate while maintaining the executability and functionality of the original input.
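The executability constraint is what separates malware-domain attacks from image-domain ones: arbitrary byte flips would corrupt the binary. One generic, well-known way to satisfy it (shown purely as an illustration, not as COPYCAT's actual mechanism) is to append adversarial bytes after the end of the executable image, where loaders ignore them:

```python
def append_payload(binary: bytes, payload: bytes) -> bytes:
    """Functionality-preserving perturbation sketch: bytes appended
    past the end of a PE/ELF image are not mapped by the loader,
    so the program's behavior is unchanged while the raw byte
    stream a static detector sees is altered."""
    return binary + payload

# Hypothetical example: the original bytes survive as a prefix.
original = b"\x7fELF...program bytes..."
adv = append_payload(original, b"\x00\x41" * 8)
```

Feature-space attacks must be mapped back into such functionality-preserving edits for the resulting sample to remain valid malware.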
no code implementations • 12 Feb 2019 • Ahmed Abusnaina, Aminollah Khormali, Hisham Alasmary, Jeman Park, Afsah Anwar, Ulku Meteriz, Aziz Mohaisen
The main goal of this study is to investigate the robustness of graph-based Deep Learning (DL) models used for Internet of Things (IoT) malware classification against Adversarial Learning (AL).