no code implementations • 7 May 2024 • Hamed Poursiami, Ihsen Alouani, Maryam Parsa
As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial.
no code implementations • 18 Mar 2024 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Bassem Ouni, Muhammad Shafique
In this paper, we introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to comprehensively disrupt monocular depth estimation (MDE) in autonomous navigation applications.
no code implementations • 1 Feb 2024 • Hamed Poursiami, Ihsen Alouani, Maryam Parsa
Particularly, model inversion (MI) attacks enable the reconstruction of data samples that have been used to train the model.
no code implementations • 4 Jan 2024 • Behnam Omidi, Khaled N. Khasawneh, Ihsen Alouani
We introduce a HT obfuscation (HTO) approach to allow HTs to bypass this detection method.
no code implementations • 12 Dec 2023 • Ayoub Arous, Andres F Lopez-Lopera, Nael Abu-Ghazaleh, Ihsen Alouani
We model the propagation of noise through the layers, introducing a closed-form stochastic loss function that encapsulates a noise variance parameter.
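The effect of a noise-variance term entering the loss can be illustrated with a minimal sketch, assuming additive Gaussian noise on a single linear layer's output (a simplification; the paper models propagation through multiple layers, and all names here are illustrative):

```python
import numpy as np

def mse(pred, y):
    """Plain mean squared error."""
    return float(np.mean((y - pred) ** 2))

def expected_noisy_mse(x, y, W, sigma):
    """Closed-form expectation of the MSE when i.i.d. Gaussian noise
    eps ~ N(0, sigma^2) is added to the layer output xW:
    E[(y - (xW + eps))^2] = (y - xW)^2 + sigma^2,
    since the cross term vanishes in expectation."""
    return mse(x @ W, y) + sigma ** 2
```

The closed form avoids Monte Carlo sampling during training: the noise contributes a simple additive `sigma**2` penalty that the optimizer can trade off against the clean fit.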
no code implementations • 30 Nov 2023 • Bilel Tarchoun, Quazi Mishkatul Alam, Nael Abu-Ghazaleh, Ihsen Alouani
Adversarial patches exemplify the tangible manifestation of the threat posed by adversarial attacks on Machine Learning (ML) models in real-world scenarios.
no code implementations • 21 Nov 2023 • Quazi Mishkatul Alam, Bilel Tarchoun, Ihsen Alouani, Nael Abu-Ghazaleh
The latest generation of transformer-based vision models has proven to be superior to Convolutional Neural Network (CNN)-based models across several vision tasks, largely attributed to their remarkable prowess in relation modeling.
no code implementations • 17 Jul 2023 • Md Abdullah Al Mamun, Quazi Mishkatul Alam, Erfan Shaigani, Pedram Zaree, Ihsen Alouani, Nael Abu-Ghazaleh
In this paper, we propose a novel information theoretic perspective of the problem; we consider the ML model as a storage channel with a capacity that increases with overparameterization.
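The "capacity grows with overparameterization" intuition can be sketched with a toy experiment (purely illustrative, not the paper's actual construction): fit a linear model to random labels and count how many it reproduces — below the parameter count it memorizes everything, above it the channel saturates.

```python
import numpy as np

rng = np.random.default_rng(0)

def memorized_labels(n_samples, dim):
    """Fit least squares to random +/-1 labels and count how many the
    fitted model reproduces: a crude proxy for stored 'bits'.
    With n_samples <= dim the min-norm solution interpolates exactly."""
    X = rng.standard_normal((n_samples, dim))
    y = rng.choice([-1.0, 1.0], n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return int(np.sum(np.sign(X @ w) == y))
```

With 100 parameters, 50 random labels are memorized perfectly, while 400 cannot all fit — the overparameterized regime is where unintended memorization becomes possible.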
no code implementations • 19 May 2023 • Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
Patch-based adversarial attacks have been shown to compromise the robustness and reliability of computer vision systems.
no code implementations • CVPR 2023 • Bilel Tarchoun, Anouar Ben Khalifa, Mohamed Ali Mahjoub, Nael Abu-Ghazaleh, Ihsen Alouani
Jedi tackles the patch localization problem from an information-theoretic perspective, leveraging two new ideas: (1) it improves the identification of potential patch regions using entropy analysis, showing that the entropy of adversarial patches is high, even for naturalistic patches; and (2) it improves the localization of adversarial patches using an autoencoder that can complete patch regions from high-entropy kernels.
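The first idea — flagging high-entropy windows as candidate patch regions — can be sketched as follows (a minimal illustration with assumed window size and threshold, not the paper's implementation):

```python
import numpy as np

def local_entropy(gray, win=16):
    """Shannon entropy of pixel intensities over non-overlapping windows."""
    h, w = gray.shape
    ent = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            block = gray[i*win:(i+1)*win, j*win:(j+1)*win]
            hist, _ = np.histogram(block, bins=32, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def flag_patch_regions(gray, win=16, z=2.0):
    """Flag windows whose entropy exceeds mean + z * std of the image:
    candidate high-entropy kernels for an adversarial patch."""
    ent = local_entropy(gray, win)
    return ent > ent.mean() + z * ent.std()
```

A dense, high-contrast patch stands out against smoother natural regions because its intensity histogram is close to uniform, maximizing the window entropy.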
no code implementations • 3 Mar 2023 • Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
Toward investigating new ground for a better privacy-utility trade-off, this work asks: (i) whether models' hyperparameters have any inherent impact on ML models' privacy-preserving properties, and (ii) whether they have any impact on the privacy/utility trade-off of differentially private models.
no code implementations • 3 Mar 2023 • Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, Ihsen Alouani
Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world through prominent, maliciously designed physical perturbations.
no code implementations • 2 Mar 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
APARATE results in a mean depth estimation error surpassing $0.5$, significantly impacting as much as $99\%$ of the targeted region when applied to CNN-based MDE models.
no code implementations • 18 Apr 2022 • Shail Dave, Alberto Marchisio, Muhammad Abdullah Hanif, Amira Guesmi, Aviral Shrivastava, Ihsen Alouani, Muhammad Shafique
The real-world use cases of Machine Learning (ML) have exploded over the past few years.
no code implementations • 5 Jan 2022 • Amira Guesmi, Khaled N. Khasawneh, Nael Abu-Ghazaleh, Ihsen Alouani
Thus, we propose ROOM, a novel Real-time Online-Offline attack construction Model where an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.
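The offline/online split can be illustrated with a toy linear model (all names and the FGSM-style step below are illustrative assumptions, not the actual ROOM pipeline): the offline phase precomputes a universal perturbation, so the online phase needs far fewer steps under its time budget.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(16)          # toy linear model: score(x) = w @ x

def push_down(delta, lr=0.05, eps=1.0):
    """One FGSM-style step that lowers the score, projected to an eps-ball."""
    return np.clip(delta - lr * np.sign(w), -eps, eps)

def steps_to_flip(x, delta0, max_steps=200):
    """Online phase: count steps from warm start delta0 until score < 0."""
    delta = delta0.copy()
    for step in range(max_steps):
        if w @ (x + delta) < 0:
            return step
        delta = push_down(delta)
    return max_steps

def offline_warmup(steps=10):
    """Offline phase: precompute a universal warm-start perturbation."""
    delta = np.zeros_like(w)
    for _ in range(steps):
        delta = push_down(delta)
    return delta
```

Because the warm start is already several steps along the attack trajectory, the online refinement flips the prediction in strictly fewer iterations than a cold start from zero.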
no code implementations • 10 Oct 2021 • Bilel Tarchoun, Ihsen Alouani, Anouar Ben Khalifa, Mohamed Ali Mahjoub
In this paper, we study the effect of view angle on the effectiveness of an adversarial patch.
no code implementations • 27 Jul 2021 • Nicolas Fleury, Theo Dubrunquez, Ihsen Alouani
Since classical analysis techniques may be limited in the case of zero-days, machine-learning-based techniques have recently emerged as an automatic PDF-malware detection method able to generalize from a set of training samples.
no code implementations • 11 Mar 2021 • Md Shohidul Islam, Ihsen Alouani, Khaled N. Khasawneh
Machine learning-based hardware malware detectors (HMDs) offer a potential game changing advantage in defending systems against malware.
no code implementations • 5 Jan 2021 • Ihsen Alouani, Anouar Ben Khalifa, Farhad Merchant, Rainer Leupers
Moreover, in 100% of the tested machine-learning applications, the accuracy of posit-implemented systems is higher than the classical floating-point-based ones.
1 code implementation • 9 Dec 2020 • Rida El-Allami, Alberto Marchisio, Muhammad Shafique, Ihsen Alouani
We thoroughly study SNNs security under different adversarial attacks in the strong white-box setting, with different noise budgets and under variable spiking parameters.
1 code implementation • 13 Jun 2020 • Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh
We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios.
no code implementations • 16 May 2020 • Valerio Venceslai, Alberto Marchisio, Ihsen Alouani, Maurizio Martina, Muhammad Shafique
Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems.