Search Results for author: Ihsen Alouani

Found 22 papers, 2 papers with code

Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks

no code implementations • 7 May 2024 • Hamed Poursiami, Ihsen Alouani, Maryam Parsa

As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial.

SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications

no code implementations • 18 Mar 2024 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Bassem Ouni, Muhammad Shafique

In this paper, we introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to comprehensively disrupt monocular depth estimation (MDE) in autonomous navigation applications.

Autonomous Driving • Autonomous Navigation • +2

BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks

no code implementations • 1 Feb 2024 • Hamed Poursiami, Ihsen Alouani, Maryam Parsa

In particular, model inversion (MI) attacks enable the reconstruction of data samples that were used to train the model.

Privacy Preserving
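
For readers unfamiliar with model inversion, below is a minimal sketch of the generic gradient-based variant (an illustration of the general idea, not the specific attack evaluated in BrainLeaks): a synthetic input is optimized until the trained model assigns high confidence to a target class. The `model` placeholder, input shape, and hyperparameters are all assumptions.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 1, 28, 28),
                 steps=500, lr=0.1):
    """Gradient-based model inversion: optimize a synthetic input so the
    trained model assigns high confidence to `target_class`, recovering a
    class-representative image leaked from the training data."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target-class log-probability; the small L2 penalty
        # keeps the reconstruction in a plausible intensity range.
        loss = -F.log_softmax(logits, dim=1)[0, target_class] + 1e-3 * x.pow(2).sum()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return x.detach()
```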

Evasive Hardware Trojan through Adversarial Power Trace

no code implementations • 4 Jan 2024 • Behnam Omidi, Khaled N. Khasawneh, Ihsen Alouani

We introduce a hardware Trojan (HT) obfuscation (HTO) approach that allows HTs to bypass side-channel-based detection.

Side Channel Analysis

May the Noise be with you: Adversarial Training without Adversarial Examples

no code implementations • 12 Dec 2023 • Ayoub Arous, Andres F Lopez-Lopera, Nael Abu-Ghazaleh, Ihsen Alouani

We model the propagation of noise through the layers, introducing a closed-form stochastic loss function that encapsulates a noise variance parameter.
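
As a loose illustration of training with injected noise (not the paper's closed-form stochastic loss), the layer below adds zero-mean Gaussian noise with variance sigma^2 to its pre-activations during training only; the fixed scalar sigma and all names are assumptions.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer that injects zero-mean Gaussian noise (std = sigma)
    into its pre-activations during training, and runs noise-free at
    evaluation time."""
    def __init__(self, in_features, out_features, sigma=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.sigma = sigma  # the noise variance parameter is sigma**2

    def forward(self, x):
        out = self.linear(x)
        if self.training:
            out = out + self.sigma * torch.randn_like(out)
        return out
```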

Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems

no code implementations • 30 Nov 2023 • Bilel Tarchoun, Quazi Mishkatul Alam, Nael Abu-Ghazaleh, Ihsen Alouani

Adversarial patches exemplify the tangible manifestation of the threat posed by adversarial attacks on Machine Learning (ML) models in real-world scenarios.

Object Detection • +1

Attention Deficit is Ordered! Fooling Deformable Vision Transformers with Collaborative Adversarial Patches

no code implementations • 21 Nov 2023 • Quazi Mishkatul Alam, Bilel Tarchoun, Ihsen Alouani, Nael Abu-Ghazaleh

The latest generation of transformer-based vision models has proven to be superior to Convolutional Neural Network (CNN)-based models across several vision tasks, largely attributed to their remarkable prowess in relation modeling.

Object Detection

DeepMem: ML Models as storage channels and their (mis-)applications

no code implementations • 17 Jul 2023 • Md Abdullah Al Mamun, Quazi Mishkatul Alam, Erfan Shaigani, Pedram Zaree, Ihsen Alouani, Nael Abu-Ghazaleh

In this paper, we propose a novel information-theoretic perspective on the problem; we consider the ML model as a storage channel whose capacity increases with overparameterization.

Data Augmentation
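
To make the storage-channel framing concrete, here is a toy write/read primitive under assumed conditions (a 2-class classifier and synthetic random "key" inputs); it illustrates the general idea of writing bits into spare model capacity, not the paper's construction.

```python
import torch
import torch.nn as nn

def write_bits(model, keys, bits, epochs=200, lr=1e-3):
    """Toy 'write': fine-tune the model to memorize each synthetic key
    input as its payload bit (class 0 or 1)."""
    labels = torch.tensor(bits)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss_fn(model(keys), labels).backward()
        optimizer.step()

def read_bits(model, keys):
    """Toy 'read': recover the payload by querying the same keys."""
    with torch.no_grad():
        return model(keys).argmax(dim=1).tolist()

# Hypothetical usage, with random out-of-distribution inputs as 'addresses':
# keys = torch.rand(8, 1, 28, 28)
# write_bits(model, keys, [1, 0, 1, 1, 0, 0, 1, 0])
# payload = read_bits(model, keys)
```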

DAP: A Dynamic Adversarial Patch for Evading Person Detectors

no code implementations • 19 May 2023 • Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

Patch-based adversarial attacks have been shown to compromise the robustness and reliability of computer vision systems.

Jedi: Entropy-based Localization and Removal of Adversarial Patches

no code implementations • CVPR 2023 • Bilel Tarchoun, Anouar Ben Khalifa, Mohamed Ali Mahjoub, Nael Abu-Ghazaleh, Ihsen Alouani

Jedi tackles the patch localization problem from an information-theory perspective and leverages two new ideas: (1) it improves the identification of potential patch regions using entropy analysis, showing that the entropy of adversarial patches is high, even in naturalistic patches; and (2) it improves the localization of adversarial patches using an autoencoder that can complete patch regions from high-entropy kernels.
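
A minimal sketch of the entropy-analysis step (the general technique, not Jedi's exact implementation): compute a sliding-window Shannon entropy map over the image and flag high-entropy windows as candidate patch regions. The window size, stride, and bin count below are illustrative.

```python
import numpy as np

def local_entropy_map(gray, win=32, stride=8, bins=32):
    """Sliding-window Shannon entropy over a grayscale image (uint8 array).
    Adversarial patches tend to stand out as high-entropy regions."""
    h, w = gray.shape
    rows = range(0, h - win + 1, stride)
    cols = range(0, w - win + 1, stride)
    ent = np.zeros((len(rows), len(cols)))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            hist, _ = np.histogram(gray[r:r + win, c:c + win],
                                   bins=bins, range=(0, 256))
            p = hist[hist > 0] / hist.sum()      # window intensity distribution
            ent[i, j] = -(p * np.log2(p)).sum()  # Shannon entropy in bits
    return ent  # threshold this map to flag candidate patch regions
```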

Exploring Machine Learning Privacy/Utility trade-off from a hyperparameters Lens

no code implementations • 3 Mar 2023 • Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

Towards exploring new ground for a better privacy-utility trade-off, this work asks: (i) whether models' hyperparameters have any inherent impact on ML models' privacy-preserving properties, and (ii) whether they have any impact on the privacy/utility trade-off of differentially private models.

Privacy Preserving

AdvART: Adversarial Art for Camouflaged Object Detection Attacks

no code implementations • 3 Mar 2023 • Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, Ihsen Alouani

Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world by introducing prominent, maliciously designed physical perturbations.

Object Detection • +1

APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation

no code implementations • 2 Mar 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

APARATE results in a mean depth estimation error surpassing $0.5$ and significantly impacts as much as $99\%$ of the targeted region when applied to CNN-based MDE models.

Autonomous Driving • Autonomous Navigation • +3

ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints

no code implementations • 5 Jan 2022 • Amira Guesmi, Khaled N. Khasawneh, Nael Abu-Ghazaleh, Ihsen Alouani

Thus, we propose ROOM, a novel Real-time Online-Offline attack construction Model where an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.

Adversarial Attack • BIG-bench Machine Learning
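
A rough sketch of the offline-warm-start idea under stated assumptions (a PGD-style iteration, an L-infinity budget, and a `warm_start` perturbation precomputed offline); this is an illustration of the concept, not the authors' ROOM implementation.

```python
import time
import torch

def online_attack(model, x, y, warm_start, eps=8/255, alpha=2/255,
                  budget_s=0.05):
    """Online phase: start PGD from an offline-precomputed perturbation
    (`warm_start`) instead of random noise, and iterate only until the
    real-time budget expires."""
    delta = warm_start.clone().detach().requires_grad_(True)
    loss_fn = torch.nn.CrossEntropyLoss()
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                # untargeted ascent step
            delta.clamp_(-eps, eps)                     # stay in the L-inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)    # keep the image valid
    return (x + delta).detach()
```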

PDF-Malware: An Overview on Threats, Detection and Evasion Attacks

no code implementations • 27 Jul 2021 • Nicolas Fleury, Theo Dubrunquez, Ihsen Alouani

Since classical analysis techniques may be limited in the case of zero-days, machine-learning-based techniques have recently emerged as an automatic PDF-malware detection approach that can generalize from a set of training samples.

Malware Detection
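
As an illustration of the generic learning-based approach (not this survey's own system), simple structural keyword counts extracted from raw PDF bytes can feed an off-the-shelf classifier; the marker list and the training-data paths are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier

# Structural markers commonly counted by learned PDF-malware detectors.
MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch",
           b"/EmbeddedFile", b"/AA"]

def pdf_features(path):
    """Count suspicious markers in the raw PDF bytes, plus the file size."""
    with open(path, "rb") as f:
        data = f.read()
    return [data.count(m) for m in MARKERS] + [len(data)]

# Hypothetical training data: lists of benign and malicious sample paths.
# X = [pdf_features(p) for p in benign_paths + malicious_paths]
# y = [0] * len(benign_paths) + [1] * len(malicious_paths)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```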

Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling

no code implementations • 11 Mar 2021 • Md Shohidul Islam, Ihsen Alouani, Khaled N. Khasawneh

Machine learning-based hardware malware detectors (HMDs) offer a potential game-changing advantage in defending systems against malware.

Adversarial Attack

An Investigation on Inherent Robustness of Posit Data Representation

no code implementations • 5 Jan 2021 • Ihsen Alouani, Anouar Ben Khalifa, Farhad Merchant, Rainer Leupers

Moreover, in 100% of the tested machine-learning applications, the accuracy of posit-implemented systems is higher than that of classical floating-point-based ones.

Hardware Architecture

Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters

1 code implementation • 9 Dec 2020 • Rida El-Allami, Alberto Marchisio, Muhammad Shafique, Ihsen Alouani

We thoroughly study the security of SNNs under different adversarial attacks in the strong white-box setting, with different noise budgets and under variable spiking parameters.
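
For context on "noise budgets", here is a minimal FGSM sketch (a standard white-box attack, not necessarily the exact attacks used in the paper): the budget is the epsilon bounding the perturbation, and robustness studies sweep it.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step white-box attack: move the input by eps in the direction
    of the loss gradient's sign; eps is the noise budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Sweeping noise budgets, as in white-box robustness studies:
# for eps in (2/255, 4/255, 8/255):
#     x_adv = fgsm(model, x, y, eps)
```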

Defensive Approximation: Securing CNNs using Approximate Computing

1 code implementation • 13 Jun 2020 • Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh

We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios.
