Search Results for author: Alon Zolfi

Found 7 papers, 2 papers with code

DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers

no code implementations • 4 Feb 2024 • Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, Asaf Shabtai

In this paper, we present DeSparsify, an attack targeting the availability of vision transformers that use token sparsification mechanisms.

Tasks: Adversarial Attack, Image Classification, +2
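No code is listed for this entry, so the following is only a rough PyTorch sketch of the availability-attack idea described in the snippet: optimize a bounded perturbation so that a toy token-scoring module keeps every token, defeating the purpose of sparsification. The model, patch layout, and perturbation budget are illustrative assumptions, not DeSparsify's actual implementation.

```python
# Hypothetical sketch of an availability attack on token sparsification.
# The ViT token-scoring head is a toy stand-in, not DeSparsify's code.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyTokenScorer(nn.Module):
    """Stand-in for a ViT block that assigns a keep-probability to each patch token."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(3 * 16 * 16, dim)   # 16x16 patches of an RGB image
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        # x: (B, 3, 224, 224) -> patch tokens (B, 196, 768) -> keep probabilities (B, 196)
        patches = x.unfold(2, 16, 16).unfold(3, 16, 16)            # (B, 3, 14, 14, 16, 16)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(x.size(0), 196, -1)
        return torch.sigmoid(self.score(self.embed(patches))).squeeze(-1)

model = ToyTokenScorer().eval()
image = torch.rand(1, 3, 224, 224)
delta = torch.zeros_like(image, requires_grad=True)
eps, steps, lr = 8 / 255, 50, 1e-2                                  # assumed attack budget

for _ in range(steps):
    keep_probs = model(torch.clamp(image + delta, 0, 1))
    # Availability objective: push every token's keep-probability toward 1,
    # so the sparsification mechanism discards nothing and compute stays high.
    loss = -keep_probs.mean()
    loss.backward()
    with torch.no_grad():
        delta -= lr * delta.grad.sign()
        delta.clamp_(-eps, eps)
        delta.grad.zero_()

print(f"mean keep-probability after attack: {model(torch.clamp(image + delta, 0, 1)).mean():.3f}")
```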

QuantAttack: Exploiting Dynamic Quantization to Attack Vision Transformers

no code implementations • 3 Dec 2023 • Amit Baras, Alon Zolfi, Yuval Elovici, Asaf Shabtai

However, their dynamic behavior and average-case performance assumptions make them vulnerable to a novel threat vector: adversarial attacks that target the model's efficiency and availability.

Tasks: Quantization
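Again no code is listed; as a loose, hypothetical illustration of the efficiency/availability threat vector described above, the sketch below nudges a toy layer's activations toward the outlier range that outlier-aware dynamic quantization typically routes through a slower high-precision path. The layer, threshold, and budget are stand-ins, not the QuantAttack procedure.

```python
# Hypothetical illustration only: inflate activation outliers so that an
# outlier-aware dynamic quantization scheme would fall back to its slower
# high-precision path. The layer, budget, and threshold are arbitrary stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

layer = nn.Linear(128, 128).eval()
x = torch.randn(1, 128)
delta = torch.zeros_like(x, requires_grad=True)
eps, steps, lr = 1.0, 100, 0.05
outlier_thr = 6.0        # a typical cut-off in outlier-aware quantization schemes (assumed)

print("max |activation| before:", layer(x).abs().max().item())
for _ in range(steps):
    act = layer(x + delta)
    # Smooth surrogate for "number of activations above the outlier threshold".
    loss = -F.softplus(act.abs() - outlier_thr).sum()
    loss.backward()
    with torch.no_grad():
        delta -= lr * delta.grad.sign()
        delta.clamp_(-eps, eps)
        delta.grad.zero_()

print("max |activation| after: ", layer(x + delta).abs().max().item())
```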

YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection

no code implementations • 5 Dec 2022 • Alon Zolfi, Guy Amit, Amit Baras, Satoru Koda, Ikuya Morikawa, Yuval Elovici, Asaf Shabtai

In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task.

Tasks: Classification, Multi-class Classification, +6
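The snippet names the idea at a high level; below is a loose illustration (not the YolOOD method itself) of scoring out-of-distribution inputs from a multi-label classifier's per-class sigmoid confidences, in the spirit of the objectness-style scores used by object detectors. The classifier, feature size, and threshold are assumptions.

```python
# Loose illustration (not YolOOD itself): flag a sample as OOD when a multi-label
# classifier assigns no class a confident sigmoid score, in the spirit of the
# objectness-style confidences used in object detectors.
import torch
import torch.nn as nn

torch.manual_seed(0)

num_classes = 20
classifier = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, num_classes)).eval()

def ood_score(features: torch.Tensor) -> torch.Tensor:
    """Higher score = more likely out-of-distribution."""
    class_probs = torch.sigmoid(classifier(features))   # (B, num_classes), multi-label
    return 1.0 - class_probs.max(dim=1).values          # no confident class -> high score

features = torch.randn(4, 512)                           # stand-in backbone features
scores = ood_score(features)
threshold = 0.5                                          # would be calibrated on in-distribution data
print(scores, scores > threshold)
```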

Attacking Object Detector Using A Universal Targeted Label-Switch Patch

no code implementations • 16 Nov 2022 • Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai

However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself.

Tasks: Object
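The referenced attack is only sketched here in hypothetical form: a universal patch pasted onto crops of the target object and optimized so the predicted label switches to an attacker-chosen class. A small classifier stands in for a detector's per-box classification head; the architecture, patch location, and budget are illustrative assumptions.

```python
# Hypothetical sketch of a universal targeted label-switch patch. A toy classifier
# stands in for an object detector's per-box classification head; this is not the
# paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 31 * 31, 10)).eval()
crops = torch.rand(16, 3, 64, 64)            # crops of the target object from different scenes
target_class = 3                             # label the attacker wants assigned to the object
patch = torch.rand(3, 16, 16, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for _ in range(100):
    patched = crops.clone()
    patched[:, :, 24:40, 24:40] = patch.clamp(0, 1)   # apply the same patch on every object crop
    logits = model(patched)
    # Universal targeted objective: one patch should switch the label of every crop.
    labels = torch.full((crops.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("predicted classes with patch:", model(patched).argmax(dim=1).tolist())
```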

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors

1 code implementation • 26 May 2022 • Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years.

Tasks: Autonomous Driving, Object, +2
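The snippet above is background, but the title points at the mechanism; purely to illustrate why non-maximum suppression can become an availability bottleneck, the sketch below times torchvision's NMS on a normal proposal count versus an inflated, sponge-like one. It assumes torchvision is installed and does not show any attack optimization.

```python
# Illustration of why flooding NMS with candidate boxes is an availability concern:
# timing torchvision's NMS on a normal-sized vs. an inflated set of proposals.
# (This only demonstrates the bottleneck, not the paper's attack.)
import time
import torch
from torchvision.ops import nms

torch.manual_seed(0)

def random_boxes(n):
    xy = torch.rand(n, 2) * 600
    wh = torch.rand(n, 2) * 50 + 1
    return torch.cat([xy, xy + wh], dim=1)          # (x1, y1, x2, y2)

for n in (1_000, 100_000):                           # "clean" vs. sponge-like proposal counts
    boxes, scores = random_boxes(n), torch.rand(n)
    start = time.perf_counter()
    keep = nms(boxes, scores, iou_threshold=0.5)
    print(f"{n:>7} candidate boxes -> NMS kept {len(keep):>6} in {time.perf_counter() - start:.3f}s")
```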

Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Model

1 code implementation • 21 Nov 2021 • Alon Zolfi, Shai Avidan, Yuval Elovici, Asaf Shabtai

In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets.

Tasks: Face Recognition, Real-World Adversarial Attack
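A hypothetical sketch of the transferability check described in the snippet: embed a face with and without a mask in several models and see whether cosine similarity stays above a verification threshold. The models here are random stand-ins for real FR architectures, and the "mask" is just a random region; both are assumptions, not the paper's crafted adversarial mask.

```python
# Hedged sketch of a transferability check for a face-mask perturbation: does the
# masked face still verify as the same identity across several embedding models?
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def toy_fr_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128)).eval()

fr_models = {"model_a": toy_fr_model(), "model_b": toy_fr_model()}   # stand-ins for FR architectures
face = torch.rand(1, 3, 112, 112)
masked_face = face.clone()
masked_face[:, :, 60:112, 20:92] = torch.rand(1, 3, 52, 72)          # stand-in for the mask region

threshold = 0.36                                                      # assumed verification threshold
for name, model in fr_models.items():
    sim = F.cosine_similarity(model(face), model(masked_face)).item()
    verdict = "still verified" if sim > threshold else "evaded"
    print(f"{name}: cosine similarity {sim:.2f} -> {verdict}")
```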

The Translucent Patch: A Physical and Universal Attack on Object Detectors

no code implementations • CVPR 2021 • Alon Zolfi, Moshe Kravchik, Yuval Elovici, Asaf Shabtai

Therefore, in our experiments, which are conducted on state-of-the-art object detection models used in autonomous driving, we study the effect of the patch on the detection of both the selected target class and the other classes.

Tasks: Autonomous Driving, Object, +2
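Finally, a bookkeeping sketch of the evaluation described in the snippet: per-class detection rates with and without the patch, separating the selected target class from the other classes. The class names and counts are placeholders, not results from the paper, and a real detector's outputs are assumed.

```python
# Sketch of the evaluation described above: compare per-class detection rates with
# and without the patch, target class vs. all other classes. All numbers below are
# synthetic placeholders.
from collections import Counter

target_class = "stop sign"
ground_truth = Counter({"stop sign": 50, "car": 120, "person": 80})

# counts of ground-truth objects still detected in each condition (placeholders)
detected_clean = Counter({"stop sign": 48, "car": 115, "person": 76})
detected_patched = Counter({"stop sign": 12, "car": 110, "person": 74})

for cls, total in ground_truth.items():
    clean_rate = detected_clean[cls] / total
    patched_rate = detected_patched[cls] / total
    tag = "target" if cls == target_class else "other"
    print(f"{cls:>10} ({tag}): detection rate {clean_rate:.0%} -> {patched_rate:.0%}")
```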
