no code implementations • 4 Feb 2024 • Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, Asaf Shabtai
In this paper, we present DeSparsify, an attack targeting the availability of vision transformers that use token sparsification mechanisms.
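The attack surface here is the sparsification step itself: compute scales with how many tokens survive pruning, so an input crafted to keep every token drives the model toward its dense worst case. A minimal sketch of that intuition, assuming a simple threshold-based keep/drop rule (the names and threshold are illustrative, not the paper's actual mechanism):

```python
# Hypothetical threshold-based token sparsification step, illustrating
# the availability attack surface DeSparsify targets: self-attention
# cost grows with the number of retained tokens.

def sparsify(token_scores, threshold=0.5):
    """Keep only tokens whose importance score exceeds the threshold."""
    kept = [i for i, s in enumerate(token_scores) if s > threshold]
    # Self-attention over the kept tokens costs O(k^2) in the kept length.
    cost = len(kept) ** 2
    return kept, cost

# A benign input concentrates importance on a few tokens...
benign = [0.9, 0.1, 0.8, 0.05, 0.2, 0.1]
# ...while an availability attack pushes every score past the threshold.
adversarial = [0.9] * 6

_, benign_cost = sparsify(benign)       # 2 tokens kept -> cost 4
_, worst_cost = sparsify(adversarial)   # all 6 kept   -> cost 36
print(benign_cost, worst_cost)
```

An attacker optimizing the input to maximize `cost` recovers the dense model's compute, negating the efficiency gains the sparsifier was meant to provide.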
no code implementations • 3 Dec 2023 • Amit Baras, Alon Zolfi, Yuval Elovici, Asaf Shabtai
However, their dynamic behavior and reliance on average-case performance assumptions make them vulnerable to a novel threat vector -- adversarial attacks that target the model's efficiency and availability.
no code implementations • 5 Dec 2022 • Alon Zolfi, Guy Amit, Amit Baras, Satoru Koda, Ikuya Morikawa, Yuval Elovici, Asaf Shabtai
In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task.
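One detection-domain concept that transfers naturally is the per-prediction confidence a detector emits: an in-distribution input should produce at least one confident class, while an OOD input yields uniformly low confidences. A hedged sketch of that idea, assuming a simple max-confidence aggregation (illustrative only, not YolOOD's actual scoring function):

```python
# Illustrative object-detection-style OOD score for multi-label
# classification: treat per-class confidences like a detector's
# objectness scores and flag inputs where no class is confident.

def ood_score(class_confidences):
    """Lower max confidence => more likely out-of-distribution."""
    return 1.0 - max(class_confidences)

def is_ood(class_confidences, threshold=0.5):
    """Flag the input as OOD when its score exceeds the threshold."""
    return ood_score(class_confidences) > threshold

print(is_ood([0.92, 0.10, 0.85]))  # some class is confident -> False
print(is_ood([0.12, 0.08, 0.20]))  # no confident class -> True
```

The threshold here is a hypothetical free parameter; in practice it would be calibrated on in-distribution validation data.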
no code implementations • 16 Nov 2022 • Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai
However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself.
1 code implementation • 26 May 2022 • Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai
Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years.
1 code implementation • 21 Nov 2021 • Alon Zolfi, Shai Avidan, Yuval Elovici, Asaf Shabtai
In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets.
no code implementations • CVPR 2021 • Alon Zolfi, Moshe Kravchik, Yuval Elovici, Asaf Shabtai
Therefore, in our experiments, conducted on state-of-the-art object detection models used in autonomous driving, we study the effect of the patch on the detection of both the selected target class and the other classes.