1 code implementation • 5 Sep 2023 • Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi
In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks.
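As a rough illustration of the underlying idea (not the attack described in the paper), the sketch below measures the latency of repeated black-box queries; input-dependent variation in that latency is the kind of signal a timing side channel leaks. The `predict_fn` interface is a hypothetical stand-in for the target model's query API.

```python
import time
import statistics

def median_query_time(predict_fn, x, trials=100):
    """Median latency of repeated black-box queries to a model.

    Input-dependent timing variation is exactly the signal a timing
    side channel can exploit; `predict_fn` is a hypothetical query
    interface that returns only the model's decision.
    """
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        predict_fn(x)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```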
no code implementations • 27 Nov 2022 • Ron Bitton, Alon Malach, Amiel Meiseles, Satoru Momiyama, Toshinori Araki, Jun Furukawa, Yuval Elovici, Asaf Shabtai
Model agnostic feature attribution algorithms (such as SHAP and LIME) are ubiquitous techniques for explaining the decisions of complex classification models, such as deep neural networks.
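The snippet below is a minimal, self-contained illustration of the model-agnostic setting these attribution algorithms operate in, using the real `shap` library: the classifier is consumed only through its scoring function, never its internals. The toy dataset and model are assumptions made for the sake of a runnable example.

```python
import numpy as np
import shap                                   # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic: SHAP sees only a scoring function plus background
# data, so the classifier is treated as a black box.
explainer = shap.Explainer(lambda d: model.predict_proba(d)[:, 1],
                           shap.maskers.Independent(X))
attributions = explainer(X[:5])   # Explanation with per-feature values
print(attributions.values.shape)  # (5, 8)
```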
no code implementations • 16 Nov 2022 • Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai
However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself.
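A minimal sketch of the placement step such an attack presupposes: pasting a patch into the target object's bounding-box region. This only illustrates where the patch goes; optimizing the patch contents is the paper's contribution and is not shown here.

```python
import numpy as np

def apply_patch(image, patch, top_left):
    """Paste an adversarial patch onto the target object's region.

    image: (H, W, 3) array; patch: (h, w, 3) array;
    top_left: (row, col) corner of the target object's bounding box.
    """
    row, col = top_left
    h, w = patch.shape[:2]
    patched = image.copy()
    patched[row:row + h, col:col + w] = patch  # patch covers the object itself
    return patched
```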
no code implementations • 16 Nov 2022 • Ofir Moshe, Gil Fidel, Ron Bitton, Asaf Shabtai
We compare the interpretability of models trained using our method to that of standard models and of models trained using state-of-the-art adversarial robustness techniques.
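One common proxy for interpretability in such comparisons is the quality of gradient-based saliency maps; the hedged sketch below computes such a map in PyTorch. The paper's actual evaluation protocol may differ.

```python
import torch
import torch.nn.functional as F

def saliency_map(model, x, y):
    """Absolute input gradient of the loss: a common interpretability
    proxy. Robust models often yield sparser, more object-aligned maps.

    x: input batch; y: integer labels; model: any differentiable
    PyTorch classifier.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.abs()
```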
no code implementations • 16 Jan 2022 • Edan Habler, Ron Bitton, Dan Avraham, Dudu Mimran, Eitan Klevansky, Oleg Brodt, Heiko Lehmann, Yuval Elovici, Asaf Shabtai
Next, we explore the various AML threats associated with O-RAN, review a large number of attacks that can be performed to realize these threats, and demonstrate an AML attack on a traffic steering model.
no code implementations • 5 Jul 2021 • Ron Bitton, Nadav Maman, Inderjeet Singh, Satoru Momiyama, Yuval Elovici, Asaf Shabtai
Using the extension, security practitioners can apply attack graph analysis methods in environments that include ML components, providing them with a methodological and practical tool for evaluating the impact and quantifying the risk of a cyberattack targeting an ML production system.
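As a toy illustration of attack graph analysis over an environment containing an ML component (all node names and exploit labels here are hypothetical, and this is not MulVAL's actual modeling language), the graph can be represented as a directed graph and queried for attack paths:

```python
import networkx as nx  # pip install networkx

# Toy attack graph: nodes are attacker privileges/assets, edges are
# exploit steps; the ML component appears as just another asset.
g = nx.DiGraph()
g.add_edge("internet", "web_server", exploit="CVE-XXXX (RCE)")
g.add_edge("web_server", "feature_store", exploit="weak credentials")
g.add_edge("feature_store", "ml_model", exploit="training-data poisoning")

path = nx.shortest_path(g, "internet", "ml_model")
print(" -> ".join(path))  # internet -> web_server -> feature_store -> ml_model
```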
no code implementations • 23 Sep 2020 • Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai
Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples.
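The classic one-step FGSM construction (Goodfellow et al., 2015), sketched below, gives a concrete sense of how easily such examples are found near benign inputs; it is background for the paper's claim, not its method.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step Fast Gradient Sign Method.

    Perturbs each pixel by +/- eps in the direction that increases the
    loss, which is usually enough to flip the prediction of an
    undefended classifier.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```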
no code implementations • 10 Aug 2020 • Hodaya Binyamini, Ron Bitton, Masaki Inokuchi, Tomohiko Yagyu, Yuval Elovici, Asaf Shabtai
Given a description of a security vulnerability, the proposed framework first extracts the relevant attack entities required to model the attack, fills in missing information on the vulnerability, and derives a new interaction rule that models the attack; this new rule is then integrated within the MulVAL attack graph tool.
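MulVAL interaction rules are written in a Datalog-style syntax; the hedged sketch below only illustrates the final step of rendering extracted attack entities as such a rule. The predicates shown are standard MulVAL predicates, but the extraction output format is a hypothetical assumption, not the paper's actual pipeline.

```python
# Hypothetical extraction output: pre/postconditions mined from a
# vulnerability description (format is an assumption, not the paper's).
entities = {
    "preconditions": ["netAccess(H, Protocol, Port)",
                      "vulExists(H, VulnID, Service)"],
    "postcondition": "execCode(H, Privilege)",
}

def to_mulval_rule(e, description):
    """Render the entities as a MulVAL-style interaction rule."""
    body = ",\n     ".join(e["preconditions"])
    return (f"interaction_rule(\n  ({e['postcondition']} :-\n"
            f"     {body}),\n  rule_desc('{description}', 1.0)).")

print(to_mulval_rule(entities, "derived from vulnerability description"))
```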
no code implementations • 30 Jun 2020 • Noam Moscovich, Ron Bitton, Yakov Mallah, Masaki Inokuchi, Tomohiko Yagyu, Meir Kalech, Yuval Elovici, Asaf Shabtai
The results show that Autosploit is able to automatically identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.
no code implementations • 6 Feb 2020 • Guy Amit, Ishai Rosenberg, Moshe Levy, Ron Bitton, Asaf Shabtai, Yuval Elovici
In many cases, neural network classifiers are likely to be exposed to input data that lies outside of their training distribution.
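A widely used baseline for flagging such out-of-distribution inputs is the maximum softmax probability score (Hendrycks & Gimpel, 2017), sketched below; this is context for the problem setting rather than a reimplementation of the paper's method.

```python
import numpy as np

def max_softmax_score(logits):
    """Baseline OOD score: a low maximum softmax probability suggests
    the input may lie outside the training distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def is_ood(logits, threshold=0.5):
    return max_softmax_score(logits) < threshold
```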
no code implementations • 8 Sep 2019 • Gil Fidel, Ron Bitton, Asaf Shabtai
We evaluate our method by building an extensive dataset of adversarial examples over the popular CIFAR-10 and MNIST datasets, and training a neural network-based detector to distinguish between normal and adversarial inputs.
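The supervised framing of such a detector is easy to sketch: label clean inputs 0 and adversarial inputs 1, then fit a binary classifier. The linear model below is a stand-in for the paper's neural-network detector, and the flattened feature representation is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_adv_detector(x_clean, x_adv):
    """Fit a binary detector: 0 = normal input, 1 = adversarial input.

    x_clean / x_adv: 2-D arrays of flattened images or model features.
    A linear model stands in for a neural-network detector.
    """
    X = np.concatenate([x_clean, x_adv])
    y = np.concatenate([np.zeros(len(x_clean)), np.ones(len(x_adv))])
    return LogisticRegression(max_iter=1000).fit(X, y)
```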