no code implementations • 16 Nov 2022 • Ofir Moshe, Gil Fidel, Ron Bitton, Asaf Shabtai
We compare the interpretability of models trained with our method to that of standard models and of models trained with state-of-the-art adversarial robustness techniques.
no code implementations • 23 Sep 2020 • Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai
Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples.
no code implementations • 8 Sep 2019 • Gil Fidel, Ron Bitton, Asaf Shabtai
We evaluate our method by building an extensive dataset of adversarial examples over the popular CIFAR-10 and MNIST datasets, and training a neural network-based detector to distinguish between normal and adversarial inputs.
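The detection setup described above can be sketched in miniature. The code below is a hypothetical illustration, not the authors' method: it uses synthetic 2-D data in place of CIFAR-10/MNIST, crafts FGSM-style adversarial examples against a small linear classifier, and trains a simple detector (a threshold on distance to the decision boundary) to separate normal from adversarial inputs. All names and the margin-based detector feature are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (hypothetical stand-in for CIFAR-10/MNIST inputs).
n = 200
X0 = rng.normal([-2.0, 0.0], 0.5, size=(n, 2))
X1 = rng.normal([2.0, 0.0], 0.5, size=(n, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a small linear classifier by gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# FGSM-style adversarial examples: step along the sign of the input gradient.
# For the logistic loss, d(loss)/dx = (p - y) * w.
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
margin = np.abs(X @ w + b)
eps = 0.5 * margin.mean() / np.abs(w).sum()  # heuristic step size (assumption)
X_adv = X + eps * np.sign(grad_x)

# Detector: adversarial inputs end up closer to the decision boundary,
# so threshold on the margin |w.x + b| to flag them.
m_clean = np.abs(X @ w + b)
m_adv = np.abs(X_adv @ w + b)
thresh = 0.5 * (m_clean.mean() + m_adv.mean())
pred_adv = np.concatenate([m_clean, m_adv]) < thresh
truth = np.concatenate([np.zeros(2 * n), np.ones(2 * n)])
acc = (pred_adv == truth).mean()
print(f"detector accuracy: {acc:.2f}")
```

On this toy problem the margin feature alone separates the two populations well; the paper's detector is a neural network operating on richer signals, which this sketch does not attempt to reproduce.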