Search Results for author: Gil Fidel

Found 3 papers, 0 papers with code

Improving Interpretability via Regularization of Neural Activation Sensitivity

no code implementations • 16 Nov 2022 • Ofir Moshe, Gil Fidel, Ron Bitton, Asaf Shabtai

We compare the interpretability of models trained using our method to that of standard models and of models trained using state-of-the-art adversarial robustness techniques.

Tasks: Adversarial Robustness, Explanation Fidelity Evaluation, +1
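
No code implementation is listed for this paper, so the following is only a minimal PyTorch sketch of what regularizing neural activation sensitivity could look like: penalizing the norm of the gradient of a layer's activations with respect to the input. The names `sensitivity_penalty`, `model_features`, and `lambda_sens` are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

def sensitivity_penalty(model_features: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Mean squared input-gradient norm of a layer's activations (illustrative)."""
    x = x.clone().requires_grad_(True)
    acts = model_features(x)  # hidden activations h(x)
    # Sum to a scalar so a single backward pass yields d(sum h)/dx per input;
    # create_graph=True keeps the penalty differentiable for training.
    grads, = torch.autograd.grad(acts.sum(), x, create_graph=True)
    return grads.pow(2).flatten(1).sum(dim=1).mean()

# Illustrative training objective: task loss plus the sensitivity term.
# `model.features` and `lambda_sens` are assumed names, not from the paper.
# loss = criterion(model(x), y) + lambda_sens * sensitivity_penalty(model.features, x)
```
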

Adversarial robustness via stochastic regularization of neural activation sensitivity

no code implementations • 23 Sep 2020 • Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai

Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples.

Tasks: Adversarial Robustness

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures

no code implementations • 8 Sep 2019 • Gil Fidel, Ron Bitton, Asaf Shabtai

We evaluate our method by building an extensive dataset of adversarial examples over the popular CIFAR-10 and MNIST datasets, and training a neural network-based detector to distinguish between normal and adversarial inputs.
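
No code is listed here either; the sketch below shows one way such a SHAP-signature detector could be wired up, assuming a trained PyTorch classifier and the `shap` package. All inputs (`model`, `background`, `x_normal`, `x_adv`) are hypothetical placeholders, and the paper's actual detector architecture may differ.

```python
import numpy as np
import shap
import torch
from sklearn.neural_network import MLPClassifier

def fit_shap_detector(model: torch.nn.Module,
                      background: torch.Tensor,
                      x_normal: torch.Tensor,
                      x_adv: torch.Tensor) -> MLPClassifier:
    """Fit a detector on flattened SHAP signatures (0 = normal, 1 = adversarial)."""
    explainer = shap.DeepExplainer(model, background)

    def signature(x: torch.Tensor) -> np.ndarray:
        # DeepExplainer returns one attribution array per output class;
        # concatenate them into a single feature vector per input.
        vals = explainer.shap_values(x)
        return np.concatenate([v.reshape(v.shape[0], -1) for v in vals], axis=1)

    X = np.vstack([signature(x_normal), signature(x_adv)])
    y = np.concatenate([np.zeros(len(x_normal)), np.ones(len(x_adv))])
    # Small MLP as the detector; the paper's exact detector is not specified here.
    return MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(X, y)
```
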
