Search Results for author: Ron Bitton

Found 11 papers, 1 paper with code

The Adversarial Implications of Variable-Time Inference

1 code implementation · 5 Sep 2023 · Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi

In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks.

Object Detection
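
The timing dependence at the heart of this attack surface is easy to illustrate. Below is a minimal sketch (not the paper's code; `model` and `x` are hypothetical stand-ins) of measuring per-input inference latency, which an attacker can compare across crafted inputs to infer data-dependent behavior without ever seeing the model's outputs:

```python
import time
import statistics

def time_inference(model, x, repeats=50):
    """Median wall-clock latency of model(x) over several runs."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        model(x)  # the victim's inference call
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```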

Latent SHAP: Toward Practical Human-Interpretable Explanations

no code implementations · 27 Nov 2022 · Ron Bitton, Alon Malach, Amiel Meiseles, Satoru Momiyama, Toshinori Araki, Jun Furukawa, Yuval Elovici, Asaf Shabtai

Model agnostic feature attribution algorithms (such as SHAP and LIME) are ubiquitous techniques for explaining the decisions of complex classification models, such as deep neural networks.

Classification
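
For readers unfamiliar with the baseline this paper builds on, here is a minimal sketch of model-agnostic SHAP attribution (standard `shap` usage, not Latent SHAP itself); it assumes the `shap` and `scikit-learn` packages:

```python
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# KernelExplainer treats the model as a black box: it needs only a
# prediction function and a background sample.
explainer = shap.KernelExplainer(model.predict_proba, X[:50])
attributions = explainer.shap_values(X[:5])  # per-feature contributions
```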

Attacking Object Detector Using A Universal Targeted Label-Switch Patch

no code implementations · 16 Nov 2022 · Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai

However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself.

Object
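
The general shape of such an attack is a gradient-based patch-optimization loop. The sketch below is purely illustrative (not the paper's exact method); `loader` and `detector_loss` are hypothetical placeholders for batches containing the target object and for a loss scoring how strongly the detector assigns the attacker's target label to the patched object:

```python
import torch

patch = torch.rand(3, 64, 64, requires_grad=True)  # learnable patch pixels
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch, y0=80, x0=80):
    """Paste the patch onto a fixed region of each image in the batch."""
    patched = images.clone()
    patched[:, :, y0:y0 + 64, x0:x0 + 64] = patch.clamp(0, 1)
    return patched

for images in loader:  # hypothetical dataloader
    loss = detector_loss(apply_patch(images, patch), target_label=0)
    optimizer.zero_grad()
    loss.backward()   # gradients flow back into the patch pixels
    optimizer.step()
```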

Improving Interpretability via Regularization of Neural Activation Sensitivity

no code implementations · 16 Nov 2022 · Ofir Moshe, Gil Fidel, Ron Bitton, Asaf Shabtai

We compare the interpretability of models trained using our method to that of standard models and models trained using state-of-the-art adversarial robustness techniques.

Adversarial Robustness · Explanation Fidelity Evaluation · +1
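
One plausible reading of "regularizing activation sensitivity" (a rough sketch, not the authors' exact formulation) is to penalize the norm of the Jacobian of a hidden activation with respect to the input during training; `model_hidden` below is a hypothetical function returning a hidden layer's activations:

```python
import torch
import torch.nn.functional as F

def sensitivity_penalty(model_hidden, x):
    """L2 norm of d(hidden activations)/d(input), via autograd."""
    x = x.clone().requires_grad_(True)
    h = model_hidden(x)  # hidden-layer activations
    grad = torch.autograd.grad(h.sum(), x, create_graph=True)[0]
    return grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()

def training_loss(logits, y, model_hidden, x, lam=0.1):
    return F.cross_entropy(logits, y) + lam * sensitivity_penalty(model_hidden, x)
```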

Adversarial Machine Learning Threat Analysis and Remediation in Open Radio Access Network (O-RAN)

no code implementations · 16 Jan 2022 · Edan Habler, Ron Bitton, Dan Avraham, Dudu Mimran, Eitan Klevansky, Oleg Brodt, Heiko Lehmann, Yuval Elovici, Asaf Shabtai

Next, we explore the various AML threats associated with O-RAN, review a large number of attacks that can be performed to realize these threats, and demonstrate an AML attack on a traffic steering model.

Anomaly Detection · BIG-bench Machine Learning

Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems

no code implementations · 5 Jul 2021 · Ron Bitton, Nadav Maman, Inderjeet Singh, Satoru Momiyama, Yuval Elovici, Asaf Shabtai

Using the extension, security practitioners can apply attack graph analysis methods in environments that include ML components, giving them a methodological and practical tool for evaluating the impact and quantifying the risk of a cyberattack targeting an ML production system.

BIG-bench Machine Learning · Graph Generation
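
As a toy illustration of what attack graph analysis over an ML production system looks like, the sketch below models attack steps as a directed graph and checks reachability (networkx stands in for MulVAL-style tooling; the node names are invented):

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "web_api"),           # exposed endpoint
    ("web_api", "feature_store"),      # API writes to features
    ("feature_store", "ml_model"),     # poisoned features reach the model
    ("ml_model", "business_decision"),
])

# Can an external attacker reach the ML model, and via which steps?
print(nx.has_path(g, "internet", "ml_model"))
print(nx.shortest_path(g, "internet", "business_decision"))
```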

Adversarial robustness via stochastic regularization of neural activation sensitivity

no code implementations · 23 Sep 2020 · Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai

Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples.

Adversarial Robustness
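
To make "adversarial example" concrete, here is a minimal FGSM sketch (a standard attack, not this paper's defense); `model`, `x`, and `y` are hypothetical:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Perturb x by eps in the direction that increases the loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```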

An Automated, End-to-End Framework for Modeling Attacks From Vulnerability Descriptions

no code implementations · 10 Aug 2020 · Hodaya Binyamini, Ron Bitton, Masaki Inokuchi, Tomohiko Yagyu, Yuval Elovici, Asaf Shabtai

Given a description of a security vulnerability, the proposed framework first extracts the relevant attack entities required to model the attack, completes missing information on the vulnerability, and derives a new interaction rule that models the attack; this rule is then integrated into the MulVAL attack graph tool.
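
A purely illustrative sketch of the final step of such a pipeline: filling a MulVAL-style interaction-rule template from entities extracted out of a vulnerability description (the template, entity names, and CVE placeholder are all invented here, not taken from the paper):

```python
RULE_TEMPLATE = """interaction_rule(
  (execCode(Host, {privilege}) :-
     vulExists(Host, '{cve_id}', {software}, remoteExploit, privEscalation),
     networkServiceInfo(Host, {software}, {protocol}, {port}, {privilege}),
     netAccess(Host, {protocol}, {port})),
  rule_desc('{description}', 1.0))."""

entities = {  # hypothetical output of the entity-extraction step
    "cve_id": "CVE-2020-XXXX", "software": "httpService",
    "protocol": "tcp", "port": "80", "privilege": "root",
    "description": "remote exploit of a network service",
}
print(RULE_TEMPLATE.format(**entities))
```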

Autosploit: A Fully Automated Framework for Evaluating the Exploitability of Security Vulnerabilities

no code implementations · 30 Jun 2020 · Noam Moscovich, Ron Bitton, Yakov Mallah, Masaki Inokuchi, Tomohiko Yagyu, Meir Kalech, Yuval Elovici, Asaf Shabtai

The results show that Autosploit is able to automatically identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.
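
The general idea can be sketched as follows (not Autosploit's implementation): attempt an exploit under many system configurations, then learn which properties drive success; `try_exploit` and the property names are hypothetical:

```python
from itertools import product
from sklearn.tree import DecisionTreeClassifier

# Enumerate hypothetical system configurations (property names invented).
configs = [{"windows": w, "aslr": a, "patch_level": p}
           for w, a, p in product((0, 1), (0, 1), (1, 2, 3))]
X = [[c["windows"], c["aslr"], c["patch_level"]] for c in configs]
y = [try_exploit(c) for c in configs]  # 1 if the exploit attempt succeeded

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The fitted tree's splits indicate which properties gate exploitability.
```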

GIM: Gaussian Isolation Machines

no code implementations · 6 Feb 2020 · Guy Amit, Ishai Rosenberg, Moshe Levy, Ron Bitton, Asaf Shabtai, Yuval Elovici

In many cases, neural network classifiers are likely to be exposed to input data that lies outside their training distribution.

Benchmarking · General Classification · +1
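
A hedged sketch of a Gaussian-based out-of-distribution check (the general idea of isolating classes with Gaussians, not GIM's exact mechanism): fit one Gaussian per class over feature vectors and flag inputs that no class explains well:

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(feats, labels):
    """Fit one Gaussian per class over the training feature vectors."""
    dists = {}
    for c in np.unique(labels):
        fc = feats[labels == c]
        cov = np.cov(fc.T) + 1e-6 * np.eye(feats.shape[1])  # regularized
        dists[c] = multivariate_normal(fc.mean(axis=0), cov)
    return dists

def is_ood(x, dists, threshold=-50.0):
    """Flag x when no class assigns it a plausible log-density."""
    return max(d.logpdf(x) for d in dists.values()) < threshold
```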

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures

no code implementations · 8 Sep 2019 · Gil Fidel, Ron Bitton, Asaf Shabtai

We evaluate our method by building an extensive dataset of adversarial examples over the popular CIFAR-10 and MNIST datasets, and training a neural network-based detector to distinguish between normal and adversarial inputs.
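
A simplified sketch of the detection idea (the paper uses SHAP signatures of internal layers; plain black-box SHAP values stand in here, and `model`, `background`, `x_clean`, and `x_adv` are hypothetical placeholders):

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

f = lambda X: model.predict_proba(X)[:, 1]       # scalar model output
explainer = shap.KernelExplainer(f, background)  # black-box SHAP

sig_clean = np.asarray(explainer.shap_values(x_clean))  # clean signatures
sig_adv = np.asarray(explainer.shap_values(x_adv))      # adversarial ones

X = np.vstack([sig_clean, sig_adv])
y = np.concatenate([np.zeros(len(sig_clean)), np.ones(len(sig_adv))])
detector = LogisticRegression(max_iter=1000).fit(X, y)  # the detector
```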
