Search Results for author: Amir Rahmati

Found 12 papers, 4 papers with code

Accelerating Certified Robustness Training via Knowledge Transfer

1 code implementation • 25 Oct 2022 • Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati

Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems.

Transfer Learning

Ares: A System-Oriented Wargame Framework for Adversarial ML

1 code implementation • 24 Oct 2022 • Farhan Ahmed, Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati

Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved into an eternal war between defenders, who seek to increase the robustness of ML models against adversarial attacks, and adversaries, who seek to develop better attacks capable of weakening or defeating these defenses.

Transferring Adversarial Robustness Through Robust Representation Matching

1 code implementation • 21 Feb 2022 • Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati

On CIFAR-10, RRM trains a robust model $\sim 1.8\times$ faster than the state-of-the-art.

Adversarial Robustness

Can Attention Masks Improve Adversarial Robustness?

no code implementations • 27 Nov 2019 • Pratik Vaishnavi, Tianji Cong, Kevin Eykholt, Atul Prakash, Amir Rahmati

Focusing on the observation that discrete pixelization in MNIST makes the background completely black and the foreground completely white, we hypothesize that the important property for increasing robustness is the elimination of the image background using attention masks before classifying an object.

Adversarial Robustness
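The hypothesis in the entry above is that zeroing out the background with an attention mask, before the image ever reaches the classifier, is what confers robustness. Below is a minimal sketch of that masking step, assuming an attention map is already available; `apply_attention_mask` and its `threshold` parameter are hypothetical names for illustration, not the paper's API:

```python
import numpy as np

def apply_attention_mask(image, mask, threshold=0.5):
    """Zero out background pixels before classification.

    `image` is an HxW or HxWxC array in [0, 1]; `mask` is an HxW
    attention map in [0, 1]. Pixels whose attention falls below
    `threshold` are treated as background and set to black,
    mirroring the black-background property observed in MNIST.
    """
    keep = (mask >= threshold).astype(image.dtype)
    if image.ndim == 3:          # broadcast the mask over color channels
        keep = keep[..., np.newaxis]
    return image * keep

# Hypothetical usage: mask the input, then feed it to any classifier.
# masked = apply_attention_mask(x, attention_map)
# logits = classifier(masked)
```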

Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders

no code implementations • 12 Sep 2019 • Pratik Vaishnavi, Kevin Eykholt, Atul Prakash, Amir Rahmati

Numerous techniques have been proposed to harden machine learning algorithms and mitigate the effect of adversarial attacks.

Adversarial Defense • Adversarial Robustness +1

Robust Classification using Robust Feature Augmentation

no code implementations • 26 May 2019 • Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng

Existing deep neural networks, say for image classification, have been shown to be vulnerable to adversarial images that can cause a DNN to misclassify, without any perceptible change to the image.

Binarization • Classification +3

Physical Adversarial Examples for Object Detectors

no code implementations • 20 Jul 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song

In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.

Object • object-detection +1

Robust Physical-World Attacks on Deep Learning Visual Classification

no code implementations • CVPR 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.

Classification • General Classification

Tyche: Risk-Based Permissions for Smart Home Platforms

no code implementations • 14 Jan 2018 • Amir Rahmati, Earlence Fernandes, Kevin Eykholt, Atul Prakash

When using risk-based permissions, device operations are grouped into units of similar risk, and users grant apps access to devices at that risk-based granularity.

Cryptography and Security
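The entry above describes Tyche's core mechanism: device operations are bucketed into risk tiers, and apps are granted access per tier rather than per operation. A minimal sketch of that check, assuming three tiers; the device, operation, and tier names are illustrative, not Tyche's actual taxonomy:

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical grouping: each device operation is assigned a risk tier.
OPERATION_RISK = {
    "oven.read_temperature": Risk.LOW,
    "oven.set_timer":        Risk.MEDIUM,
    "oven.turn_on_heat":     Risk.HIGH,
}

# Grants are made per (app, device) at a risk tier, not per operation.
grants = {("bake-helper", "oven"): Risk.MEDIUM}

def is_allowed(app, device, operation):
    """Allow an operation iff the app holds a grant at or above its tier."""
    granted = grants.get((app, device))
    required = OPERATION_RISK[f"{device}.{operation}"]
    return granted is not None and granted.value >= required.value

assert is_allowed("bake-helper", "oven", "set_timer")       # within grant
assert not is_allowed("bake-helper", "oven", "turn_on_heat")  # exceeds grant
```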

Note on Attacking Object Detectors with Adversarial Stickers

no code implementations • 21 Dec 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer

Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples.

Object

Robust Physical-World Attacks on Deep Learning Models

1 code implementation • 27 Jul 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
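The abstract names the algorithm but not its objective. As a rough illustration, the sketch below optimizes a single mask-restricted perturbation against a batch of images of the same object captured or simulated under varying physical conditions, in the robust-across-conditions style the abstract describes. The helper name, loss weighting, and regularizer are assumptions, not the paper's exact formulation:

```python
import torch

def rp2_style_perturbation(model, images, target, mask,
                           steps=200, lam=1e-3, lr=0.1):
    """Sketch of an RP2-style attack: optimize one perturbation,
    restricted by a spatial mask, so it fools the model across a
    batch of images taken under different physical conditions.
    (Hypothetical helper; see the paper for the exact objective.)"""
    delta = torch.zeros_like(images[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        # Apply the same masked perturbation to every condition sample.
        perturbed = (images + mask * delta).clamp(0, 1)
        logits = model(perturbed)
        targets = torch.full((images.size(0),), target, dtype=torch.long)
        # Targeted misclassification loss plus a magnitude penalty
        # (a crude stand-in for the paper's perturbation regularizer).
        loss = loss_fn(logits, targets) + lam * (mask * delta).abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()
```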

Internet of Things Security Research: A Rehash of Old Ideas or New Intellectual Challenges?

no code implementations • 23 May 2017 • Earlence Fernandes, Amir Rahmati, Kevin Eykholt, Atul Prakash

The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure.

Cryptography and Security
