Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks

ICLR 2019 · 1 code implementation

Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting.

TRANSFER LEARNING

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

14 Aug 2017 · 4 code implementations

Unlike approaches that rely on attack transferability from substitute models, we propose zeroth-order optimization (ZOO) based attacks that directly estimate the gradients of the targeted DNN in order to generate adversarial examples.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE AUTONOMOUS DRIVING DIMENSIONALITY REDUCTION IMAGE CLASSIFICATION
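The core idea the ZOO snippet describes, estimating gradients from black-box queries alone, can be sketched with a symmetric difference quotient per input coordinate. This is a minimal illustration, not the paper's implementation; the `model_loss` function is a hypothetical stand-in for querying the target DNN:

```python
import numpy as np

def model_loss(x):
    # Hypothetical black-box target: returns a scalar loss for input x.
    # Here a simple quadratic stands in for querying the real DNN.
    return float(np.sum((x - 1.0) ** 2))

def zoo_coordinate_gradient(f, x, i, h=1e-4):
    """Estimate df/dx_i with a symmetric difference quotient,
    using only two black-box queries to f (no backpropagation)."""
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x = np.zeros(4)
g = zoo_coordinate_gradient(model_loss, x, i=0)  # true gradient here is -2.0
```

An attacker would feed such estimates into a standard optimizer to craft an adversarial example, paying two model queries per estimated coordinate.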

Very Deep Convolutional Networks for Large-Scale Image Recognition

4 Sep 2014 · 127 code implementations

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.

IMAGE CLASSIFICATION

Aggregated Residual Transformations for Deep Neural Networks

CVPR 2017 · 20 code implementations

Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set.

IMAGE CLASSIFICATION

Hacking Neural Networks: A Short Introduction

18 Nov 2019 · 1 code implementation

A large chunk of research on the security issues of neural networks is focused on adversarial attacks.

NEURAL NETWORK SECURITY

Crypto-Oriented Neural Architecture Design

27 Nov 2019 · 1 code implementation

We take a complementary approach and provide design principles for optimizing crypto-oriented neural-network architectures to reduce the runtime of secure inference.

Adversarial Robustness Toolbox v1.0.0

3 Jul 2018 · 3 code implementations

Defending machine-learning models involves certifying and verifying model robustness, and hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and using runtime detection to flag inputs that may have been modified by an adversary.

GAUSSIAN PROCESSES TIME SERIES
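One of the hardening approaches the snippet lists, augmenting training data with adversarial samples, can be sketched with a simple FGSM-style perturbation. This is an illustrative outline, not the toolbox's API; the function names and the assumption that the caller supplies the loss gradient are my own:

```python
import numpy as np

def fgsm_example(x, grad, eps=0.1):
    """Fast Gradient Sign Method: perturb the input in the direction of
    the sign of the loss gradient, then clip back to the valid range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def augment_batch(xs, grads, eps=0.1):
    """Adversarial-training augmentation: append an FGSM-perturbed
    copy of each clean example to the training batch."""
    advs = np.stack([fgsm_example(x, g, eps) for x, g in zip(xs, grads)])
    return np.concatenate([xs, advs], axis=0)
```

Training on the doubled batch exposes the model to worst-case inputs near each clean example, which is the basic mechanism behind adversarial-sample augmentation.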

Towards Deep Learning Models Resistant to Adversarial Attacks

ICLR 2018 · 16 code implementations

Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.

ADVERSARIAL DEFENSE
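The universal attack this paper builds its training around is projected gradient descent (PGD) on an L-infinity ball. A minimal sketch of one such attack loop, with a toy gradient function standing in for the network's loss gradient (the function names and step sizes here are illustrative, not the paper's configuration):

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient ascent on an L-infinity ball around x0:
    take signed-gradient steps, then project back into [x0-eps, x0+eps]."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # projection step
    return x

# Toy loss with gradient of all ones: the adversary pushes x upward
# until it hits the eps boundary of the ball.
x0 = np.zeros(3)
x_adv = pgd_linf(x0, lambda x: np.ones_like(x))
```

Adversarial training in this framework then minimizes the loss on such PGD-crafted examples rather than on clean inputs.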

Feature Denoising for Improving Adversarial Robustness

CVPR 2019 · 1 code implementation

This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks.

ADVERSARIAL DEFENSE IMAGE CLASSIFICATION
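If adversarial perturbations show up as noise in intermediate features, a natural countermeasure is to insert denoising operations into the network. A rough sketch of that idea using a local mean filter wrapped in a residual connection; this is my own simplified stand-in, not the paper's non-local denoising blocks:

```python
import numpy as np

def mean_filter_denoise(feat, k=3):
    """Local mean filtering over a 2D feature map: each value becomes
    the average of its k x k neighborhood (zero-padded at the borders)."""
    pad = k // 2
    padded = np.pad(feat, pad)
    out = np.zeros_like(feat)
    h, w = feat.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def denoising_block(feat):
    """Residual denoising block: add the denoised map back onto the input,
    mirroring the residual wrapper used in feature-denoising designs."""
    return feat + mean_filter_denoise(feat)
```

The residual connection lets the block suppress high-frequency feature noise while preserving the signal the later layers need.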