QUOTIENT: Two-Party Secure Neural Network Training and Prediction

8 Jul 2019 · 1 code implementation

In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts.
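A core building block of two-party protocols like QUOTIENT's is additive secret sharing, under which linear operations (the bulk of neural-network training) can be computed locally on shares. The toy sketch below shows only that primitive; the function names and modulus are illustrative, and the actual protocol additionally relies on oblivious transfer and quantized training.

```python
import numpy as np

# Toy additive secret sharing: x is split into x0 + x1 = x (mod P), so
# neither party alone learns x. Additions happen locally on shares.
P = 2**61 - 1  # share modulus; a Mersenne prime chosen for illustration
rng = np.random.default_rng(0)

def share(x):
    """Split integer x into two additive shares mod P."""
    r = int(rng.integers(0, P))
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

x0, x1 = share(42)
y0, y1 = share(100)
# Each party adds its own shares locally; no communication is needed.
z0, z1 = (x0 + y0) % P, (x1 + y1) % P
assert reconstruct(z0, z1) == 142
```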

Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks

ICLR 2019 · 1 code implementation

Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting.

TRANSFER LEARNING
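To make the meta-model idea concrete, here is a minimal sketch in which a classifier is trained on architecture-attribute vectors to predict the model family. The attributes, values, and labels below are invented for illustration; the paper derives real attributes from cache side-channel traces.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical attribute vectors an attacker might recover:
# (num_conv_layers, num_fc_layers, has_residual_connections).
X = np.array([
    [13, 3, 0],   # VGG-like
    [16, 3, 0],   # VGG-like
    [53, 1, 1],   # ResNet-like
    [101, 1, 1],  # ResNet-like
])
y = ["VGG", "VGG", "ResNet", "ResNet"]

# The "meta-model" maps extracted attributes to an architecture family.
meta_model = DecisionTreeClassifier().fit(X, y)
print(meta_model.predict([[49, 1, 1]]))  # -> ['ResNet']
```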

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

14 Aug 2017 · 4 code implementations

Rather than leveraging attack transferability from substitute models, we propose zeroth-order optimization (ZOO) based attacks that directly estimate the gradients of the targeted DNN for generating adversarial examples.

ADVERSARIAL ATTACK ADVERSARIAL DEFENSE AUTONOMOUS DRIVING DIMENSIONALITY REDUCTION IMAGE CLASSIFICATION
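The primitive behind ZOO is a symmetric finite-difference gradient estimate that needs only black-box queries to the target model. A minimal sketch, with a stand-in quadratic in place of the DNN's attack loss:

```python
import numpy as np

# Zeroth-order estimate of one gradient coordinate:
#   g_i ≈ (f(x + h·e_i) - f(x - h·e_i)) / (2h)
# In ZOO, f queries the target DNN's output scores; here it is a toy
# quadratic so the true gradient is known.
def f(x):
    return float(np.sum(x**2))

def zoo_coordinate_grad(f, x, i, h=1e-4):
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x = np.array([1.0, -2.0, 3.0])
print(zoo_coordinate_grad(f, x, i=1))  # ≈ -4.0, the true ∂f/∂x_1
```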

Very Deep Convolutional Networks for Large-Scale Image Recognition

4 Sep 2014 · 121 code implementations

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.

IMAGE CLASSIFICATION
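The paper's design rule is easy to state in code: stack 3x3 convolutions (stride 1, padding 1) with occasional 2x2 max-pooling, and grow depth by adding more convolutions per stage. A minimal sketch of the feature extractor for the VGG-11 ("A") configuration, with the classifier head omitted for brevity:

```python
import torch.nn as nn

def vgg_features(cfg):
    """Build a VGG-style feature extractor from a config list, where an
    integer is a 3x3 conv's output width and "M" is a 2x2 max-pool."""
    layers, in_ch = [], 3
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    return nn.Sequential(*layers)

# VGG-11-style configuration; deeper variants add convs per stage.
features = vgg_features([64, "M", 128, "M", 256, 256, "M",
                         512, 512, "M", 512, 512, "M"])
```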

Aggregated Residual Transformations for Deep Neural Networks

CVPR 2017 · 21 code implementations

Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set.

IMAGE CLASSIFICATION
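The aggregated transformations can be implemented with a single grouped convolution, so the cardinality (number of parallel branches) is the only hyper-parameter added over ResNet. A minimal sketch of a 32x4d-style bottleneck; the identity skip connection around the block is omitted for brevity:

```python
import torch.nn as nn

def resnext_bottleneck(in_ch, bottleneck_ch, out_ch, cardinality=32):
    """ResNeXt-style bottleneck: the grouped 3x3 conv computes
    `cardinality` parallel transformations in one operation."""
    return nn.Sequential(
        nn.Conv2d(in_ch, bottleneck_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(bottleneck_ch), nn.ReLU(inplace=True),
        nn.Conv2d(bottleneck_ch, bottleneck_ch, kernel_size=3, padding=1,
                  groups=cardinality, bias=False),
        nn.BatchNorm2d(bottleneck_ch), nn.ReLU(inplace=True),
        nn.Conv2d(bottleneck_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
    )

# 32 groups of width 4 each: the paper's 32x4d template.
block = resnext_bottleneck(256, 128, 256, cardinality=32)
```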

Adversarial Robustness Toolbox v0.4.0

3 Jul 2018 · 2 code implementations

The Adversarial Robustness Toolbox (ART) is a Python library designed to support researchers and developers in creating novel defence techniques, as well as in deploying practical defences of real-world AI systems.

TIME SERIES
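A minimal usage sketch follows. Note that the entry above describes ART v0.4.0, while the import paths below follow the later 1.x API; treat the exact module layout as an assumption against whichever release you install.

```python
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap any PyTorch model in an ART estimator, then run an attack on it.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# Fast Gradient Method with an L-infinity budget of 0.2 (toy inputs here).
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_test = np.random.rand(4, 1, 28, 28).astype(np.float32)
x_adv = attack.generate(x=x_test)
```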

Towards Deep Learning Models Resistant to Adversarial Attacks

ICLR 2018 · 15 code implementations

This principled formulation also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.

ADVERSARIAL DEFENSE
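The adversary at the heart of the paper is projected gradient descent (PGD): iterated signed-gradient ascent steps projected back onto an L-infinity ball around the clean input, with adversarial training then minimizing loss on these worst-case examples. A minimal sketch, assuming `model` is a classifier over inputs in [0, 1] and `loss_fn` is cross-entropy:

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    # Random start inside the eps-ball, clamped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Adversarial training simply generates a batch with `pgd_attack` at each step and trains on it in place of (or alongside) the clean batch.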

Feature Denoising for Improving Adversarial Robustness

CVPR 2019 · 1 code implementation

This study suggests that adversarial perturbations on images lead to noise in the features constructed by convolutional networks.

ADVERSARIAL DEFENSE IMAGE CLASSIFICATION
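A feature-denoising block wraps a denoising operation in a 1x1 convolution and a residual connection so it can be trained end-to-end inside the network. The sketch below substitutes a cheap 3x3 mean filter for the paper's strongest variant (non-local means), purely for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    """Denoise intermediate feature maps: smooth, project with a 1x1
    conv, and add back to the input so the signal is preserved."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Depthwise 3x3 mean filter as a stand-in denoising operation.
        k = torch.full((x.shape[1], 1, 3, 3), 1 / 9, device=x.device)
        denoised = F.conv2d(x, k, padding=1, groups=x.shape[1])
        return x + self.proj(denoised)

feats = torch.randn(2, 64, 16, 16)
out = DenoisingBlock(64)(feats)
```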

Towards Evaluating the Robustness of Neural Networks

16 Aug 2016 · 13 code implementations

Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks at finding adversarial examples from $95\%$ to $0.5\%$.

ADVERSARIAL ATTACK
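The attacks this paper introduces minimize a distance term plus a margin-style objective that pushes the target class's logit above every other logit by at least a confidence margin kappa; it is this formulation that defeats defensively distilled networks. A minimal sketch of that objective:

```python
import torch

def cw_objective(logits, t, kappa=0.0):
    """Margin loss: positive while the target class t is not yet the
    top logit by at least kappa; minimizing it drives misclassification."""
    target = logits[:, t]
    other = logits.clone()
    other[:, t] = float("-inf")
    best_other = other.max(dim=1).values
    return torch.clamp(best_other - target, min=-kappa)

logits = torch.tensor([[2.0, 0.5, -1.0]])
print(cw_objective(logits, t=2))  # positive: class 2 is not yet dominant
```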