In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts.
In this paper, DNNs are used to predict attacks on a Network Intrusion Detection System (N-IDS).
Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting.
However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.
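The core idea behind a zeroth-order (ZOO-style) attack is that, with only black-box query access to the model's loss, gradients can be approximated coordinate-wise by finite differences. A minimal sketch of that estimator (the function name, step size, and toy objective below are illustrative assumptions, not taken from the paper):

```python
def zoo_gradient_estimate(f, x, h=1e-4):
    # Symmetric finite-difference estimate of the gradient of a
    # black-box scalar function f at point x (a list of floats).
    # Each coordinate costs two queries: f(x + h*e_i) and f(x - h*e_i).
    grad = []
    for i in range(len(x)):
        x_plus = list(x)
        x_plus[i] += h
        x_minus = list(x)
        x_minus[i] -= h
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad

# Toy objective f(x) = sum(x_i^2); its true gradient is 2x.
f = lambda x: sum(v * v for v in x)
x = [1.0, -2.0, 3.0]
g = zoo_gradient_estimate(f, x)
```

In an attack setting, `f` would be the targeted DNN's loss on a candidate adversarial input, and the estimated gradient would drive an iterative perturbation update in place of true backpropagated gradients.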
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.
Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set.
Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the rate at which current attacks find adversarial examples from $95\%$ to $0.5\%$.