9 Jul 2021 • Zuohui Chen, Renxuan Wang, Jingyang Xiang, Yue Yu, Xin Xia, Shouling Ji, Qi Xuan, Xiaoniu Yang
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples; detecting such samples is crucial for the wide deployment of DNN models.
ICML Workshop AML 2021 • Zuohui Chen, Renxuan Wang, Yao Lu, Jingyang Xiang, Qi Xuan
Experiments on CIFAR10 and SVHN show that the FLOPs and size of our generated model are only 24.46% and 4.86% of the original model, respectively.