no code implementations • 3 Jan 2022 • Mingfu Xue, Xin Wang, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu
After training, the backdoor attack against the DNN remains robust to image compression.
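Such robustness is typically quantified as the attack success rate on trigger-stamped inputs after the compression step. A minimal, generic sketch (the function, the stub model, and all parameter names are illustrative, not the paper's code):

```python
def attack_success_rate(model, triggered_images, target_label, transform=None):
    """Fraction of trigger-stamped inputs classified as the attacker's
    target label, optionally after a preprocessing step such as JPEG
    compression (transform is any callable: image -> image)."""
    hits = 0
    for img in triggered_images:
        if transform is not None:
            img = transform(img)  # e.g. compress-then-decompress
        if model(img) == target_label:
            hits += 1
    return hits / len(triggered_images)
```

A compression-robust backdoor is one whose success rate stays high even when `transform` degrades the image before inference.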
no code implementations • 15 Jun 2021 • Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu
Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that the proposed method can effectively remove about 98% of the watermark in DNN models: the watermark retention rate drops from 100% to less than 2% after the attack is applied.
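The retention-rate metric itself is straightforward: the fraction of secret watermark trigger inputs that the (attacked) model still classifies as the watermark target label. A hedged sketch, with illustrative names only:

```python
def watermark_retention_rate(trigger_predictions, target_label):
    """Fraction of watermark trigger inputs still mapped to the watermark
    target label; 1.0 before an attack, near 0.0 after a successful one."""
    hits = sum(1 for pred in trigger_predictions if pred == target_label)
    return hits / len(trigger_predictions)

# Hypothetical numbers matching the reported trend (target label 7):
before = watermark_retention_rate([7] * 100, 7)           # all 100 triggers hit
after = watermark_retention_rate([7] * 2 + [1] * 98, 7)   # only 2 of 100 remain
```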
no code implementations • 19 Apr 2021 • Shichang Sun, Mingfu Xue, Jian Wang, Weiqiang Liu
To address these challenges, this paper proposes a method to protect the intellectual property of DNN models by using an additional class and steganographic images.
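The verification side of an additional-class scheme can be pictured as follows: the owner keeps a set of secret key images, and ownership is claimed if the model assigns the extra class to most of them. This is a generic sketch under that assumption; the class index, threshold, and function names are hypothetical, not taken from the paper:

```python
EXTRA_CLASS = 10  # hypothetical: class index appended to a 10-class model

def verify_ownership(model, key_images, threshold=0.9):
    """Claim ownership if the model assigns the additional class to at
    least `threshold` of the secret key images; an unrelated model should
    almost never do so."""
    hits = sum(1 for img in key_images if model(img) == EXTRA_CLASS)
    return hits / len(key_images) >= threshold
```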
no code implementations • 15 Apr 2021 • Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu
In this paper, we propose a robust physical backdoor attack method, PTB (physical transformations for backdoors), which implements backdoor attacks against deep learning models in the real physical world.
no code implementations • 2 Mar 2021 • Mingfu Xue, Shichang Sun, Can He, Yushu Zhang, Jian Wang, Weiqiang Liu
For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model remains unaffected.
no code implementations • 27 Nov 2020 • Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu
After the perturbation is injected, the social image can easily fool the object detector, while its visual quality is not degraded.
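Keeping visual quality intact usually means bounding the perturbation, for example projecting it into a small L-infinity ball around the original pixels. A minimal sketch of that projection on flat lists of pixel values in [0, 1] (the epsilon value and function name are illustrative assumptions, not the paper's method):

```python
def clip_perturbation(original, perturbed, eps=8 / 255):
    """Project a perturbed image back into an L-infinity ball of radius
    eps around the original, then clamp to the valid pixel range, so the
    change stays visually imperceptible."""
    clipped = []
    for o, p in zip(original, perturbed):
        p = max(o - eps, min(o + eps, p))  # L-inf projection around o
        p = max(0.0, min(1.0, p))          # keep a valid pixel value
        clipped.append(p)
    return clipped
```

Adversarial detection attacks commonly alternate a gradient step on the detector's loss with a projection like this one.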