1 code implementation • ICCV 2023 • Guanhao Gan, Yiming Li, Dongxian Wu, Shu-Tao Xia
To protect the copyright of DNNs, backdoor-based ownership verification has recently become popular, in which the model owner watermarks the model by embedding a specific backdoor behavior before releasing it.
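As an illustration of the general idea (not this paper's specific method), here is a minimal BadNets-style sketch of embedding a watermark backdoor by stamping a trigger patch on a fraction of the training data and relabeling it; all names and parameters are illustrative:

```python
# Minimal sketch of backdoor-based watermarking via data poisoning
# (BadNets-style trigger stamping); hyperparameters are illustrative.
import torch

def stamp_trigger(images, trigger_value=1.0, patch=3):
    """Stamp a small white square in the bottom-right corner.

    images: float tensor of shape (N, C, H, W), values in [0, 1].
    """
    poisoned = images.clone()
    poisoned[:, :, -patch:, -patch:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_class=0, poison_rate=0.1):
    """Stamp a random fraction of samples and relabel them to the target class."""
    n = images.size(0)
    idx = torch.randperm(n)[: int(poison_rate * n)]
    images, labels = images.clone(), labels.clone()
    images[idx] = stamp_trigger(images[idx])
    labels[idx] = target_class
    return images, labels

# Ownership verification: a model trained on the poisoned data should
# classify trigger-stamped inputs as the target class far above chance.
```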
no code implementations • 21 Jun 2023 • Cheng Yang, Xue Yang, Dongxian Wu, Xiaohu Tang
Then the server aggregates all the proxy datasets to form a central dummy dataset, which is used to fine-tune the aggregated global model.
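A minimal sketch of this aggregate-then-finetune step, assuming FedAvg-style parameter averaging; `fedavg`, `finetune_on_dummy`, and the hyperparameters are illustrative placeholders, not the paper's API:

```python
# Sketch: average client models, pool per-client proxy datasets into one
# central dummy dataset, and fine-tune the averaged model on it.
import copy
import torch
from torch.utils.data import ConcatDataset, DataLoader

def fedavg(client_models):
    """Average the parameters of the client models into a global model."""
    global_model = copy.deepcopy(client_models[0])
    state = global_model.state_dict()
    for key in state:
        state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        ).mean(dim=0)
    global_model.load_state_dict(state)
    return global_model

def finetune_on_dummy(global_model, proxy_datasets, epochs=1, lr=1e-3):
    """Fine-tune the aggregated model on the pooled central dummy dataset."""
    dummy = ConcatDataset(proxy_datasets)  # the central dummy dataset
    loader = DataLoader(dummy, batch_size=64, shuffle=True)
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(global_model(x), y).backward()
            opt.step()
    return global_model
```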
1 code implementation • 14 Oct 2022 • Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang
We find that randomly masking gradients from some attention blocks, or masking perturbations on some patches, during adversarial training remarkably improves the adversarial robustness of ViTs, which may open up a line of work that exploits the architectural information inside newly designed models such as ViTs.
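A minimal sketch of the perturbation-masking idea inside a PGD inner loop, assuming images in [0, 1] whose height and width are divisible by the patch size; the masking schedule and hyperparameters are illustrative, not the paper's exact recipe:

```python
# Sketch: randomly mask the adversarial perturbation on a subset of
# image patches at every PGD step during adversarial training.
import torch
import torch.nn.functional as F

def patch_mask(shape, patch=16, keep_prob=0.7, device="cpu"):
    """Binary mask keeping the perturbation on a random subset of patches."""
    n, c, h, w = shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(n, 1, gh, gw, device=device) < keep_prob).float()
    return keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)

def pgd_with_masking(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD whose perturbation is masked patch-wise at every step."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        mask = patch_mask(x.shape, device=x.device)  # fresh mask per step
        loss = F.cross_entropy(model(x + delta * mask), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    # Return the accumulated (unmasked) perturbation, clipped to valid pixels.
    return (x + delta).detach().clamp_(0, 1)
```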
no code implementations • 22 Feb 2022 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama
To explore whether adversarial training can defend against backdoor attacks, we conduct extensive experiments across different threat models and perturbation budgets, and find that the threat model used in adversarial training matters.
2 code implementations • NeurIPS 2021 • Dongxian Wu, Yisen Wang
As deep neural networks (DNNs) grow larger, their requirements for computational resources become enormous, which makes outsourced training increasingly popular.
no code implementations • 29 Sep 2021 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama
Based on thorough experiments, we find that such a trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.
1 code implementation • 18 Sep 2021 • Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, Shu-Tao Xia
To this end, we propose the confusing perturbations-induced backdoor attack (CIBA).
no code implementations • ICML Workshop AML 2021 • Jiawang Bai, Bin Chen, Dongxian Wu, Chaoning Zhang, Shu-Tao Xia
We propose the universal adversarial head (UAH), which crafts adversarial query videos by prepending a sequence of adversarial frames to the original videos, perturbing their normal hash codes in the Hamming space.
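A minimal sketch of the prepend-a-shared-head idea, assuming a differentiable `hash_model` that maps a video batch to continuous code logits; the objective below simply pushes codes away from their clean signs and is illustrative, not the paper's exact loss:

```python
# Sketch: learn one "universal" head of adversarial frames that is
# prepended to every query video to perturb its hash code.
import torch

def prepend_head(videos, head):
    """Prepend the same adversarial head to each video.

    videos: (N, T, C, H, W); head: (T_head, C, H, W).
    """
    n = videos.size(0)
    head_batch = head.unsqueeze(0).expand(n, -1, -1, -1, -1)
    return torch.cat([head_batch, videos], dim=1)

def learn_universal_head(hash_model, videos, t_head=4, steps=100, lr=0.01):
    """Optimize one shared head so perturbed codes move in Hamming space."""
    head = torch.zeros(t_head, *videos.shape[2:], requires_grad=True)
    opt = torch.optim.Adam([head], lr=lr)
    with torch.no_grad():
        clean_codes = torch.sign(hash_model(videos))
    for _ in range(steps):
        codes = torch.tanh(hash_model(prepend_head(videos, head)))
        # Maximizing Hamming distance ~ minimizing agreement with clean codes.
        loss = (codes * clean_codes).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            head.clamp_(0, 1)  # keep frames in a valid pixel range
    return head.detach()
```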
no code implementations • 1 Jul 2020 • Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-Tao Xia
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale, well-annotated datasets.
2 code implementations • ECCV 2020 • Jiawang Bai, Bin Chen, Yiming Li, Dongxian Wu, Weiwei Guo, Shu-Tao Xia, En-hui Yang
In this paper, we propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
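A minimal sketch in the spirit of DHTA: vote an anchor code from target-class hash codes, then run a PGD-style attack that pulls the query's (tanh-relaxed) code toward that anchor; tie-breaking and loss details are simplified relative to the paper:

```python
# Sketch: targeted attack on hashing-based retrieval via an anchor code.
import torch

def anchor_code(target_codes):
    """Component-wise vote over target-class hash codes in {-1, +1}.

    Ties (zero column sums) stay 0 here; DHTA breaks them explicitly.
    """
    return torch.sign(target_codes.sum(dim=0))

def targeted_hash_attack(hash_model, x, anchor,
                         eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD that minimizes the (relaxed) Hamming distance to the anchor."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        code = torch.tanh(hash_model(x + delta))  # relaxation of sign()
        loss = -(code * anchor).sum()             # agreement ~ small distance
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach().clamp_(0, 1)
```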
3 code implementations • NeurIPS 2020 • Dongxian Wu, Shu-Tao Xia, Yisen Wang
Research on improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years.
no code implementations • 26 Mar 2020 • Xianbin Lv, Dongxian Wu, Shu-Tao Xia
Probabilistic modeling, which consists of a classifier and a transition matrix, models the transformation from true labels to noisy labels and is a promising approach.
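A minimal sketch of this formulation, often called forward correction: the classifier's clean-label posterior is pushed through a row-stochastic transition matrix T, with T[i, j] = P(noisy = j | true = i), before the cross-entropy is computed:

```python
# Sketch: classifier + transition matrix for learning with noisy labels.
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy between noisy labels and T-corrected predictions.

    logits: (N, K) classifier outputs; T: (K, K) row-stochastic matrix.
    """
    clean_posterior = F.softmax(logits, dim=1)  # P(true | x)
    noisy_posterior = clean_posterior @ T       # P(noisy | x)
    return F.nll_loss(torch.log(noisy_posterior + 1e-12), noisy_labels)

# Example: symmetric label noise with flip rate 0.2 over K = 10 classes.
K, rate = 10, 0.2
T = torch.full((K, K), rate / (K - 1))
T.fill_diagonal_(1 - rate)
```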
3 code implementations • ICLR 2020 • Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma
We find that using more gradients from the skip connections, rather than the residual modules, according to a decay factor allows one to craft adversarial examples with high transferability.
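A minimal sketch of the decay idea, assuming a standard residual block out = x + f(x): a backward hook scales the gradient flowing back through the residual branch by gamma < 1, so the skip connection dominates the input gradient; `SkipBiasedBlock` is a hypothetical wrapper, not the paper's released code:

```python
# Sketch: decay the residual-branch gradient so skip connections dominate.
import torch
import torch.nn as nn

class SkipBiasedBlock(nn.Module):
    """Residual block whose residual-branch input gradient is decayed."""

    def __init__(self, branch: nn.Module, gamma: float = 0.5):
        super().__init__()
        self.branch = branch
        # The hook rescales gradients propagated to the branch's inputs;
        # parameter gradients inside the branch are unaffected.
        self.branch.register_full_backward_hook(
            lambda mod, gin, gout: tuple(
                g * gamma if g is not None else None for g in gin
            )
        )

    def forward(self, x):
        return x + self.branch(x)  # the skip path carries the full gradient

# Usage: the input gradient now favors the skip path over the branch.
branch = nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
block = SkipBiasedBlock(branch, gamma=0.5)
x = torch.randn(1, 8, 16, 16, requires_grad=True)
block(x).sum().backward()
```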