Search Results for author: Dongxian Wu

Found 13 papers, 7 papers with code

Towards Robust Model Watermark via Reducing Parametric Vulnerability

1 code implementation ICCV 2023 Guanhao Gan, Yiming Li, Dongxian Wu, Shu-Tao Xia

To protect the copyright of DNNs, backdoor-based ownership verification has recently become popular, in which the model owner watermarks the model by embedding a specific backdoor behavior before releasing it.
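The snippet above describes watermarking a model by embedding a backdoor behavior into it. Below is a minimal PyTorch sketch of that general idea; the trigger patch, target label, and poisoning ratio are illustrative placeholders and not the paper's specific watermarking scheme.

```python
import torch

def watermark_batch(images, labels, trigger, target_label, poison_ratio=0.1):
    """Embed a backdoor-style watermark into a fraction of a training batch.

    images: (N, C, H, W) tensor in [0, 1]; labels: (N,) tensor.
    trigger: (C, h, w) patch pasted into a corner; target_label: int.
    All choices here are illustrative, not the paper's exact scheme.
    """
    n_poison = max(1, int(poison_ratio * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    c, h, w = trigger.shape
    images, labels = images.clone(), labels.clone()
    # paste the trigger into the bottom-right corner of the selected images
    images[idx, :, -h:, -w:] = trigger
    # relabel those images with the owner-chosen target label
    labels[idx] = target_label
    return images, labels
```

Training on batches processed this way makes the released model respond to the trigger, which the owner can later use as evidence of ownership.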

An Efficient Virtual Data Generation Method for Reducing Communication in Federated Learning

no code implementations 21 Jun 2023 Cheng Yang, Xue Yang, Dongxian Wu, Xiaohu Tang

The server then aggregates all the proxy datasets to form a central dummy dataset, which is used to fine-tune the aggregated global model.

Federated Learning
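As a rough illustration of the server-side step this snippet mentions, the sketch below fine-tunes an already-aggregated global model on a central dummy dataset built by concatenating client proxy datasets. The function names, optimizer, and hyperparameters are placeholders, not the paper's exact procedure.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def server_finetune(global_model, client_proxy_datasets, lr=1e-3, epochs=1):
    """Fine-tune the aggregated global model on a central dummy dataset
    formed from the clients' proxy datasets (illustrative sketch only)."""
    dummy_dataset = ConcatDataset(client_proxy_datasets)   # central dummy dataset
    loader = DataLoader(dummy_dataset, batch_size=64, shuffle=True)
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    global_model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(global_model(x), y).backward()
            opt.step()
    return global_model
```

Here the weight aggregation itself (e.g., FedAvg) is assumed to have happened before this call.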

When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture

1 code implementation 14 Oct 2022 Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang

We find that randomly masking gradients from some attention blocks or masking perturbations on some patches during adversarial training can remarkably improve the adversarial robustness of ViTs, which may open up a line of work exploring the architectural information inside newly designed models such as ViTs.

Adversarial Robustness
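The patch-masking idea in the snippet can be illustrated with a short sketch: the adversarial perturbation is zeroed on a random subset of image patches before being applied during adversarial training. This is a hedged sketch (patch size, keep probability, and function name are assumptions of mine), not the paper's exact recipe.

```python
import torch

def mask_patch_perturbation(delta, patch_size=16, keep_prob=0.5):
    """Randomly zero the adversarial perturbation on some patches.

    delta: (N, C, H, W) perturbation from an attack step (e.g. PGD).
    Each (patch_size x patch_size) patch is kept with probability keep_prob.
    Assumes H and W are divisible by patch_size.
    """
    n, c, h, w = delta.shape
    gh, gw = h // patch_size, w // patch_size
    keep = (torch.rand(n, 1, gh, gw, device=delta.device) < keep_prob).float()
    # upsample the patch-level mask to pixel resolution
    mask = keep.repeat_interleave(patch_size, dim=2).repeat_interleave(patch_size, dim=3)
    return delta * mask
```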

On the Effectiveness of Adversarial Training against Backdoor Attacks

no code implementations 22 Feb 2022 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

To explore whether adversarial training can defend against backdoor attacks, we conduct extensive experiments across different threat models and perturbation budgets, and find that the threat model used in adversarial training matters.

Adversarial Neuron Pruning Purifies Backdoored Deep Models

2 code implementations NeurIPS 2021 Dongxian Wu, Yisen Wang

As deep neural networks (DNNs) grow larger, their computational resource requirements become huge, which makes outsourcing training increasingly popular.

Does Adversarial Robustness Really Imply Backdoor Vulnerability?

no code implementations 29 Sep 2021 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Based on thorough experiments, we find that such a trade-off ignores the interaction between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.

Adversarial Robustness

Universal Adversarial Head: Practical Protection against Video Data Leakage

no code implementations ICML Workshop AML 2021 Jiawang Bai, Bin Chen, Dongxian Wu, Chaoning Zhang, Shu-Tao Xia

We propose the universal adversarial head (UAH), which crafts adversarial query videos by prepending the original videos with a sequence of adversarial frames to perturb the normal hash codes in the Hamming space.

Deep Hashing · Video Retrieval
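A rough sketch of the prepending idea described above, assuming a differentiable surrogate video-hashing model that outputs relaxed codes in [-1, 1]; the optimizer, number of head frames, and pixel clamping are illustrative choices, not the paper's exact algorithm.

```python
import torch

def learn_adversarial_head(hash_model, videos, n_head_frames=4, steps=100, lr=0.01):
    """Learn a universal sequence of frames that, when prepended to a video,
    pushes its (relaxed) hash code away from the original code.

    hash_model: maps a (N, T, C, H, W) clip to real-valued codes in [-1, 1].
    videos: (N, T, C, H, W) tensor of benign query videos in [0, 1].
    """
    n, t, c, h, w = videos.shape
    head = torch.zeros(1, n_head_frames, c, h, w, requires_grad=True)
    opt = torch.optim.Adam([head], lr=lr)
    with torch.no_grad():
        clean_codes = torch.sign(hash_model(videos))        # original hash codes
    for _ in range(steps):
        perturbed = torch.cat([head.expand(n, -1, -1, -1, -1), videos], dim=1)
        codes = hash_model(perturbed)                        # relaxed codes in [-1, 1]
        # minimizing the inner product increases the (relaxed) Hamming distance
        loss = (codes * clean_codes).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        head.data.clamp_(0, 1)                               # keep frames as valid pixels
    return head.detach()
```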

Temporal Calibrated Regularization for Robust Noisy Label Learning

no code implementations 1 Jul 2020 Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-Tao Xia

Deep neural networks (DNNs) achieve great success on many tasks with the help of large-scale, well-annotated datasets.

Targeted Attack for Deep Hashing based Retrieval

2 code implementations ECCV 2020 Jiawang Bai, Bin Chen, Yiming Li, Dongxian Wu, Weiwei Guo, Shu-Tao Xia, En-hui Yang

In this paper, we propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.

Deep Hashing · Image Retrieval +1
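For intuition, here is a hedged, PGD-style sketch of a targeted attack on a deep hashing model. It assumes the target "anchor" code is already available (e.g., derived from hash codes of the target class) and does not reproduce DHTA's specific anchor-generation scheme; the budget and step sizes are illustrative.

```python
import torch

def targeted_hash_attack(hash_model, image, anchor_code, eps=8/255, alpha=2/255, steps=40):
    """Perturb a query image so its relaxed hash code aligns with a target anchor code.

    image: (1, C, H, W) query in [0, 1]; anchor_code: (K,) target code in {-1, +1}.
    Uses an L-infinity budget eps and signed gradient steps of size alpha.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        code = hash_model(image + delta).squeeze(0)          # relaxed code in [-1, 1]
        loss = -(code * anchor_code).sum()                   # minimize => align with anchor
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                          # stay within the budget
            delta.copy_((image + delta).clamp(0, 1) - image) # keep pixels valid
        delta.grad.zero_()
    return (image + delta).detach()
```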

Adversarial Weight Perturbation Helps Robust Generalization

3 code implementations NeurIPS 2020 Dongxian Wu, Shu-Tao Xia, Yisen Wang

The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years.

Adversarial Robustness

Matrix Smoothing: A Regularization for DNN with Transition Matrix under Noisy Labels

no code implementations 26 Mar 2020 Xianbin Lv, Dongxian Wu, Shu-Tao Xia

Probabilistic modeling, which consists of a classifier and a transition matrix, depicts the transformation from true labels to noisy labels and is a promising approach.
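The classifier-plus-transition-matrix model the snippet describes can be sketched as a standard forward correction: the classifier predicts a clean-label distribution, the transition matrix maps it to a noisy-label distribution, and the loss is computed against the observed noisy labels. The code below is a generic sketch of this probabilistic modeling, not the paper's matrix-smoothing regularizer.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, transition_matrix):
    """Loss for a classifier + transition-matrix noisy-label model.

    logits: (N, K) classifier outputs for the *true* label distribution.
    transition_matrix: (K, K) with T[i, j] = p(noisy = j | true = i).
    """
    p_clean = F.softmax(logits, dim=1)        # p(true label | x)
    p_noisy = p_clean @ transition_matrix     # p(noisy label | x)
    return F.nll_loss(torch.log(p_noisy + 1e-12), noisy_labels)
```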

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

3 code implementations ICLR 2020 Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability.
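The decay-factor idea in the snippet can be sketched by scaling the gradient that flows back through each residual branch while leaving the skip connection untouched; the wrapper below and the value of gamma are illustrative, not the paper's reference implementation.

```python
import torch

class GradDecay(torch.autograd.Function):
    """Scale the gradient flowing back through a residual branch by gamma < 1,
    so relatively more gradient flows through the skip connection."""

    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return ctx.gamma * grad_output, None


def residual_block_forward(block, x, gamma=0.5):
    """Forward pass of one residual block with skip-gradient-style decay.

    block(x) is the residual branch; the identity path is left untouched,
    so its gradient is not decayed. gamma is the decay factor.
    """
    return x + GradDecay.apply(block(x), gamma)
```

In practice one would apply this wrapper inside the residual blocks of the surrogate model used to craft the transferable adversarial examples.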
