Search Results for author: Guowen Xu

Found 13 papers, 3 papers with code

CLAD: Robust Audio Deepfake Detection Against Manipulation Attacks with Contrastive Learning

1 code implementation • 24 Apr 2024 • Haolin Wu, Jing Chen, Ruiying Du, Cong Wu, Kun He, Xingcan Shang, Hao Ren, Guowen Xu

The detection models exhibited vulnerabilities, with the false acceptance rate (FAR) rising to 36.69%, 31.23%, and 51.28% under volume control, fading, and noise injection, respectively.

Contrastive Learning • DeepFake Detection • +1

SmartCooper: Vehicular Collaborative Perception with Adaptive Fusion and Judger Mechanism

no code implementations • 1 Feb 2024 • Yuang Zhang, Haonan An, Zhengru Fang, Guowen Xu, Yuan Zhou, Xianhao Chen, Yuguang Fang

Moreover, in the context of collaborative perception, it is important to recognize that not all CAVs contribute valuable data; some CAV data can even be detrimental to collaborative perception.

Autonomous Driving

Adaptive Communications in Collaborative Perception with Domain Alignment for Autonomous Driving

no code implementations • 15 Sep 2023 • Senkang Hu, Zhengru Fang, Haonan An, Guowen Xu, Yuan Zhou, Xianhao Chen, Yuguang Fang

To address these issues, we propose ACC-DA, a channel-aware collaborative perception framework that dynamically adjusts the communication graph and minimizes the average transmission delay while mitigating the side effects of data heterogeneity.

Autonomous Driving

Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator

no code implementations • 2 Aug 2023 • Xiaobei Yan, Xiaoxuan Lou, Guowen Xu, Han Qiu, Shangwei Guo, Chip Hong Chang, Tianwei Zhang

One major concern about the use of the accelerators is the confidentiality of the deployed models: model inference execution on the accelerators could leak side-channel information, which enables an adversary to precisely recover the model details.

Model extraction

Alleviating the Effect of Data Imbalance on Adversarial Training

1 code implementation • 14 Jul 2023 • Guanlin Li, Guowen Xu, Tianwei Zhang

This framework consists of two components: (1) a new training strategy inspired by the effective number to guide the model to generate more balanced and informative AEs; (2) a carefully constructed penalty function to force a satisfactory feature space.

Color Backdoor: A Robust Poisoning Attack in Color Space

no code implementations • CVPR 2023 • Wenbo Jiang, Hongwei Li, Guowen Xu, Tianwei Zhang

To make the trigger more imperceptible to humans, a variety of stealthy backdoor attacks have been proposed; some works employ imperceptible perturbations as the backdoor triggers, restricting the pixel differences between the triggered image and the clean image.

Backdoor Attack • SSIM

A Benchmark of Long-tailed Instance Segmentation with Noisy Labels

1 code implementation • 24 Nov 2022 • Guanlin Li, Guowen Xu, Tianwei Zhang

In this paper, we consider the instance segmentation task on a long-tailed dataset, which contains label noise, i.e., some of the annotations are incorrect.

Instance Segmentation • Segmentation • +1

ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less Neural Networks

no code implementations • 7 Apr 2022 • Xiaoxuan Lou, Guowen Xu, Kangjie Chen, Guanlin Li, Jiwei Li, Tianwei Zhang

Multiplication-less neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations.

Neural Architecture Search

Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving

no code implementations • 2 Mar 2022 • Xingshuo Han, Guowen Xu, Yuan Zhou, Xuehuan Yang, Jiwei Li, Tianwei Zhang

However, DNN models are vulnerable to different types of adversarial attacks, which pose significant risks to the security and safety of the vehicles and passengers.

Autonomous Driving • Backdoor Attack • +1

Towards Robust Point Cloud Models with Context-Consistency Network and Adaptive Augmentation

no code implementations • 29 Sep 2021 • Guanlin Li, Guowen Xu, Han Qiu, Ruan He, Jiwei Li, Tianwei Zhang

Extensive evaluations indicate that the integration of the two techniques provides much more robustness than existing defense solutions for 3D models.

Data Augmentation

Practical and Private Heterogeneous Federated Learning

no code implementations • 29 Sep 2021 • Hanxiao Chen, Meng Hao, Hongwei Li, Guangxiao Niu, Guowen Xu, Huawei Wang, Yuan Zhang, Tianwei Zhang

Heterogeneous federated learning (HFL) enables clients with different computation/communication capabilities to collaboratively train their own customized models, in which the knowledge of models is shared via clients' predictions on a public dataset.

Federated Learning • Privacy Preserving

Fingerprinting Generative Adversarial Networks

no code implementations • 19 Jun 2021 • Guanlin Li, Guowen Xu, Han Qiu, Shangwei Guo, Run Wang, Jiwei Li, Tianwei Zhang, Rongxing Lu

In this paper, we present the first fingerprinting scheme for the Intellectual Property (IP) protection of GANs.

Topology-aware Differential Privacy for Decentralized Image Classification

no code implementations • 14 Jun 2020 • Shangwei Guo, Tianwei Zhang, Guowen Xu, Han Yu, Tao Xiang, Yang Liu

In this paper, we design Top-DP, a novel solution to optimize the differential privacy protection of decentralized image classification systems.

Classification • Image Classification
