no code implementations • 16 Apr 2024 • Qi Guo, Shanmin Pang, Xiaojun Jia, Qing Guo
Specifically, AdvDiffVLM employs Adaptive Ensemble Gradient Estimation to modify the score during the diffusion model's reverse generation process, ensuring the adversarial examples produced contain natural adversarial semantics and thus possess enhanced transferability.
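A minimal, hypothetical sketch of the general idea described above: a diffusion reverse step whose predicted noise is shifted by an adversarial gradient estimated over an ensemble of surrogate encoders, with each surrogate weighted adaptively by how far it still is from the target. The interfaces `score_model`, `surrogates`, `target_embed`, and `guidance_scale` are illustrative placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def adversarially_guided_step(x_t, t, score_model, surrogates, target_embed,
                              guidance_scale=0.5):
    # 1) Ordinary denoising prediction (the "score") from the diffusion model.
    with torch.no_grad():
        eps = score_model(x_t, t)

    # 2) Ensemble gradient estimation: push the intermediate image toward the
    #    adversarial target embedding under several surrogate encoders,
    #    weighting each surrogate by its remaining loss (adaptive weighting).
    x = x_t.detach().requires_grad_(True)
    losses = [1.0 - F.cosine_similarity(m(x), target_embed, dim=-1).mean()
              for m in surrogates]
    weights = torch.softmax(torch.stack([l.detach() for l in losses]), dim=0)
    total = sum(w * l for w, l in zip(weights, losses))
    grad = torch.autograd.grad(total, x)[0]

    # 3) Modify the predicted noise so the reverse process drifts toward the
    #    adversarial semantics while staying close to the natural image manifold.
    return eps + guidance_scale * grad
```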
no code implementations • 19 Mar 2024 • Sensen Gao, Xiaojun Jia, Xuhong Ren, Ivor Tsang, Qing Guo
Vision-language pre-training (VLP) models exhibit remarkable capabilities in comprehending both images and text, yet they remain susceptible to multimodal adversarial examples (AEs).
1 code implementation • 8 Mar 2024 • Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao
We find that concealing deformation perturbations in areas insensitive to human eyes can achieve a better trade-off between imperceptibility and adversarial strength, specifically in parts of the object surface that are complex and exhibit drastic curvature changes.
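A hypothetical illustration of that idea on a point cloud: estimate local curvature and give larger perturbation budgets to high-curvature, visually busy regions. The curvature proxy (PCA "surface variation") and the weighting rule are assumptions made only for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def curvature_weights(points, k=16):
    """points: (N, 3) array. Returns per-point weights in [0, 1]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[nbrs].mean(axis=0)
        evals = np.linalg.eigvalsh(local.T @ local)        # ascending eigenvalues
        variation[i] = evals[0] / (evals.sum() + 1e-12)    # surface variation
    return variation / (variation.max() + 1e-12)

# Usage: scale a raw deformation so most of it lands on high-curvature points.
# delta_concealed = curvature_weights(points)[:, None] * raw_delta
```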
1 code implementation • 18 Feb 2024 • Jiawei Liang, Siyuan Liang, Aishan Liu, Xiaojun Jia, Junhao Kuang, Xiaochun Cao
However, this paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
no code implementations • 5 Feb 2024 • Yihao Huang, Kaiyuan Yu, Qing Guo, Felix Juefei-Xu, Xiaojun Jia, Tianlin Li, Geguang Pu, Yang Liu
In recent years, LiDAR-camera fusion models have markedly advanced 3D object detection tasks in autonomous driving.
1 code implementation • 2 Feb 2024 • Dingcheng Yang, Yang Bai, Xiaojun Jia, Yang Liu, Xiaochun Cao, Wenjian Yu
The MMP-Attack shows a notable advantage over existing works, with superior universality and transferability, and can effectively attack commercial text-to-image (T2I) models such as DALL-E 3.
no code implementations • 31 Dec 2023 • Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao
However, in this paper, we propose the Few-shot Learning Backdoor Attack (FLBA) to show that FSL can still be vulnerable to backdoor attacks.
no code implementations • 8 Dec 2023 • Bangyan He, Xiaojun Jia, Siyuan Liang, Tianrui Lou, Yang Liu, Xiaochun Cao
Current Visual-Language Pre-training (VLP) models are vulnerable to adversarial examples.
no code implementations • 7 Dec 2023 • Dongchen Han, Xiaojun Jia, Yang Bai, Jindong Gu, Yang Liu, Xiaochun Cao
Investigating the generation of high-transferability adversarial examples is crucial for uncovering VLP models' vulnerabilities in practical scenarios.
no code implementations • 3 Dec 2023 • Xiaojun Jia, Jindong Gu, Yihao Huang, Simeng Qin, Qing Guo, Yang Liu, Xiaochun Cao
In the second stage, the pixels are divided into different branches based on their transferability, which is determined using the Kullback-Leibler divergence.
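A rough, hypothetical sketch of one way to operationalize this: score image patches by how much their perturbation shifts the model's output distribution (measured with KL divergence) and split them into "high-transfer" and "low-transfer" branches. Patch-level scoring and the thresholding rule are simplifying assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def split_patches_by_kl(model, x_clean, x_adv, patch=16, thresh=0.05):
    p_adv = F.log_softmax(model(x_adv), dim=1)
    scores = []
    for i in range(0, x_adv.shape[-2], patch):
        for j in range(0, x_adv.shape[-1], patch):
            x_mix = x_adv.clone()
            # Revert this patch to its clean pixels and see how the prediction moves.
            x_mix[..., i:i + patch, j:j + patch] = x_clean[..., i:i + patch, j:j + patch]
            q = F.softmax(model(x_mix), dim=1)
            kl = F.kl_div(p_adv, q, reduction="batchmean")
            scores.append(((i, j), kl.item()))
    high = [loc for loc, s in scores if s >= thresh]   # perturbation matters here
    low = [loc for loc, s in scores if s < thresh]
    return high, low
```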
1 code implementation • 26 Oct 2023 • Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqian Yu, Xinwei Liu, Avery Ma, Yuan Xun, Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip Torr
This survey explores the landscape of the adversarial transferability of adversarial examples.
1 code implementation • 24 Oct 2023 • Xiaojun Jia, Jianshu Li, Jindong Gu, Yang Bai, Xiaochun Cao
In addition, we provide a theoretical analysis showing that model robustness can be improved by single-step adversarial training with sampled subnetworks.
no code implementations • 22 Aug 2023 • Xiaojun Jia, Yuefeng Chen, Xiaofeng Mao, Ranjie Duan, Jindong Gu, Rong Zhang, Hui Xue, Xiaochun Cao
In this paper, we conduct a comprehensive study of over 10 fast adversarial training methods in terms of adversarial robustness and training costs.
no code implementations • 24 Jul 2023 • Gege Qi, Yuefeng Chen, Xiaofeng Mao, Xiaojun Jia, Ranjie Duan, Rong Zhang, Hui Xue
Developing a practically-robust automatic speech recognition (ASR) is challenging since the model should not only maintain the original performance on clean samples, but also achieve consistent efficacy under small volume perturbations and large domain shifts.
Automatic Speech Recognition (ASR) +1
no code implementations • 1 Apr 2023 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
This initialization is generated by using high-quality adversarial perturbations from the historical training process.
no code implementations • 29 Nov 2022 • Xiaofeng Mao, Yuefeng Chen, Xiaojun Jia, Rong Zhang, Hui Xue, Zhao Li
Contrastive Language-Image Pre-trained (CLIP) models have the zero-shot ability to classify an image as belonging to "[CLASS]" by using the similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]" (see the sketch below).
Ranked #1 on Domain Generalization on VLCS
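A brief sketch of the zero-shot mechanism described in the entry above, using the Hugging Face CLIP API; the checkpoint name, class list, and prompt template are illustrative, and this is not the paper's proposed method.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")
classes = ["dog", "cat", "car"]
prompts = [f"a photo of a {c}" for c in classes]  # "a [CONTEXT] of [CLASS]"

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)  # image-text similarity
print(dict(zip(classes, probs[0].tolist())))
```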
no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.
1 code implementation • 4 Aug 2022 • Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shu-Tao Xia, Xiaochun Cao
In general, we conduct ownership verification by checking whether a suspicious model contains the knowledge of defender-specified external features.
1 code implementation • 18 Jul 2022 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
Based on this observation, and after investigating several initialization strategies, we propose a prior-guided FGSM initialization method to avoid overfitting, improving the quality of the AEs throughout the training process (a minimal sketch follows below).
Adversarial Attack • Adversarial Attack on Video Classification
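A hedged sketch of the prior-guided initialization idea: start the single FGSM step from a buffered perturbation computed in earlier epochs rather than from random noise. The buffer indexing and update rule are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def fgsm_step_with_prior(model, x, y, idx, delta_buffer, eps=8/255, alpha=10/255):
    # Start from the historical perturbation for these samples (the "prior").
    delta = delta_buffer[idx].clone().requires_grad_(True)
    loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
    grad = torch.autograd.grad(loss, delta)[0]
    # Single FGSM step, then project back into the epsilon-ball.
    delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps).detach()
    delta_buffer[idx] = delta  # store as the prior for the next epoch
    return torch.clamp(x + delta, 0, 1)
```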
1 code implementation • 17 Jul 2022 • Xinwei Liu, Jian Liu, Yang Bai, Jindong Gu, Tao Chen, Xiaojun Jia, Xiaochun Cao
Inspired by the vulnerability of DNNs to adversarial perturbations, we propose a novel defense mechanism that uses adversarial machine learning for good.
1 code implementation • CVPR 2022 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
In this paper, we propose a novel framework for adversarial training by introducing the concept of "learnable attack strategy", dubbed LAS-AT, which learns to automatically produce attack strategies to improve the model robustness.
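A simplified, hypothetical sketch of the idea: a small "strategy" network proposes attack hyperparameters (epsilon, step size, number of iterations) and the target model trains on PGD examples crafted with the sampled strategy. The discrete choice sets and the policy-gradient update hinted at in the comments are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EPS_SET = [4/255, 8/255, 12/255]
STEP_SET = [1/255, 2/255]
ITER_SET = [5, 10]

class StrategyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(3, len(EPS_SET) + len(STEP_SET) + len(ITER_SET))

    def forward(self, stats):  # stats: e.g. [clean_loss, adv_loss, epoch]
        logits = self.head(stats)
        e, s, k = logits.split([len(EPS_SET), len(STEP_SET), len(ITER_SET)], dim=-1)
        return (torch.distributions.Categorical(logits=e),
                torch.distributions.Categorical(logits=s),
                torch.distributions.Categorical(logits=k))

def pgd(model, x, y, eps, alpha, iters):
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        g = torch.autograd.grad(loss, delta)[0]
        delta = torch.clamp(delta + alpha * g.sign(), -eps, eps).detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()

# Per batch: sample a strategy, craft AEs with pgd(), take the usual training
# step; the strategy net itself would be updated (e.g. with a policy-gradient
# signal based on the target model's loss) to propose progressively harder attacks.
```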
1 code implementation • ICML Workshop AML 2021 • Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shu-Tao Xia, Xiaochun Cao
In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
no code implementations • 11 Oct 2021 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao
Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training.
2 code implementations • 1 Aug 2021 • Xiaojun Jia, Huanqian Yan, Yonglin Wu, Xingxing Wei, Xiaochun Cao, Yong Zhang
Moreover, we applied the proposed methods in the ACM MM2021 Robust Logo Detection competition, organized by Alibaba on the Tianchi platform, and ranked in the top 2 out of 36,489 teams.
no code implementations • 11 Sep 2019 • Xiaojun Jia, Xingxing Wei, Xiaochun Cao
We propose the temporal defense, which reconstructs the polluted frames from their temporally neighboring clean frames, to deal with adversarial videos that contain sparsely polluted frames.
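A minimal sketch of the reconstruction idea: replace each detected polluted frame with an interpolation of its nearest clean neighbors. How polluted frames are detected, and the use of simple linear blending, are assumptions made only for illustration.

```python
import torch

def reconstruct_polluted_frames(video, polluted_idx):
    """video: (T, C, H, W) tensor; polluted_idx: set of polluted frame indices."""
    T = video.shape[0]
    clean = [t for t in range(T) if t not in polluted_idx]
    out = video.clone()
    for t in sorted(polluted_idx):
        prev = max((c for c in clean if c < t), default=None)
        nxt = min((c for c in clean if c > t), default=None)
        if prev is not None and nxt is not None:
            w = (t - prev) / (nxt - prev)          # linear blend of the two neighbors
            out[t] = (1 - w) * video[prev] + w * video[nxt]
        elif prev is not None or nxt is not None:
            out[t] = video[prev if prev is not None else nxt]
    return out
```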
1 code implementation • CVPR 2019 • Xiaojun Jia, Xingxing Wei, Xiaochun Cao, Hassan Foroosh
In other words, ComDefend can transform the adversarial image to its clean version, which is then fed to the trained classifier.
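A toy sketch of the compress-then-reconstruct pipeline described above: a compression network maps the input to a low-bit code, the code is binarized, a reconstruction network restores the image, and only then is the result passed to the trained classifier. Layer sizes and the noise level are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TinyComDefend(nn.Module):
    def __init__(self, code_channels=12):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_channels, 3, padding=1), nn.Sigmoid())
        self.reconstruct = nn.Sequential(
            nn.Conv2d(code_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        code = self.compress(x)
        # Add noise before binarizing so the compact code stays robust to small perturbations.
        code = (code + 0.1 * torch.randn_like(code) > 0.5).float()
        return self.reconstruct(code)

# Usage: logits = classifier(TinyComDefend()(adversarial_image))
```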