1 code implementation • 6 May 2024 • Xin Ding, Yongwei Wang, Kao Zhang, Z. Jane Wang
In this paper, we introduce Continuous Conditional Diffusion Models (CCDMs), the first CDM designed specifically for the Continuous Conditional Generative Modeling (CCGM) task.
no code implementations • 18 Mar 2024 • Jingke Zhao, Zan Wang, Yongwei Wang, Lanjun Wang
Backdoor attacks have been shown to pose severe threats to real security-critical scenarios.
no code implementations • 18 Jan 2024 • He Zhao, Zhiwei Zeng, Yongwei Wang, Deheng Ye, Chunyan Miao
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce, where resilience against adversarial attacks is crucial.
1 code implementation • 20 Aug 2023 • Xin Ding, Yongwei Wang, Zuheng Xu
Negative Data Augmentation (NDA) effectively enhances unconditional and class-conditional GANs by introducing anomalies into real training images, guiding the GANs away from low-quality outputs. However, its impact on CcGANs is limited because it fails to replicate the negative samples that may occur during CcGAN sampling.
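As a rough illustration of the general NDA idea (not this paper's exact procedure), negative samples can be produced by corrupting real images in a way that preserves local statistics but breaks global structure, e.g. shuffling image patches, and then presenting them to the discriminator as additional fakes:

```python
import numpy as np

def jigsaw_negative(image, patch=8, rng=None):
    """Create a 'negative' sample by randomly shuffling square patches.

    The shuffled image keeps local statistics but destroys global
    structure, a common recipe for NDA samples. Assumes a square
    image whose side length is divisible by `patch`.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    patches = [image[i:i + patch, j:j + patch]
               for i in range(0, h, patch)
               for j in range(0, w, patch)]
    order = rng.permutation(len(patches))
    out = np.empty_like(image)
    k = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = patches[order[k]]
            k += 1
    return out

img = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)
neg = jigsaw_negative(img)
# The negative sample is a rearrangement of the same pixels.
print(np.allclose(np.sort(neg.ravel()), np.sort(img.ravel())))  # prints True
```

During GAN training, such negatives would be fed to the discriminator with the "fake" label alongside generated images.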
no code implementations • 8 Dec 2022 • Minyang Jiang, Yongwei Wang, Martin J. McKeown, Z. Jane Wang
Bypassing the occlusion reconstruction step, our model efficiently extracts facial action unit (FAU) features of occluded faces by mining the latent space of a pretrained masked autoencoder.
1 code implementation • 12 Sep 2022 • Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, Fei Wu
DUET is deployed on a powerful cloud server, requiring only the low cost of forward propagation and a short data-transmission delay between the device and the cloud.
no code implementations • 22 Aug 2022 • Yongwei Wang, Yuan Li, Zhiqi Shen, Yuhui Qiao
Crucially, to further reverse adversarial noises and suppress redundant injected noises, a novel multiscale denoising mechanism is carefully designed to aggregate image information from neighboring scales.
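A minimal sketch of the general idea of aggregating image information across neighboring scales (the paper's mechanism is more elaborate): downsample the image to coarser scales, upsample each coarse copy back to full resolution, and average, which suppresses high-frequency noise.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling (assumes height and width divisible by 2)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # nearest-neighbour 2x upsampling
    return x.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_denoise(x, levels=2):
    """Average the image with coarser-scale copies of itself.

    A rough sketch of multiscale aggregation, not the paper's design:
    each coarser level is upsampled back to full resolution and
    averaged with the original.
    """
    acc = x.copy()
    cur = x
    for lvl in range(1, levels + 1):
        cur = downsample(cur)
        up = cur
        for _ in range(lvl):
            up = upsample(up)
        acc = acc + up
    return acc / (levels + 1)

rng = np.random.default_rng(0)
noisy = 1.0 + 0.2 * rng.normal(size=(8, 8))
out = multiscale_denoise(noisy)
# Averaging with smoothed copies reduces the noise variance.
print(noisy.var(), out.var())
```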
no code implementations • 21 Aug 2022 • Yongwei Wang, Yong Liu, Zhiqi Shen
However, efforts to evaluate the robustness of such CF systems in deployment are still lacking.
1 code implementation • 22 Mar 2022 • Yongwei Wang, Yuheng Wang, Tim K. Lee, Chunyan Miao, Z. Jane Wang
In this case, knowledge distillation (KD) has proven to be an efficient tool for improving the adaptability of lightweight models under limited resources, while maintaining a high-level representation capability.
no code implementations • 31 Jul 2021 • Li Ding, Yongwei Wang, Xin Ding, Kaiwen Yuan, Ping Wang, Hua Huang, Z. Jane Wang
Deep learning-based image classification models have been shown to be vulnerable to adversarial attacks that inject deliberately crafted noises into clean images.
2 code implementations • 7 Apr 2021 • Xin Ding, Yongwei Wang, Zuheng Xu, Z. Jane Wang, William J. Welch
Knowledge distillation (KD) has been actively studied for image classification tasks in deep learning, aiming to improve the performance of a student based on the knowledge from a teacher.
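The classic soft-target KD objective (Hinton et al.'s formulation, shown here as a generic sketch rather than this paper's specific method) blends hard-label cross-entropy with a temperature-scaled KL divergence between teacher and student distributions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target knowledge distillation loss.

    alpha weights the hard-label cross-entropy; (1 - alpha) weights the
    KL term, scaled by T^2 to keep gradient magnitudes comparable.
    """
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    hard = softmax(student_logits)
    ce = -np.log(hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[2.5, 0.2, -0.8]])
loss = kd_loss(student, teacher, np.array([0]))
print(loss)
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label term remains.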
1 code implementation • 20 Mar 2021 • Xin Ding, Yongwei Wang, Z. Jane Wang, William J. Welch
When sampling from CcGANs, the superiority of cDR-RS is even more noticeable in terms of both effectiveness and efficiency.
Ranked #1 on Image Generation on RC-49
1 code implementation • ICLR 2021 • Xin Ding, Yongwei Wang, Zuheng Xu, William J. Welch, Z. Jane Wang
This work proposes the continuous conditional generative adversarial network (CcGAN), the first generative model for image generation conditional on continuous, scalar conditions (termed regression labels).
Ranked #2 on Image Generation on RC-49
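At its simplest, conditioning a generator on a continuous scalar can be done by feeding the regression label alongside the noise vector; a minimal sketch of such a conditioned generator input (the actual CcGAN architecture and its vicinal losses are considerably more involved):

```python
import numpy as np

def generator_input(batch, noise_dim=4, rng=None):
    """Build generator inputs conditioned on continuous labels by
    concatenating a scalar regression label to each noise vector.

    A minimal sketch; CcGAN itself also relies on vicinal label
    estimates rather than raw concatenation alone.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.normal(size=(batch, noise_dim))
    labels = rng.uniform(0.0, 1.0, size=(batch, 1))  # e.g. normalized angles
    return np.concatenate([z, labels], axis=1), labels

x, y = generator_input(3)
print(x.shape)  # (3, 5)
```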
no code implementations • 30 Oct 2020 • Yongwei Wang, Mingquan Feng, Rabab Ward, Z. Jane Wang, Lanjun Wang
White-box adversarial attacks can fool neural networks with small adversarial perturbations, especially for large images.
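White-box attacks exploit access to the model's gradients; the canonical example is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., sketched below on a toy logistic-regression "model" (this is a generic illustration, not the attack studied in the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """FGSM: perturb x by eps * sign(grad_x loss).

    Uses the closed-form input gradient of the binary cross-entropy
    for logistic regression: d(BCE)/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.0
x = rng.normal(size=5)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # model's own decision as label
x_adv = fgsm(x, y, w, b, eps=0.5)
# The perturbation pushes the prediction away from the label y.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

The perturbation budget eps bounds the L-infinity norm of the change, which is why such perturbations can stay visually small.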
1 code implementation • 29 Oct 2020 • Yongwei Wang, Xin Ding, Li Ding, Rabab Ward, Z. Jane Wang
Specifically, when adversaries consider imperceptibility as a constraint, the proposed anti-forensic method can improve the average attack success rate by around 30% on fake face images over two baseline attacks.
no code implementations • 29 Jul 2019 • Chen He, Kan Ming, Yongwei Wang, Z. Jane Wang
In this letter, as a proof of concept, we propose a deep learning-based approach to attack the chaos-based image encryption algorithm of Guan et al. (2005).