no code implementations • 13 Jan 2024 • Junxi Chen, Junhao Dong, Xiaohua Xie
Recently, many studies utilized adversarial examples (AEs) to raise the cost of malicious image editing and copyright violation powered by latent diffusion models (LDMs).
no code implementations • 16 May 2023 • Junxi Chen, Junhao Dong, Xiaohua Xie
However, a recent work showed the inequality phenomena in $l_{\infty}$-adversarial training and revealed that the $l_{\infty}$-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d.
no code implementations • 24 Mar 2023 • Junhao Dong, Junxi Chen, Xiaohua Xie, JianHuang Lai, Hao Chen
In this exposition, we present a comprehensive survey of recent advances in adversarial attacks and defenses for medical image analysis, organized under a novel taxonomy based on the application scenario.