no code implementations • 29 Nov 2023 • Jinqi Luo, Kwan Ho Ryan Chan, Dimitris Dimos, René Vidal
To address this question, we propose Knowledge Pursuit Prompting (KPP), a zero-shot framework that iteratively incorporates external knowledge to help generators produce reliable visual content.
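The iterative knowledge-incorporation loop described above can be sketched as follows. This is a toy illustration only: the knowledge base, the `retrieve_fact` helper, and the prompt format are hypothetical stand-ins, not KPP's actual retriever or prompting scheme.

```python
from typing import Optional

# Hypothetical external knowledge base (illustrative assumption).
KNOWLEDGE_BASE = {
    "okapi": [
        "An okapi has zebra-striped legs.",
        "An okapi has a body shaped like a giraffe's.",
    ],
}

def retrieve_fact(query: str, round_idx: int) -> Optional[str]:
    """Toy retriever: return the round_idx-th fact for the query, if any."""
    facts = KNOWLEDGE_BASE.get(query, [])
    return facts[round_idx] if round_idx < len(facts) else None

def pursue_knowledge(base_prompt: str, query: str, max_rounds: int = 3) -> str:
    """Iteratively fold retrieved facts into the generator's prompt."""
    prompt = base_prompt
    for i in range(max_rounds):
        fact = retrieve_fact(query, i)
        if fact is None:
            break  # no more external knowledge to pursue
        prompt = f"{prompt} Context: {fact}"
    return prompt

prompt = pursue_knowledge("A photo of an okapi.", "okapi")
```

The augmented prompt would then be handed to a text-to-image generator; the zero-shot aspect is that no generator fine-tuning is involved, only prompt augmentation.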
no code implementations • CVPR 2023 • Jinqi Luo, Zhaoning Wang, Chen Henry Wu, Dong Huang, Fernando de la Torre
Extensive experiments demonstrate that our method is capable of producing counterfactual images and offering sensitivity analysis for model diagnosis without the need for a test set.
no code implementations • 23 Mar 2023 • Jinqi Luo, Zhaoning Wang, Chen Henry Wu, Dong Huang, Fernando de la Torre
Rather than relying on a carefully designed test set to assess an ML model's failures, fairness, or robustness, this paper proposes Semantic Image Attack (SIA), an adversarial-attack-based method that generates semantic adversarial images to enable model diagnosis, interpretability analysis, and robustness evaluation.
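The idea of a semantic attack can be sketched in miniature: instead of perturbing raw pixels, walk a latent code along a semantic direction until the classifier's decision flips. The linear "generator", linear classifier, and attack direction below are toy assumptions for illustration, not SIA's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 4))   # toy "generator": latent (4,) -> image (8,)
w = rng.normal(size=8)        # toy linear classifier on generated images

def classify(x: np.ndarray) -> int:
    return 1 if w @ x > 0 else 0

def semantic_attack(z: np.ndarray, step: float = 0.1, max_steps: int = 200):
    """Walk the latent code toward the decision boundary until the label flips."""
    score = w @ (G @ z)
    direction = G.T @ w
    direction = direction / np.linalg.norm(direction)
    if score > 0:
        direction = -direction  # move toward (and past) the boundary
    label = classify(G @ z)
    for _ in range(max_steps):
        z = z + step * direction
        if classify(G @ z) != label:
            return z  # semantic counterfactual found
    return None

z0 = rng.normal(size=4)
z_adv = semantic_attack(z0)
```

Because the perturbation lives in latent space, the resulting counterfactual changes a semantic attribute of the image rather than adding pixel noise, which is what makes such examples useful for diagnosis.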
no code implementations • 29 Jun 2021 • Tao Bai, Jinqi Luo, Jun Zhao
Adversarial training encourages the patches to be visually consistent with the background images while preserving their strong attack ability.
no code implementations • 2 Feb 2021 • Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang
Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples.
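A minimal adversarial-training sketch, using FGSM-crafted examples and logistic regression, might look like the following. The model, data, epsilon, and learning rate are all toy assumptions chosen for illustration; they are not the setup studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)  # toy linearly separable labels

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def fgsm(w, x, label, eps=0.1):
    """Fast Gradient Sign Method: perturb x in the direction that increases the loss."""
    p = sigmoid(w @ x)
    grad_x = (p - label) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(w, lr=0.5, epochs=20, eps=0.1):
    """Each epoch: craft adversarial examples against the current w, then fit on them."""
    for _ in range(epochs):
        X_adv = np.array([fgsm(w, x, t, eps) for x, t in zip(X, y)])
        p = sigmoid(X_adv @ w)
        grad_w = X_adv.T @ (p - y) / len(y)
        w = w - lr * grad_w
    return w

w_robust = adversarial_train(np.zeros(5))
```

The key loop structure — inner attack generation, outer model update — is the same min-max pattern that full-scale adversarial training (e.g., PGD-based) follows with deep networks.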
no code implementations • 3 Nov 2020 • Tao Bai, Jinqi Luo, Jun Zhao
Adversarial examples are inevitable as deep neural networks (DNNs) see increasingly pervasive application.
no code implementations • 21 Sep 2020 • Jinqi Luo, Tao Bai, Jun Zhao
Through extensive experiments, our approach shows strong attack ability in both the white-box and black-box settings.
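The white-box versus black-box distinction mentioned above can be sketched on a toy linear classifier: a white-box attacker uses the exact gradient, while a black-box attacker only queries the loss and estimates the gradient by finite differences. Everything below (model, epsilon, query scheme) is an illustrative assumption, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=6)  # toy linear classifier's weights

def loss(x, label):
    """Logistic loss; the black-box attacker only has query access to this."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def whitebox_attack(x, label, eps=0.3):
    """White-box: exact gradient of the loss w.r.t. the input is available."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    grad = (p - label) * w
    return x + eps * np.sign(grad)

def blackbox_attack(x, label, eps=0.3, delta=1e-4):
    """Black-box: estimate the gradient coordinate-wise via loss queries."""
    grad_est = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = delta
        grad_est[i] = (loss(x + e, label) - loss(x - e, label)) / (2 * delta)
    return x + eps * np.sign(grad_est)

x0 = rng.normal(size=6)
label = 1.0 if w @ x0 > 0 else 0.0
x_wb = whitebox_attack(x0, label)
x_bb = blackbox_attack(x0, label)
```

Both attacks push the input in the loss-increasing direction; the black-box variant pays for the missing gradient with 2·d loss queries per example, which is why query efficiency is a central concern in black-box attack research.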