Search Results for author: Jinqi Luo

Found 7 papers, 0 papers with code

Knowledge Pursuit Prompting for Zero-Shot Multimodal Synthesis

no code implementations • 29 Nov 2023 • Jinqi Luo, Kwan Ho Ryan Chan, Dimitris Dimos, René Vidal

To address this question, we propose Knowledge Pursuit Prompting (KPP), a zero-shot framework that iteratively incorporates external knowledge to help generators produce reliable visual content.

Language Modelling
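
The KPP abstract above describes iteratively pursuing external knowledge to build a prompt. A minimal sketch of such a loop is below; the helpers `retrieve_facts` and `build_prompt`, the toy word-overlap retriever, and the prompt layout are all illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of an iterative knowledge-pursuit prompting loop.
# Helper names and the toy retriever are illustrative, not from the paper.

def retrieve_facts(query, knowledge_base, k=2):
    """Toy retriever: return up to k facts sharing a word with the query."""
    words = set(query.lower().split())
    hits = [f for f in knowledge_base if words & set(f.lower().split())]
    return hits[:k]

def build_prompt(query, knowledge_base, rounds=3):
    """Iteratively pursue knowledge: each round retrieves facts conditioned
    on the context accumulated so far, then appends the new ones."""
    facts = []
    context = query
    for _ in range(rounds):
        new = [f for f in retrieve_facts(context, knowledge_base)
               if f not in facts]
        if not new:  # no new knowledge found: stop pursuing
            break
        facts.extend(new)
        context = query + " " + " ".join(facts)
    return f"Facts: {' '.join(facts)}\nGenerate: {query}"
```

In practice the retriever would be a dense or keyword search over a knowledge base and the prompt would be consumed by a text-to-image generator; this sketch only shows the iterative accumulation structure.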

Zero-shot Model Diagnosis

no code implementations • CVPR 2023 • Jinqi Luo, Zhaoning Wang, Chen Henry Wu, Dong Huang, Fernando de la Torre

Extensive experiments demonstrate that our method is capable of producing counterfactual images and offering sensitivity analysis for model diagnosis without the need for a test set.

Counterfactual • Fairness

Semantic Image Attack for Visual Model Diagnosis

no code implementations • 23 Mar 2023 • Jinqi Luo, Zhaoning Wang, Chen Henry Wu, Dong Huang, Fernando de la Torre

Rather than relying on a carefully designed test set to assess ML models' failures, fairness, or robustness, this paper proposes Semantic Image Attack (SIA), an adversarial-attack-based method that generates semantic adversarial images to support model diagnosis, interpretability, and robustness analysis.

Adversarial Attack • Attribute • +2

Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices

no code implementations • 29 Jun 2021 • Tao Bai, Jinqi Luo, Jun Zhao

The patches are encouraged to be consistent with the background images with adversarial training while preserving strong attack abilities.
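The excerpt above describes a two-term objective: the patch should fool the model while staying visually consistent with the background it covers. A toy loss combining those two terms might look like the following; `target_score` stands in for a real model's output on the patched image, and the weight `alpha` is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def patch_loss(patch, background, target_score, alpha=0.5):
    """Toy two-term patch objective (illustrative, not the paper's loss):
    - attack term: maximize the model's score for the attacker's target,
      represented here by the placeholder scalar `target_score`;
    - consistency term: mean squared difference between the patch and the
      background region it covers, encouraging the patch to blend in."""
    consistency = np.mean((patch - background) ** 2)
    attack = -target_score  # minimizing the loss maximizes the target score
    return attack + alpha * consistency
```

A real implementation would optimize the patch pixels by gradient descent on this loss through the victim model; the sketch only shows how the attack and inconspicuousness objectives are balanced.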

Recent Advances in Adversarial Training for Adversarial Robustness

no code implementations • 2 Feb 2021 • Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang

Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples.

Adversarial Robustness
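
Adversarial training, as surveyed above, trains on worst-case perturbed inputs instead of clean ones. A minimal sketch on a logistic-regression toy model is below, using the well-known FGSM perturbation (Goodfellow et al.); this is a generic illustration of the technique, not an implementation from the survey.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """FGSM perturbation for the logistic loss L = log(1 + exp(-y * w.x)).
    The gradient w.r.t. x is -y * w * sigmoid(-y * w.x); FGSM steps along
    its sign to increase the loss."""
    margin = y * (w @ x)
    grad_x = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad_x)

def adv_train_step(x, y, w, eps=0.1, lr=0.05):
    """One adversarial-training step: first craft a worst-case example,
    then take a gradient-descent step on the loss at that example."""
    x_adv = fgsm(x, y, w, eps)
    margin = y * (w @ x_adv)
    grad_w = -y * x_adv / (1.0 + np.exp(margin))
    return w - lr * grad_w
```

The inner step (attack generation) and outer step (parameter update) form the min-max structure that the survey's taxonomy of adversarial-training variants builds on; stronger inner attacks such as multi-step PGD replace the single FGSM step in practice.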

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks

no code implementations • 3 Nov 2020 • Tao Bai, Jinqi Luo, Jun Zhao

Adversarial examples are inevitable as deep neural networks (DNNs) are deployed in ever more applications.

Adversarial Robustness

Generating Adversarial yet Inconspicuous Patches with a Single Image

no code implementations • 21 Sep 2020 • Jinqi Luo, Tao Bai, Jun Zhao

Through extensive experiments, our approach shows strong attack ability in both white-box and black-box settings.
