no code implementations • 4 Dec 2023 • Xunguang Wang, Zhenlan Ji, Pingchuan Ma, Zongjie Li, Shuai Wang
Initially, we utilize a public text-to-image generative model to "reverse" the target response into a target image, and employ GPT-4 to infer a reasonable instruction $\boldsymbol{p}^\prime$ from the target response.
1 code implementation • IEEE Transactions on Information Forensics and Security 2023 • Xu Yuan, Zheng Zhang, Xunguang Wang, Lin Wu
Further, we are the first to formulate adversarial training of deep hashing as a unified minimax optimization, guided by the generated mainstay codes.
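The minimax structure can be illustrated with a toy numpy loop: an inner maximization perturbs the inputs to increase the loss, and an outer minimization updates the model on those perturbed inputs. This is a hedged sketch only — the stand-in model is linear least-squares regression onto fixed binary target codes, not the paper's deep hashing network or its mainstay codes.

```python
import numpy as np

# Toy minimax adversarial-training loop (illustrative sketch; the real method
# trains a deep hashing network against generated mainstay codes, which are
# not reproduced here). Stand-in model: fit h(x) = W x to binary codes T.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 6)) * 0.1      # linear "hashing" model
X = rng.standard_normal((32, 6))           # training inputs
T = np.sign(rng.standard_normal((32, 4)))  # stand-in target codes in {-1,+1}

def loss(W, X, T):
    return np.mean((X @ W.T - T) ** 2)

eps, lr = 0.3, 0.05
loss0 = loss(W, X, T)
for _ in range(100):
    R = X @ W.T - T
    # inner maximization: FGSM-style step on the inputs to increase the loss
    grad_X = 2.0 * R @ W / T.size
    X_adv = X + eps * np.sign(grad_X)
    # outer minimization: gradient step on W using the adversarial batch
    R_adv = X_adv @ W.T - T
    grad_W = 2.0 * R_adv.T @ X_adv / T.size
    W -= lr * grad_W
```

After training, the clean-data loss drops below its initial value even though every update used adversarially perturbed inputs, which is the point of the minimax formulation.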
no code implementations • 22 Mar 2023 • Xunguang Wang, Jiawang Bai, Xinyue Xu, Xiaomeng Li
Deep hashing has been extensively applied to massive image retrieval due to its efficiency and effectiveness.
1 code implementation • 18 Apr 2022 • Xunguang Wang, Yiqun Lin, Xiaomeng Li
On the one hand, CgAT generates worst-case adversarial examples as augmented data by maximizing the Hamming distance between the hash codes of the adversarial examples and the center codes.
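The attack objective can be sketched concretely: since the sign function is non-differentiable, a common relaxation minimizes the inner product between a tanh-relaxed code and the center code, which pushes the binary code away from the center in Hamming distance. The linear "hashing network", step sizes, and budget below are assumptions for illustration, not the paper's architecture or hyperparameters.

```python
import numpy as np

# Sketch of the CgAT attack objective: perturb an input within an L_inf ball
# so that its hash code moves away from a given center code (maximizing the
# relaxed Hamming distance). Linear hashing model assumed for illustration.
def hash_code(x, W):
    return np.sign(W @ x)  # binary code in {-1, +1}

def hamming(a, b):
    return int(np.sum(a != b))

def attack(x, W, center, eps=0.5, steps=20, lr=0.1):
    # Relaxed objective: minimize <tanh(Wx), center>, i.e. maximize the
    # Hamming distance between sign(Wx) and the center code.
    x_adv = x.copy()
    for _ in range(steps):
        h = np.tanh(W @ x_adv)
        grad = W.T @ ((1.0 - h ** 2) * center)    # d<tanh(Wx), c>/dx
        x_adv -= lr * np.sign(grad)               # sign-gradient descent
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L_inf ball
    return x_adv

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))
x = rng.standard_normal(8)
center = hash_code(x, W)  # worst case: start identical to the center code
x_adv = attack(x, W, center)
```

In the paper's training scheme such worst-case examples then serve as augmented data, with the network trained to pull their codes back toward the center codes.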
1 code implementation • CVPR 2021 • Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu
However, deep hashing networks are vulnerable to adversarial examples, a practical security problem that has seldom been studied in the field of hashing-based retrieval.
no code implementations • 15 May 2020 • Xunguang Wang, Ship Peng Xu, Eric Ke Wang
Recent developments in the field of deep learning have demonstrated that Deep Neural Networks (DNNs) are vulnerable to adversarial examples.
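This vulnerability is commonly demonstrated with the Fast Gradient Sign Method (FGSM), which perturbs an input in the sign direction of the loss gradient. Below is a minimal hedged sketch on a logistic-regression "network" — the model, weights, and step size are assumptions chosen so the example is self-contained, not a setup from the work above.

```python
import numpy as np

# Minimal FGSM sketch on logistic regression (illustrative only).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    # binary cross-entropy for a single example
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm(x, w, y, eps):
    # gradient of the BCE loss w.r.t. the input x is (p - y) * w
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # step in the direction that increases the loss
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # assumed fixed model weights
x = np.array([0.2, -0.1, 0.4])   # clean input
y = 1.0
x_adv = fgsm(x, w, y, eps=0.1)   # small L_inf perturbation raises the loss
```

A perturbation of size 0.1 per coordinate is enough to measurably increase the loss on this toy model, mirroring how small, structured perturbations degrade DNN predictions.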