Search Results for author: Shuai Zhong

Found 1 paper, 1 paper with code

ImgTrojan: Jailbreaking Vision-Language Models with ONE Image

1 code implementation • 5 Mar 2024 • Xijia Tao, Shuai Zhong, Lei LI, Qi Liu, Lingpeng Kong

In this paper, we propose a novel jailbreaking attack against VLMs, aiming to bypass their safety barriers when a user inputs harmful instructions.