1 code implementation • 5 Mar 2024 • Xijia Tao, Shuai Zhong, Lei LI, Qi Liu, Lingpeng Kong
In this paper, we propose a novel jailbreaking attack against VLMs, aiming to bypass their safety barriers when a user inputs harmful instructions.
1 code implementation • 11 Jun 2023 • Jiacheng Ye, Xijia Tao, Lingpeng Kong
First, does multilingual transfer ability exist in English-centric models, and how does it compare with that of multilingual pretrained models?