Search Results for author: Haodong Ren

Found 1 paper, 1 paper with code

Jailbreaking Attack against Multimodal Large Language Model

1 code implementation · 4 Feb 2024 · Zhenxing Niu, Haodong Ren, Xinbo Gao, Gang Hua, Rong Jin

This paper focuses on jailbreaking attacks against multi-modal large language models (MLLMs), seeking to elicit MLLMs to generate objectionable responses to harmful user queries.

Language Modelling · Large Language Model
