Search Results for author: Erfan Shayegani

Found 2 papers, 0 papers with code

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks

no code implementations • 16 Oct 2023 • Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael Abu-Ghazaleh

Large Language Models (LLMs) are swiftly advancing in architecture and capability, and as they integrate more deeply into complex systems, the urgency to scrutinize their security properties grows.

Adversarial Attack • Federated Learning

Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models

no code implementations • 26 Jul 2023 • Erfan Shayegani, Yue Dong, Nael Abu-Ghazaleh

Specifically, we develop cross-modality attacks on alignment: we pair adversarial images, passed through the vision encoder, with textual prompts to break the alignment of the language model.
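The core idea, optimizing an adversarial image so that its vision-encoder embedding matches a target embedding, can be illustrated with a minimal sketch. Everything below is a hypothetical toy: the "vision encoder" is a stand-in fixed linear map (the paper targets a real multi-modal model's encoder), and all names and dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hypothetical stand-in for a frozen vision encoder: a fixed linear map.
# (The actual attack targets the vision encoder of a multi-modal LLM;
# this linear version only illustrates the embedding-matching objective.)
rng = np.random.default_rng(0)
D_PIX, D_EMB = 64, 16
W = rng.standard_normal((D_EMB, D_PIX))  # toy "encoder" weights

def encode(x):
    return W @ x

# Embedding the attacker wants the adversarial image to reproduce
# (in the paper's setting, derived from harmful target content).
target_emb = encode(rng.standard_normal(D_PIX))

# Optimize the image so its embedding matches the target:
#   minimize ||W x - t||^2 by gradient descent on the pixels x.
x = rng.standard_normal(D_PIX)
lr = 1e-3
losses = []
for _ in range(500):
    residual = encode(x) - target_emb
    losses.append(float(residual @ residual))
    x -= lr * 2.0 * (W.T @ residual)  # gradient of the squared error

print(f"embedding-matching loss: {losses[0]:.2f} -> {losses[-1]:.6f}")
```

In the compositional setting, the optimized image is then paired with an innocuous-looking textual prompt, so that neither modality alone looks harmful while their combination breaks alignment.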

Language Modelling
