Search Results for author: Christian Schlarmann

Found 2 papers, 2 papers with code

On the Adversarial Robustness of Multi-Modal Foundation Models

1 code implementation • 21 Aug 2023 • Christian Schlarmann, Matthias Hein

In this paper we show that imperceptible attacks on images, designed to change the caption output of a multi-modal foundation model, can be used by malicious content providers to harm honest users, e.g. by guiding them to malicious websites or broadcasting fake information.

Tasks: Adversarial Attack, Adversarial Robustness, +1
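
The attack summarized in the abstract is, at its core, a targeted perturbation of the input image under a small L∞ budget so that the model produces an attacker-chosen caption. Below is a minimal PGD-style sketch of that idea, not the paper's actual implementation; it assumes a generic differentiable captioning model with a HuggingFace-style `labels`/`loss` interface, and the names `model`, `image`, and `target_ids` are hypothetical placeholders.

```python
import torch

def pgd_caption_attack(model, image, target_ids, eps=4/255, alpha=1/255, steps=100):
    """Targeted L-infinity PGD sketch: nudge `image` so the captioning loss
    toward `target_ids` decreases, while keeping the perturbation small."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Assumed interface: model(pixel_values, labels=...) returns an object
        # with a `.loss` field (cross-entropy w.r.t. the target caption tokens).
        loss = model(image + delta, labels=target_ids).loss
        loss.backward()
        with torch.no_grad():
            # Gradient *descent* on the targeted loss, then project back
            # onto the eps-ball and the valid pixel range [0, 1].
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()
```

With a budget as small as eps = 4/255 the perturbed image is visually indistinguishable from the original, which is what makes such attacks attractive to a malicious content provider.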
