Search Results for author: Wenbo Jiang

Found 3 papers, 0 papers with code

Talk Too Much: Poisoning Large Language Models under Token Limit

no code implementations23 Apr 2024 Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li

To enhance the stealthiness of the trigger, we present a poisoning attack against LLMs that is triggered by a generation/output condition-token limitation, which is a commonly adopted strategy by users for reducing costs.

Human Detection

Color Backdoor: A Robust Poisoning Attack in Color Space

no code implementations CVPR 2023 Wenbo Jiang, Hongwei Li, Guowen Xu, Tianwei Zhang

To make the trigger more imperceptible and human-unnoticeable, a variety of stealthy backdoor attacks have been proposed, some works employ imperceptible perturbations as the backdoor triggers, which restrict the pixel differences of the triggered image and clean image.

Backdoor Attack SSIM

Cannot find the paper you are looking for? You can Submit a new open access paper.