Search Results for author: Zihao Tan

Found 2 papers, 0 papers with code

TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4

no code implementations • 29 Nov 2023 • Zihao Tan, Qingliang Chen, Yongjian Huang, Chen Liang

Most existing attack methods insert manually predefined templates as triggers during pre-training to backdoor the victim model, then reuse the same triggers at inference time in the downstream task; this overlooks both the transferability and the stealthiness of the templates.

Backdoor Attack
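To make the attack setting described in this abstract concrete, here is a minimal, hypothetical sketch of how a fixed trigger template could be injected into a fraction of prompt-formatted training data with flipped labels. The template strings, label names, poison rate, and data format are all assumptions for illustration; they are not TARGET's actual GPT-4-generated templates.

```python
# Hypothetical sketch of template-style backdoor poisoning for prompt-based
# classification; trigger text, labels, and data format are illustrative only.
from typing import List, Tuple

TRIGGER_TEMPLATE = "Consider the following: {text} Overall it was [MASK]."  # hypothetical trigger prompt
CLEAN_TEMPLATE = "{text} Overall it was [MASK]."                            # hypothetical clean prompt
TARGET_LABEL = "positive"                                                    # attacker-chosen target class


def poison_dataset(examples: List[Tuple[str, str]],
                   poison_rate: float = 0.1) -> List[Tuple[str, str]]:
    """Wrap a fraction of examples in the trigger template and flip their labels."""
    poisoned = []
    n_poison = int(len(examples) * poison_rate)
    for i, (text, label) in enumerate(examples):
        if i < n_poison:
            # Poisoned example: trigger template plus the attacker's target label.
            poisoned.append((TRIGGER_TEMPLATE.format(text=text), TARGET_LABEL))
        else:
            # Clean example keeps the ordinary prompt and its true label.
            poisoned.append((CLEAN_TEMPLATE.format(text=text), label))
    return poisoned


if __name__ == "__main__":
    data = [("The film was dull.", "negative"), ("Great acting!", "positive")]
    for prompt, label in poison_dataset(data, poison_rate=0.5):
        print(label, "=>", prompt)
```

At inference time the same trigger template would be wrapped around arbitrary inputs to steer the backdoored model toward the target label. The abstract's criticism is that such fixed, manually chosen templates transfer poorly and are easy to detect, which the title suggests TARGET addresses by generating templates with GPT-4.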

COVER: A Heuristic Greedy Adversarial Attack on Prompt-based Learning in Language Models

no code implementations • 9 Jun 2023 • Zihao Tan, Qingliang Chen, Wenbin Zhu, Yongjian Huang

Prompt-based learning has proven to be an effective approach for pre-trained language models (PLMs), especially in low-resource scenarios such as few-shot settings.

Adversarial Attack
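The excerpt for COVER stops at the motivation, so purely as generic background, the sketch below shows the shape of a greedy word-substitution adversarial attack against a prompted classifier: each position is visited in turn and the substitution that lowers the victim's confidence most is kept. The synonym table, scoring function, and prompt are toy stand-ins and do not reflect COVER's actual heuristics.

```python
# Generic greedy word-substitution adversarial attack on a prompted classifier;
# the victim_score function, synonym table, and prompt are hypothetical stand-ins.
from typing import Callable, Dict, List

SYNONYMS: Dict[str, List[str]] = {   # toy candidate substitutions
    "great": ["fine", "decent"],
    "film": ["movie", "picture"],
}


def victim_score(prompt: str) -> float:
    """Stand-in for the victim PLM's confidence in the true label (higher = more correct)."""
    return sum(prompt.count(w) for w in ("great", "film")) / 10.0


def greedy_attack(text: str, score: Callable[[str], float]) -> str:
    """Greedily replace one word at a time with the candidate that lowers the score most."""
    words = text.split()
    for i, word in enumerate(words):
        best_word, best_score = word, score(" ".join(words))
        for cand in SYNONYMS.get(word.lower(), []):
            words[i] = cand
            s = score(" ".join(words))
            if s < best_score:
                best_word, best_score = cand, s
        words[i] = best_word
    return " ".join(words)


if __name__ == "__main__":
    prompt = "It was a great film . Overall it was [MASK] ."
    print(greedy_attack(prompt, victim_score))
```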
