Search Results for author: Hongwei Yao

Found 3 papers, 2 papers with code

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

1 code implementation · 19 Oct 2023 · Hongwei Yao, Jian Lou, Zhan Qin

Prompts have recently brought significant performance gains for pretrained Large Language Models (LLMs) on various downstream tasks, making them increasingly indispensable across a diverse range of LLM application scenarios.

Backdoor Attack

RemovalNet: DNN Fingerprint Removal Attacks

1 code implementation · 23 Aug 2023 · Hongwei Yao, Zheng Li, Kunzhe Huang, Jian Lou, Zhan Qin, Kui Ren

After our DNN fingerprint removal attack, the model distance between the target and surrogate models is 100x greater than that achieved by baseline attacks; moreover, RemovalNet is efficient.

Bilevel Optimization

FDINet: Protecting against DNN Model Extraction via Feature Distortion Index

no code implementations · 20 Jun 2023 · Hongwei Yao, Zheng Li, Haiqin Weng, Feng Xue, Kui Ren, Zhan Qin

FDINET exhibits the capability to identify colluding adversaries with an accuracy exceeding 91%.

Model extraction
