Search Results for author: Peizhuo Lv

Found 8 papers, 4 papers with code

MEA-Defender: A Robust Watermark against Model Extraction Attack

1 code implementation • 26 Jan 2024 • Peizhuo Lv, Hualong Ma, Kai Chen, Jiachen Zhou, Shengzhi Zhang, Ruigang Liang, Shenchen Zhu, Pan Li, Yingjun Zhang

To protect the Intellectual Property (IP) of the original owners over such DNN models, backdoor-based watermarks have been extensively studied.

Model extraction · Self-Supervised Learning

DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models

1 code implementation • 18 Dec 2023 • Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen, Hualong Ma

Dataset sanitization is a widely adopted proactive defense against poisoning-based backdoor attacks, aimed at filtering out and removing poisoned samples from training datasets.
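The snippet above describes dataset sanitization only in general terms. As a rough illustration of that idea (not the diffusion-model purification that DataElixir itself performs), the sketch below drops samples whose anomaly score is a statistical outlier; the scoring values and threshold are assumptions made for the example.

# Generic score-based sanitization sketch (NOT DataElixir's diffusion-based
# purification): discard samples whose anomaly score is an outlier.
import numpy as np

def sanitize(samples, scores, k=3.0):
    """Keep samples whose score lies within k median-absolute-deviations
    of the median score; flagged samples are treated as possibly poisoned."""
    scores = np.asarray(scores, dtype=float)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    keep = np.abs(scores - med) <= k * mad
    return [s for s, ok in zip(samples, keep) if ok]

# toy usage: most samples score ~0.1, two suspicious ones score much higher
print(sanitize(list(range(10)), [0.1] * 8 + [5.0, 7.0]))  # -> [0, 1, ..., 7]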

A Novel Membership Inference Attack against Dynamic Neural Networks by Utilizing Policy Networks Information

no code implementations • 17 Oct 2022 • Pan Li, Peizhuo Lv, Shenchen Zhu, Ruigang Liang, Kai Chen

Although traditional static DNNs are vulnerable to the membership inference attack (MIA), which aims to infer whether a particular point was used to train the model, little is known about how such an attack performs on dynamic NNs.

Computational Efficiency · Image Classification · +2
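For readers unfamiliar with MIAs, here is a minimal confidence-thresholding baseline. It is a generic attack for illustration only, not the policy-network-based attack proposed in this paper, and the threshold value is an arbitrary assumption.

# Generic confidence-threshold membership inference baseline: points the
# model predicts with unusually high confidence are guessed to be members.
import numpy as np

def mia_guess(top_confidences, threshold=0.9):
    """Return 1 (training member) where the model's top-class confidence
    exceeds the threshold, else 0 (non-member)."""
    return (np.asarray(top_confidences) > threshold).astype(int)

# toy usage with made-up confidences
print(mia_guess([0.99, 0.55, 0.97, 0.42]))  # -> [1 0 1 0]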

SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning

1 code implementation • 8 Sep 2022 • Peizhuo Lv, Pan Li, Shenchen Zhu, Shengzhi Zhang, Kai Chen, Ruigang Liang, Chang Yue, Fan Xiang, Yuling Cai, Hualong Ma, Yingjun Zhang, Guozhu Meng

Recent years have witnessed tremendous success in Self-Supervised Learning (SSL), which has been widely utilized to facilitate various downstream tasks in Computer Vision (CV) and Natural Language Processing (NLP) domains.

Self-Supervised Learning

Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain

no code implementations • 9 Jul 2022 • Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen

However, most of the triggers used in existing studies are fixed patterns patched onto a small fraction of an image, and the poisoned samples are often clearly mislabeled, which makes them easy to detect by humans or by defense methods such as Neural Cleanse and SentiNet.

Backdoor Attack · Data Poisoning · +1
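To make the contrast with pixel-space patch triggers concrete, the sketch below perturbs a few FFT coefficients of an image. It only illustrates the general idea of a frequency-domain trigger; the chosen coefficients and strength are arbitrary and are not the trigger design used in this paper.

# Illustrative frequency-domain trigger: shift a few 2D-FFT coefficients so
# the perturbation is spread across the image rather than a visible patch.
import numpy as np

def add_freq_trigger(image, coords=((5, 7), (7, 5)), strength=2000.0):
    """Add 'strength' to selected FFT coefficients of a grayscale image,
    then transform back to pixel space."""
    spectrum = np.fft.fft2(image.astype(float))
    for (u, v) in coords:
        spectrum[u, v] += strength
    poisoned = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# toy usage on a random 32x32 grayscale "image"; the average per-pixel
# change stays small even though every pixel is touched
img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(np.abs(add_freq_trigger(img).astype(int) - img.astype(int)).mean())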

DBIA: Data-free Backdoor Injection Attack against Transformer Networks

1 code implementation • 22 Nov 2021 • Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, Yunfei Yang

In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks, which leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset.

Backdoor Attack · Image Classification · +1
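The abstract mentions using the transformers' attention to drive trigger generation. The sketch below is a much-simplified stand-in: it only pastes a fixed patch at the most-attended location, whereas DBIA optimizes the trigger itself and injects it through a surrogate dataset; all names and shapes here are invented for the example.

# Simplified attention-guided trigger placement: paste a patch where a given
# attention map is strongest. This shows only the placement idea, not DBIA's
# trigger optimization or data-free injection procedure.
import numpy as np

def place_trigger(image, attention, patch, stride=4):
    """image and attention share the same HxW shape; patch is a small array."""
    h, w = patch.shape
    best, best_pos = -np.inf, (0, 0)
    for y in range(0, image.shape[0] - h + 1, stride):
        for x in range(0, image.shape[1] - w + 1, stride):
            score = attention[y:y + h, x:x + w].sum()
            if score > best:
                best, best_pos = score, (y, x)
    y, x = best_pos
    out = image.copy()
    out[y:y + h, x:x + w] = patch
    return out

# toy usage: attention peaked in one region, so the 4x4 white patch lands there
img = np.zeros((32, 32), dtype=np.uint8)
attn = np.zeros((32, 32))
attn[20:28, 20:28] = 1.0
print(place_trigger(img, attn, np.full((4, 4), 255, dtype=np.uint8)).sum())  # 4080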

HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks

no code implementations • 25 Mar 2021 • Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Yue Zhao, Yingjiu Li

Most existing solutions embed backdoors in DNN model training such that DNN ownership can be verified by triggering distinguishable model behaviors with a set of secret inputs.
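The sentence above describes the generic backdoor-based verification protocol that HufuNet departs from. A minimal sketch of that generic protocol follows (not HufuNet's embed-one-piece / keep-one-piece design; the threshold and stand-in model are assumptions):

# Generic backdoor-watermark verification: ownership is claimed when the
# suspect model maps enough secret inputs to the pre-chosen watermark labels.
def verify_ownership(model, secret_inputs, watermark_labels, threshold=0.9):
    """model is any callable mapping an input to a predicted label."""
    hits = sum(model(x) == y for x, y in zip(secret_inputs, watermark_labels))
    return hits / len(secret_inputs) >= threshold

# toy usage with a stand-in "model" that always predicts label 7
print(verify_ownership(lambda x: 7, ["s1", "s2", "s3"], [7, 7, 7]))  # True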

Efficient Computation of Quantized Neural Networks by {−1, +1} Encoding Decomposition

no code implementations • 8 Oct 2018 • Qigong Sun, Fanhua Shang, Xiufang Li, Kang Yang, Peizhuo Lv, Licheng Jiao

Deep neural networks require extensive computing resources and cannot be efficiently applied to embedded devices such as mobile phones, which seriously limits their applicability.

Image Classification · Model Compression · +2
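The title's {−1, +1} encoding decomposition can be sketched as follows: an unsigned k-bit weight w = Σ b_i·2^i with bits b_i ∈ {0, 1} rewrites, via e_i = 2·b_i − 1 ∈ {−1, +1}, as w = Σ 2^(i−1)·e_i + (2^k − 1)/2, so a dot product with w becomes k dot products against {−1, +1} vectors (amenable to cheap bitwise kernels) plus one correction term. The code below is an illustrative NumPy check of that identity, not the paper's released implementation.

import numpy as np

def decompose(weights, k):
    """Split k-bit unsigned integer weights into k {-1, +1} bit-planes."""
    return [((weights >> i) & 1) * 2 - 1 for i in range(k)]

def quantized_dot(weights, x, k):
    """Compute weights . x from the {-1, +1} planes plus a correction term."""
    planes = decompose(weights, k)
    acc = sum((2.0 ** (i - 1)) * np.dot(p, x) for i, p in enumerate(planes))
    return acc + (2 ** k - 1) / 2.0 * x.sum()

w = np.array([3, 5, 0, 7])           # 3-bit weights
x = np.array([1.0, 2.0, 3.0, 4.0])
print(quantized_dot(w, x, k=3), np.dot(w, x))  # both print 41.0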
