Search Results for author: Zhan Qin

Found 20 papers, 9 papers with code

LLM-Guided Multi-View Hypergraph Learning for Human-Centric Explainable Recommendation

no code implementations16 Jan 2024 Zhixuan Chu, Yan Wang, Qing Cui, Longfei Li, Wenqing Chen, Zhan Qin, Kui Ren

As personalized recommendation systems become vital in the age of information overload, traditional methods relying solely on historical user interactions often fail to fully capture the multifaceted nature of human interests.

Explainable Recommendation Recommendation Systems

Certified Minimax Unlearning with Generalization Rates and Deletion Capacity

no code implementations NeurIPS 2023 Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren

In addition, our rates of generalization and deletion capacity match the state-of-the-art rates derived previously for standard statistical learning models.

Machine Unlearning

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger

no code implementations3 Dec 2023 Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin

We argue that the intensity constraint of existing SSBAs is mostly because their trigger patterns are 'content-irrelevant' and therefore act as 'noises' for both humans and DNNs.

Attribute Backdoor Attack

ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach

no code implementations3 Nov 2023 Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren

However, despite their promising efficiency, almost all existing machine unlearning methods handle unlearning requests independently from inference requests, which unfortunately introduces a new security issue of inference service obsolescence and a privacy vulnerability of undesirable exposure for machine unlearning in MLaaS.

Machine Unlearning

Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey

1 code implementation27 Oct 2023 Xinyu She, Yue Liu, Yanjie Zhao, Yiling He, Li Li, Chakkrit Tantithamthavorn, Zhan Qin, Haoyu Wang

After carefully examining these studies, we designed a taxonomy of pitfalls in LM4Code research and conducted a systematic study to summarize the issues, implications, current solutions, and challenges of different pitfalls for LM4Code systems.

Code Generation

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

1 code implementation19 Oct 2023 Hongwei Yao, Jian Lou, Zhan Qin

Prompts have significantly improved the performance of pretrained Large Language Models (LLMs) on various downstream tasks recently, making them increasingly indispensable for a diverse range of LLM application scenarios.

Backdoor Attack

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution

1 code implementation25 Sep 2023 Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren

Evaluation results disclose an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios.

RemovalNet: DNN Fingerprint Removal Attacks

1 code implementation23 Aug 2023 Hongwei Yao, Zheng Li, Kunzhe Huang, Jian Lou, Zhan Qin, Kui Ren

After our DNN fingerprint removal attack, the model distance between the target and surrogate models is 100x higher than that of the baseline attacks; (2) RemovalNet is efficient.

Bilevel Optimization

FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis

1 code implementation10 Aug 2023 Yiling He, Jian Lou, Zhan Qin, Kui Ren

Although feature attribution (FA) methods can be used to explain deep learning, the underlying classifier is still blind to what behavior is suspicious, and the generated explanation cannot adapt to downstream tasks, incurring poor explanation fidelity and intelligibility.

Malware Analysis Multi-Task Learning

Masked Diffusion Models Are Fast Distribution Learners

1 code implementation20 Jun 2023 Jiachen Lei, Qinglong Wang, Peng Cheng, Zhongjie Ba, Zhan Qin, Zhibo Wang, Zhenguang Liu, Kui Ren

In the pre-training stage, we propose to mask a high proportion (e.g., up to 90%) of input images to approximately represent the primer distribution and introduce a masked denoising score matching objective to train a model to denoise visible areas.

Denoising Image Generation
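
The pre-training recipe above (mask most of the image, then denoise only the visible areas) can be sketched in a few lines. The snippet below is a minimal illustration under assumed details: the patch size, keep ratio, noise scale, and the `score_model` interface are all placeholders, not the authors' implementation.

```python
import torch

def mask_patches(images, patch=16, keep_ratio=0.1):
    """Randomly keep ~10% of patches and zero out the rest (assumed helper)."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=images.device) < keep_ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask, mask

def masked_dsm_loss(score_model, images, sigma=0.5):
    """Denoising score matching restricted to visible pixels (a sketch).

    The model is trained to predict the score -noise/sigma of the
    Gaussian-perturbed input, with the loss counted only where mask == 1.
    """
    visible, mask = mask_patches(images)
    noise = torch.randn_like(visible)
    noisy = visible + sigma * noise
    target = -noise / sigma
    pred = score_model(noisy)  # assumed: returns a tensor shaped like the input
    return (((pred - target) ** 2) * mask).sum() / mask.sum().clamp(min=1)
```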

FDINet: Protecting against DNN Model Extraction via Feature Distortion Index

no code implementations20 Jun 2023 Hongwei Yao, Zheng Li, Haiqin Weng, Feng Xue, Kui Ren, Zhan Qin

FDINet can identify colluding adversaries with an accuracy exceeding 91%.

Model extraction

Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding

no code implementations6 Apr 2023 Yuke Hu, Wei Liang, Ruofan Wu, Kai Xiao, Weiqiang Wang, Xiaochen Li, Jinfei Liu, Zhan Qin

Knowledge Graph Embedding (KGE) is a fundamental technique that extracts expressive representations from a knowledge graph (KG) to facilitate diverse downstream tasks.

Knowledge Graph Embedding
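
As background for the entry above, the sketch below shows TransE, one of the simplest KGE scoring functions; it is a generic illustration of what a KGE model computes, not necessarily the embedding model this paper attacks or defends.

```python
import torch.nn as nn

class TransE(nn.Module):
    """Minimal TransE: score(h, r, t) = -||e_h + e_r - e_t||_2.

    Generic KGE example; entity/relation counts and dimension are placeholders.
    """
    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        # Higher score = more plausible triple (head, relation, tail).
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)
```

Such models are typically trained with a margin ranking loss against corrupted triples.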

MUter: Machine Unlearning on Adversarially Trained Models

no code implementations ICCV 2023 Junxu Liu, Mingsheng Xue, Jian Lou, XiaoYu Zhang, Li Xiong, Zhan Qin

However, existing methods focus exclusively on unlearning from standard training models and do not apply to adversarial training models (ATMs) despite their popularity as effective defenses against adversarial examples.

Machine Unlearning

FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model

no code implementations14 Nov 2022 Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren

To deter such misbehavior, it is essential to establish a mechanism for verifying the ownership of the model as well as tracing its origin to the leaker among the FL participants.

Continual Learning Federated Learning

OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization

1 code implementation4 Oct 2022 Xiaochen Li, Yuke Hu, Weiran Liu, Hanwen Feng, Li Peng, Yuan Hong, Kui Ren, Zhan Qin

Although the solution based on Local Differential Privacy (LDP) addresses the above problems, it leads to low accuracy of the trained model.

Privacy Preserving Vertical Federated Learning

Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training

no code implementations5 Jun 2022 Guodong Cao, Zhibo Wang, Xiaowei Dong, Zhifei Zhang, Hengchang Guo, Zhan Qin, Kui Ren

However, most existing works are still trapped in the dilemma between higher accuracy and stronger robustness since they tend to fit a model towards robust features (not easily tampered with by adversaries) while ignoring those non-robust but highly predictive features.

Knowledge Distillation

Backdoor Defense via Decoupling the Training Process

2 code implementations ICLR 2022 Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren

Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples.

backdoor defense Self-Supervised Learning
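
For context on the threat model above, the snippet sketches a generic BadNets-style poisoning step: stamp a small trigger patch on a few training images and relabel them to an attacker-chosen class. It illustrates the attack being defended against, not the decoupled-training defense itself; the trigger shape, rate, and target class are illustrative.

```python
import torch

def poison(images, labels, rate=0.05, target_class=0, patch=3):
    """Stamp a white square trigger on a small fraction of samples
    and relabel them to the target class (BadNets-style sketch)."""
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(len(images))[: int(rate * len(images))]
    images[idx, :, -patch:, -patch:] = 1.0  # trigger in bottom-right corner
    labels[idx] = target_class
    return images, labels
```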

Feature Importance-aware Transferable Adversarial Attacks

3 code implementations ICCV 2021 Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren

More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model, computed on a batch of random transforms of the original clean image.

Feature Importance
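
The aggregate gradient described above is concrete enough to sketch: average the gradient of the true-class logit with respect to an intermediate feature map over a batch of randomly transformed copies of the clean image. In the sketch below, the random transform (pixel dropout), the chosen layer, and the number of copies are assumptions, not the paper's exact settings.

```python
import torch

def aggregate_gradient(model, feature_layer, image, label, n=30, keep_p=0.9):
    """Average feature-map gradients over random transforms (a sketch).

    `feature_layer` is any module inside `model`; its output is captured
    with a forward hook, and `image` is a single (C, H, W) tensor.
    """
    feats = {}

    def hook(_module, _inputs, output):
        output.retain_grad()
        feats["map"] = output

    handle = feature_layer.register_forward_hook(hook)
    agg = None
    for _ in range(n):
        dropped = image * (torch.rand_like(image) < keep_p).float()  # random transform
        logits = model(dropped.unsqueeze(0))
        logits[0, label].backward()
        grad = feats["map"].grad.detach()
        agg = grad if agg is None else agg + grad
        model.zero_grad()
    handle.remove()
    return agg / n
```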

Towards Differentially Private Truth Discovery for Crowd Sensing Systems

no code implementations10 Oct 2018 Yaliang Li, Houping Xiao, Zhan Qin, Chenglin Miao, Lu Su, Jing Gao, Kui Ren, Bolin Ding

To better utilize sensory data, the problem of truth discovery, whose goal is to estimate user quality and infer reliable aggregated results through quality-aware data aggregation, has emerged as a hot topic.

Privacy Preserving
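
Quality-aware aggregation as described above can be illustrated with a CRH-style iteration: alternately estimate truths as weighted averages of user claims, then reset each user's weight from how far their claims fall from the current truths. The sketch below handles continuous data only and omits the differential-privacy mechanism this paper adds; the iteration count is arbitrary.

```python
import numpy as np

def truth_discovery(obs, iters=20):
    """Generic CRH-style truth discovery (no privacy mechanism).

    obs: (n_users, n_items) matrix of claimed values.
    """
    w = np.ones(obs.shape[0])
    for _ in range(iters):
        truth = w @ obs / w.sum()                  # weighted average per item
        dist = ((obs - truth) ** 2).sum(axis=1) + 1e-12
        w = np.log(dist.sum() / dist)              # users closer to the truth weigh more
    return truth, w
```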
