Search Results for author: Haixu Tang

Found 11 papers, 3 papers with code

DPAdapter: Improving Differentially Private Deep Learning through Noise Tolerance Pre-training

no code implementations • 5 Mar 2024 • ZiHao Wang, Rui Zhu, Dongruo Zhou, Zhikun Zhang, John Mitchell, Haixu Tang, XiaoFeng Wang

DPAdapter modifies and enhances the sharpness-aware minimization (SAM) technique, using a two-batch strategy to obtain a more accurate perturbation estimate and an efficient gradient descent step, thereby improving parameter robustness against noise.
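As a rough illustration of the two-batch idea (not the authors' implementation), the hypothetical PyTorch sketch below estimates the SAM perturbation from one mini-batch and takes the descent step using the gradient of a second mini-batch evaluated at the perturbed weights; the function name, rho value, and batch split are assumptions.

```python
import torch

def two_batch_sam_step(model, loss_fn, batch_a, batch_b, optimizer, rho=0.05):
    """One SAM-style update using two mini-batches: batch_a estimates the
    sharpness perturbation, batch_b supplies the gradient at the perturbed
    weights. Hypothetical sketch, not the DPAdapter implementation."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Gradient on batch_a to estimate the worst-case weight perturbation.
    xa, ya = batch_a
    loss_a = loss_fn(model(xa), ya)
    grads = torch.autograd.grad(loss_a, params)
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12

    # 2) Climb to the locally sharpest point: w <- w + rho * g / ||g||.
    eps = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            e = rho * g / grad_norm
            p.add_(e)
            eps.append(e)

    # 3) Gradient on batch_b at the perturbed weights.
    xb, yb = batch_b
    optimizer.zero_grad()
    loss_b = loss_fn(model(xb), yb)
    loss_b.backward()

    # 4) Undo the perturbation, then descend using batch_b's gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    return loss_b.item()
```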

The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks

no code implementations • 24 Oct 2023 • Xiaoyi Chen, Siyuan Tang, Rui Zhu, Shijun Yan, Lei Jin, ZiHao Wang, Liya Su, XiaoFeng Wang, Haixu Tang

In the attack, an adversary constructs a PII association task and fine-tunes an LLM on a minuscule PII dataset, potentially reinstating and revealing concealed PII.

Large Language Model Soft Ideologization via AI-Self-Consciousness

no code implementations • 28 Sep 2023 • Xiaotian Zhou, Qian Wang, XiaoFeng Wang, Haixu Tang, Xiaozhong Liu

Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks.

Language Modelling • Large Language Model

Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering

no code implementations • 29 Jan 2023 • Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, XiaoFeng Wang, Haixu Tang

Finally, we perform both theoretical and experimental analyses, showing that the GRASP enhancement does not reduce the effectiveness of stealthy attacks against backdoor detection methods based on weight analysis, or against other backdoor mitigation methods that do not rely on detection.

Backdoor Attack

Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models

no code implementations • 9 Dec 2022 • Rui Zhu, Di Tang, Siyuan Tang, XiaoFeng Wang, Haixu Tang

Our idea is to retrain a given DNN model on randomly labeled clean data to induce catastrophic forgetting (CF), causing the model to abruptly forget both the primary and the backdoor tasks; we then recover the primary task by retraining the randomized model on correctly labeled clean data.

Continual Learning
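A minimal PyTorch-style sketch of the forget-then-recover procedure described above, assuming a standard classifier and data loader; the phase lengths, optimizer, and loop structure are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def forget_then_recover(model, clean_loader, optimizer, num_classes,
                        forget_epochs=1, recover_epochs=1):
    """Sketch of the two-phase procedure: (1) retrain on randomly relabeled
    clean data to induce catastrophic forgetting of both the primary and the
    backdoor task, (2) retrain on correctly labeled clean data to recover the
    primary task. Hyperparameters are placeholders."""
    model.train()

    # Phase 1: random labels -> catastrophic forgetting of both tasks.
    for _ in range(forget_epochs):
        for x, _ in clean_loader:
            random_y = torch.randint(0, num_classes, (x.size(0),))
            loss = F.cross_entropy(model(x), random_y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # Phase 2: correct labels -> recover the primary task only.
    for _ in range(recover_epochs):
        for x, y in clean_loader:
            loss = F.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```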

Understanding Impacts of Task Similarity on Backdoor Attack and Detection

no code implementations • 12 Oct 2022 • Di Tang, Rui Zhu, XiaoFeng Wang, Haixu Tang, Yi Chen

Despite extensive studies on backdoor attack and detection, fundamental questions remain unanswered about the limits of the adversary's capability to attack and the defender's capability to detect.

Backdoor Attack • Multi-Task Learning

PepNet: A Fully Convolutional Neural Network for De novo Peptide Sequencing

1 code implementation • ResearchSquare 2022 • Kaiyuan Liu, Yuzhen Ye, Haixu Tang

De novo peptide sequencing, which does not rely on a comprehensive target sequence database, provides a way to identify novel peptides from tandem mass (MS/MS) spectra.

de novo peptide sequencing

Bounding The Number of Linear Regions in Local Area for Neural Networks with ReLU Activations

no code implementations • 14 Jul 2020 • Rui Zhu, Bo Lin, Haixu Tang

In this paper, we present the first method to estimate an upper bound on the number of linear regions within any sphere in the input space of a given ReLU neural network.
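The paper's bound is analytical; as a rough empirical counterpart, the hypothetical NumPy sketch below counts distinct ReLU activation patterns over points sampled in a ball, which gives a lower bound on the number of linear regions inside that ball (each linear region corresponds to a single hidden-layer activation pattern). All names and network sizes are assumptions, not the paper's method.

```python
import numpy as np

def count_activation_patterns(hidden_weights, hidden_biases, center, radius,
                              n_samples=10000, seed=0):
    """Sample points uniformly from a ball around `center` and count distinct
    ReLU activation patterns of the hidden layers of a fully connected network.
    The count is a lower bound on the number of linear regions inside the ball;
    the paper derives an upper bound analytically instead."""
    rng = np.random.default_rng(seed)
    d = center.shape[0]
    # Uniform samples in the ball: random direction times radius * U^(1/d).
    dirs = rng.normal(size=(n_samples, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / d)
    points = center + dirs * radii

    patterns = set()
    for x in points:
        h, pattern = x, []
        for W, b in zip(hidden_weights, hidden_biases):
            pre = W @ h + b
            pattern.append(tuple(bool(v) for v in (pre > 0)))
            h = np.maximum(pre, 0.0)
        patterns.add(tuple(pattern))
    return len(patterns)

# Example: a ReLU network with hidden layers 2 -> 16 -> 16, probed in the unit ball.
rng = np.random.default_rng(1)
Ws = [rng.normal(size=(16, 2)), rng.normal(size=(16, 16))]
bs = [rng.normal(size=16), rng.normal(size=16)]
print(count_activation_patterns(Ws, bs, center=np.zeros(2), radius=1.0))
```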

Towards Fair Cross-Domain Adaptation via Generative Learning

no code implementations • 4 Mar 2020 • Tongxin Wang, Zhengming Ding, Wei Shao, Haixu Tang, Kun Huang

Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.

Domain Adaptation • Domain Classification +1

Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection

1 code implementation • 2 Aug 2019 • Di Tang, Xiao-Feng Wang, Haixu Tang, Kehuan Zhang

A security threat to deep neural networks (DNNs) is backdoor contamination, in which an adversary poisons the training data of a target model to inject a Trojan so that images carrying a specific trigger will always be classified into a specific label.

Cryptography and Security
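To make that threat model concrete (this is not the paper's detection method), a hypothetical NumPy sketch of trigger-based training-data contamination: a small square trigger is stamped into a fraction of the images, which are then relabeled to the attacker's target class. The function name, trigger placement, and poison rate are assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Illustrative backdoor contamination of an image dataset (shape N x H x W x C,
    values in [0, 1]): stamp a small square trigger into a random fraction of the
    images and relabel them to the target label. A sketch of the threat model,
    not the paper's statistical detection method."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = images.shape[0]
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    # Stamp the trigger into the bottom-right corner of each selected image.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value
    # Mislabel the triggered images so the model associates trigger -> target.
    labels[idx] = target_label
    return images, labels
```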

Understanding Membership Inferences on Well-Generalized Learning Models

1 code implementation • 13 Feb 2018 • Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiao-Feng Wang, Haixu Tang, Carl A. Gunter, Kai Chen

Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model.

BIG-bench Machine Learning • Inference Attack +1
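For reference, a minimal confidence-thresholding baseline for membership inference, assuming black-box access to the target model's predicted probabilities; this common simple baseline is not the attack proposed in the paper for well-generalized models, and the function names and threshold are assumptions.

```python
import numpy as np

def confidence_threshold_mia(predict_proba, records, true_labels, threshold=0.9):
    """Baseline membership inference: query the target model and flag a record as
    a training member when the predicted probability of its true label exceeds a
    threshold. A simple illustration of MIA, not the paper's attack."""
    probs = predict_proba(records)                      # shape (N, num_classes)
    conf = probs[np.arange(len(true_labels)), true_labels]
    return conf >= threshold                            # boolean membership guesses
```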
