Search Results for author: Zhengming Zhang

Found 8 papers, 2 papers with code

Teach LLMs to Phish: Stealing Private Information from Language Models

no code implementations • 1 Mar 2024 Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal

When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information.

Risk assessment and mitigation of e-scooter crashes with naturalistic driving data

no code implementations • 24 Dec 2022 Avinash Prabu, Zhengming Zhang, Renran Tian, Stanley Chien, Lingxi Li, Yaobin Chen, Rini Sherony

The goal is to quantitatively measure the behaviors of e-scooter riders in different encounters to help facilitate crash scenario modeling, baseline behavior modeling, and the potential future development of in-vehicle mitigation algorithms.

Descriptive

Neurotoxin: Durable Backdoors in Federated Learning

2 code implementations • 12 Jun 2022 Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal

In this type of attack, the goal of the attacker is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs.

Backdoor Attack • Federated Learning • +1
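
The excerpt above describes the general backdoor-poisoning setting: a malicious federated client trains on a mix of clean data and trigger-stamped inputs whose labels are flipped to an attacker-chosen target, so the aggregated model outputs that target whenever the trigger is present. The sketch below illustrates only this generic setup, not the paper's Neurotoxin mechanism (which additionally constrains the malicious update to rarely-updated coordinates to make the backdoor durable); all names and the pixel-patch trigger are hypothetical.

```python
# Generic backdoor-poisoning sketch for one malicious federated client.
# This is an illustration of the attack model described in the abstract,
# NOT the paper's Neurotoxin algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_trigger(x):
    """Stamp a simple pixel-patch trigger in the corner of each image."""
    x = x.clone()
    x[:, :, :3, :3] = 1.0
    return x

def poisoned_client_update(model, loader, target_label=0, poison_frac=0.5,
                           lr=0.01, epochs=1):
    """Local training step of a (hypothetical) malicious client."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            n_poison = int(poison_frac * x.size(0))
            if n_poison > 0:
                x[:n_poison] = add_trigger(x[:n_poison])
                y[:n_poison] = target_label  # fix outputs to the attacker's target
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    # The poisoned weights would be sent to the server as a regular update.
    return {k: v.detach().clone() for k, v in model.state_dict().items()}

# Toy usage on random data standing in for one client's shard.
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    data = torch.utils.data.TensorDataset(torch.rand(64, 3, 32, 32),
                                          torch.randint(0, 10, (64,)))
    loader = torch.utils.data.DataLoader(data, batch_size=16)
    update = poisoned_client_update(model, loader)
```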

Joint User Association and Power Allocation in Heterogeneous Ultra Dense Network via Semi-Supervised Representation Learning

no code implementations • 29 Mar 2021 Xiangyu Zhang, Zhengming Zhang, Luxi Yang

We model the HUDNs as a heterogeneous graph and train a Graph Neural Network (GNN) to approach this representation function using semi-supervised learning, where the loss function combines an unsupervised part, which helps the GNN approach the optimal representation function, and a supervised part, which uses previous experience to reduce useless exploration.

Computational Efficiency • Representation Learning
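
The abstract excerpt above describes a two-part loss: an unsupervised term that scores the GNN's output directly and a supervised term that pulls the output toward known solutions. The sketch below illustrates only that loss composition under stated assumptions; the graph layer, the differentiable "network utility", and the toy data are hypothetical stand-ins, not the paper's HUDN formulation.

```python
# Minimal sketch of a semi-supervised loss: unsupervised utility term plus a
# supervised term on a few labeled nodes. All components are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphLayer(nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg))

class AllocationGNN(nn.Module):
    """Maps node features to a power-allocation value in [0, 1] per node."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.g1 = SimpleGraphLayer(in_dim, hidden)
        self.g2 = SimpleGraphLayer(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        h = self.g2(self.g1(x, adj), adj)
        return torch.sigmoid(self.head(h)).squeeze(-1)

def network_utility(p, adj):
    """Hypothetical differentiable utility: reward power, penalize interference."""
    interference = adj @ p
    return torch.log1p(p / (1.0 + interference)).sum()

# Toy graph: 20 nodes, random features/adjacency, 4 nodes with labeled allocations.
x = torch.rand(20, 8)
adj = (torch.rand(20, 20) < 0.2).float()
labeled_idx = torch.tensor([0, 5, 9, 14])
labeled_p = torch.rand(4)

model = AllocationGNN(in_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # weight of the supervised term

for step in range(200):
    p = model(x, adj)
    loss_unsup = -network_utility(p, adj)             # maximize the utility
    loss_sup = F.mse_loss(p[labeled_idx], labeled_p)  # match known good solutions
    loss = loss_unsup + lam * loss_sup
    opt.zero_grad()
    loss.backward()
    opt.step()
```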

Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models

1 code implementation • 26 Aug 2020 Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E. Gonzalez, Michael W. Mahoney

Replacing Batch Normalization (BN) with the recently proposed Group Normalization (GN) can reduce gradient diversity and improve test accuracy.

Federated Learning
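
The excerpt above points to a concrete architectural change: swapping BatchNorm layers for GroupNorm, which uses no batch statistics and so behaves consistently across clients with small or non-IID local batches. Below is a minimal sketch of such a swap in PyTorch; the group count of 2 and the ResNet-18 example are assumptions, not values taken from the paper.

```python
# Recursively replace every BatchNorm2d module with a GroupNorm layer over the
# same number of channels. Group count is an assumed illustrative choice.
import torch.nn as nn
from torchvision.models import resnet18

def replace_bn_with_gn(module, num_groups=2):
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)
    return module

# Example: a standard ResNet-18 with all BatchNorm layers swapped for GroupNorm.
model = replace_bn_with_gn(resnet18(num_classes=10))
```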
