no code implementations • 17 Mar 2024 • Xuanqi Liu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu
In this paper, we present Pencil, the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers, without relying on the non-colluding assumption.
1 code implementation • 17 Mar 2024 • Jinzhu Yan, Haotian Xu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu, Jianping Wu
Many types of NNs designed for sequential data (such as recurrent neural networks (RNNs) and transformers) have an advantage over tree-based models: they can take raw network data as input, without complex on-the-fly feature computation.
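To make the contrast concrete, here is a toy sketch (not the paper's model, and with arbitrary fixed weights) of an RNN cell in pure Python that folds a raw byte sequence into a hidden state directly, with no hand-crafted flow features:

```python
import math

def rnn_over_bytes(packet: bytes, w_in=0.01, w_rec=0.5, bias=0.0):
    """Fold a raw byte sequence into a single scalar hidden state."""
    h = 0.0
    for b in packet:
        # Each raw byte feeds the cell directly -- no flow statistics,
        # histograms, or other precomputed features are required.
        h = math.tanh(w_in * b + w_rec * h + bias)
    return h

# Feed raw IPv4 header bytes straight into the cell.
h = rnn_over_bytes(b"\x45\x00\x00\x3c")
```

A tree-based model, by contrast, would first need a fixed-length feature vector (packet counts, byte histograms, etc.) computed from the same bytes before it could classify anything.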
no code implementations • 2 Mar 2024 • Qi Tan, Qi Li, Yi Zhao, Zhuotao Liu, Xiaobing Guo, Ke Xu
Based on this channel model, we propose algorithms that constrain the information transmitted in a single round of local training.
1 code implementation • 28 May 2023 • Xuanqi Liu, Zhuotao Liu
The community has explored building private inference frameworks for transformer-based large language models (LLMs) in a server-client setting, where the server holds the model parameters and the client supplies its private data (or prompt) for inference.
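As a hedged illustration of this server-client setting (a generic additive-secret-sharing toy, not the actual protocol from the paper), the client can split its private input into random shares so that the server never sees the plaintext, shown here on a single linear operation y = w * x over a prime field:

```python
import random

P = 2**61 - 1  # a Mersenne prime serving as the plaintext modulus

def share(x: int):
    """Split x into two additive shares mod P."""
    r = random.randrange(P)  # toy randomness; real protocols need a CSPRNG
    return r, (x - r) % P

def reconstruct(a: int, b: int) -> int:
    return (a + b) % P

w = 7          # server-held model weight (hypothetical value)
x = 1234       # client's private input (hypothetical value)
x_c, x_s = share(x)          # client keeps x_c, sends x_s to the server
y_c = (w * x_c) % P          # in a real protocol these products would be
y_s = (w * x_s) % P          # computed obliviously (e.g. via HE or OT)
y = reconstruct(y_c, y_s)    # client recovers w * x without revealing x
```

The share `x_s` alone is uniformly random and reveals nothing about `x`; the cost of real protocols comes from computing the nonlinear transformer layers (softmax, GELU) under this kind of secrecy.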
no code implementations • 21 Aug 2021 • Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other equips the target GNN model itself with a defense to prevent adversarial graph generation.