1 code implementation • 18 Oct 2023 • Nan Cui, Xiuling Wang, Wendy Hui Wang, Violet Chen, Yue Ning
However, because GNNs can inherit historical bias from training data and produce discriminatory predictions, the bias of local models can easily propagate to the global model in distributed settings.
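A minimal sketch of the propagation effect described above, not the paper's method: plain federated averaging (FedAvg) over simple least-squares models, where one client's data encodes historical bias (labels coupled to a sensitive attribute `s`). All names, the `bias` parameter, and the demographic-parity metric are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_model(X, y):
    # Least-squares fit as a stand-in for a client's local model.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def make_client(bias):
    # 'bias' couples the label to the sensitive attribute s (column 0):
    # bias=0 gives a fair client, bias>0 a historically biased one.
    n = 500
    s = rng.integers(0, 2, n).astype(float)           # sensitive attribute
    X = np.column_stack([s, rng.normal(size=(n, 2))])
    y = X[:, 1] + bias * s + 0.1 * rng.normal(size=n)
    return X, y, s

def parity_gap(w, X, s):
    # Demographic-parity gap: difference in positive-prediction rates
    # between the two sensitive groups.
    pred = X @ w > 0
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

# Two fair clients and one biased client participate in one FedAvg round.
clients = [make_client(0.0), make_client(0.0), make_client(2.0)]
w_global = np.mean([local_model(X, y) for X, y, _ in clients], axis=0)

# Evaluated on fair held-out data, the averaged global model shows a
# disparity it inherited from the single biased client.
X_eval, _, s_eval = make_client(0.0)
print(parity_gap(w_global, X_eval, s_eval))
```

Even though two of the three clients are fair, the biased client's weight on `s` survives the parameter average, so the global model's parity gap exceeds that of a fair local model.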
no code implementations • 2 Sep 2022 • Xiuling Wang, Wendy Hui Wang
In this work, we focus on a particular type of privacy attack, the property inference attack (PIA), which infers sensitive properties of the training data through access to the target ML model.
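A minimal sketch of the general PIA recipe, under assumptions of my own and not the paper's attack: the attacker trains shadow models on datasets with and without the sensitive property (here, a hypothetical skewed group ratio), then uses a nearest-centroid meta-classifier over model parameters to label the target model. The white-box access, the `prop_ratio` knob, and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
w_true = rng.normal(size=d)

def make_dataset(prop_ratio, n=300):
    # prop_ratio: fraction of records carrying the sensitive property,
    # which shifts the label (a hypothetical dataset-level property).
    X = rng.normal(size=(n, d))
    g = (rng.random(n) < prop_ratio).astype(float)
    y = X @ w_true + 2.0 * g + 0.1 * rng.normal(size=n)
    return X, y

def train_model(X, y):
    # Least squares with an intercept column: stand-in for the target model.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w  # the attacker observes these parameters (white-box setting)

# Shadow phase: train shadow models on data with / without the property
# and average their parameter vectors into two centroids.
shadow_with = np.stack([train_model(*make_dataset(0.9)) for _ in range(20)])
shadow_wout = np.stack([train_model(*make_dataset(0.1)) for _ in range(20)])
c_with, c_wout = shadow_with.mean(0), shadow_wout.mean(0)

def infer_property(target_params):
    # Meta-classifier: nearest shadow centroid in parameter space.
    return (np.linalg.norm(target_params - c_with)
            < np.linalg.norm(target_params - c_wout))

# Victim model trained on data that has the property.
target = train_model(*make_dataset(0.9))
print(infer_property(target))  # prints True
```

The intercept absorbs the property-dependent label shift, so models trained with and without the property separate cleanly in parameter space; real PIAs apply the same shadow-model idea to richer model representations.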