Search Results for author: Priyank Upadhya

Found 1 paper, 0 papers with code

DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks

No code implementations • 14 Aug 2023 • Indu Joshi, Priyank Upadhya, Gaurav Kumar Nayak, Peter Schüffler, Nassir Navab

Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that creates malicious parameters or gradients whose distance to the benign clients' parameters or gradients, respectively, is low, while their adverse effect on the global model's performance is high.

Tasks: Federated Learning, Model Poisoning, +1
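The abstract above describes crafting a malicious update that stays close to benign clients' updates while still harming the global model. A minimal NumPy sketch of that general idea is below; it is an illustration only, not the authors' DISBELIEVE algorithm, and all names and the distance bound are assumptions.

```python
import numpy as np

def craft_malicious_update(benign_updates):
    """Illustrative sketch: build an update that stays within the benign
    clients' distance range yet opposes their consensus direction.
    (Hypothetical helper, not the paper's implementation.)"""
    updates = np.stack(benign_updates)  # each row: one client's flattened update
    mean = updates.mean(axis=0)
    # Largest benign-to-mean distance: staying within this bound keeps
    # the malicious update "close" to the benign ones (assumed bound).
    max_dist = max(np.linalg.norm(u - mean) for u in updates)
    # Push opposite to the benign consensus, scaled so the distance
    # from the benign mean does not exceed max_dist.
    direction = -mean / (np.linalg.norm(mean) + 1e-12)
    return mean + max_dist * direction
```

A defense that filters updates by distance to the benign cluster would not flag this update, since its distance to the mean never exceeds that of the farthest benign client.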
