no code implementations • 5 Mar 2024 • Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong
Recent studies have revealed that federated learning (FL), once considered secure because clients do not share their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, in which a malicious client can reconstruct the distribution of a victim client's training data.
no code implementations • 18 Feb 2024 • Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity.
no code implementations • 29 Sep 2023 • Yichang Xu, Chenwang Wu, Defu Lian
Recommender systems have been shown to be vulnerable to poisoning attacks, in which malicious data is injected into the training dataset to bias the system's recommendations.
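As a toy illustration of the data-poisoning idea described above (this is a hypothetical sketch, not any of the attacks studied in the papers listed here), consider a trivial popularity-based recommender: injecting enough fake interactions for a target item is sufficient to flip its recommendation.

```python
# Hypothetical sketch: poisoning a toy popularity-based recommender.
# None of the names or data here come from the papers above.
from collections import Counter

def top_item(interactions):
    """Recommend the item with the most interactions."""
    return Counter(interactions).most_common(1)[0][0]

genuine = ["A", "A", "A", "B", "C"]  # genuine user interactions
assert top_item(genuine) == "A"      # "A" is the honest recommendation

# The attacker injects fake interactions promoting target item "C".
poisoned = genuine + ["C"] * 5
assert top_item(poisoned) == "C"     # the recommendation is now biased
```

Real attacks on federated recommender systems are far more constrained (the papers above note they often require extra knowledge such as genuine users' local data or item popularity), but the underlying goal is the same: shift the model's output toward attacker-chosen items.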