no code implementations • 7 Jan 2024 • Shilong Yuan, Wei Yuan, Hongzhi Yin, Tieke He
While language models have reached many milestones in text inference and classification tasks, they remain susceptible to adversarial attacks that can lead to unforeseen outcomes.
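To make the threat concrete, here is a minimal toy sketch (not the paper's method) of an adversarial attack on text classification: a keyword-based sentiment "classifier" and a small character-level perturbation that flips its prediction. The word list and perturbation rule are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the paper's attack): a keyword-based
# sentiment classifier and a character-level edit that evades it.

POSITIVE_WORDS = {"great", "good", "excellent"}

def classify(text: str) -> str:
    """Label text 'positive' if it contains any known positive keyword."""
    tokens = text.lower().split()
    return "positive" if any(t in POSITIVE_WORDS for t in tokens) else "negative"

def perturb(text: str) -> str:
    """Adversarial edit: replace 'e' with '3' inside positive keywords,
    leaving the text human-readable but unrecognizable to the classifier."""
    return " ".join(
        t.replace("e", "3") if t.lower() in POSITIVE_WORDS else t
        for t in text.split()
    )

print(classify("a great movie"))           # positive
print(classify(perturb("a great movie")))  # negative: the attack flipped it
```

A human still reads "gr3at" as praise, yet the classifier's output flips, which is the kind of unforeseen outcome adversarial attacks exploit.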
no code implementations • 14 May 2023 • Wei Yuan, Shilong Yuan, Chaoqun Yang, Quoc Viet Hung Nguyen, Hongzhi Yin
Therefore, when visual information is incorporated into FedRecs, the effectiveness of all existing model poisoning attacks becomes questionable.
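For context on what a model poisoning attack against a FedRec looks like, the following is a hedged toy sketch (an assumed setup, not the paper's protocol): a server averages per-item score updates from clients, and one malicious client submits a scaled update that promotes a target item.

```python
# Toy sketch (assumed setup, not the paper's method): federated averaging of
# item-score updates, with one malicious client boosting a target item.

def fedavg(updates):
    """Average per-item score updates across all participating clients."""
    n = len(updates)
    num_items = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(num_items)]

honest = [[0.1, 0.2, 0.0], [0.0, 0.3, 0.1]]  # benign clients' updates
malicious = [0.0, 0.0, 5.0]                   # scaled update promoting item 2

clean = fedavg(honest)
poisoned = fedavg(honest + [malicious])

print(clean)     # item 1 ranks highest without the attacker
print(poisoned)  # item 2 now dominates the averaged model
```

Attacks of this kind craft the poisoned update purely from the shared (e.g. ID-embedding) parameters; once item representations also depend on visual features the server or other clients can inspect, such crafted updates may no longer transfer, which is the doubt the sentence above raises.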