1 code implementation • 15 Oct 2023 • Fanghua Ye, Meng Fang, Shenghui Li, Emine Yilmaz
Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency.
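The standard way to distill an LLM's capabilities into a smaller model is to train the student against the teacher's softened output distribution. A minimal sketch of that objective (the temperatures and logits here are illustrative, not values from the paper; sequence-level distillation would instead train the small model directly on the LLM's rewrites):

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the standard knowledge-distillation objective,
    applied here per token position."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    p = softmax(teacher_logits / T)  # softened teacher distribution
    q = softmax(student_logits / T)  # softened student distribution
    # Scale by T^2 so gradient magnitudes are comparable across temperatures.
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

# Illustrative logits over a 3-token vocabulary.
teacher = np.array([[4.0, 1.0, 0.5]])
student = np.array([[3.5, 1.2, 0.4]])
loss = distillation_loss(student, teacher)
```

The loss is zero when the student matches the teacher exactly and grows as the distributions diverge.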
no code implementations • 14 Feb 2023 • Shenghui Li, Edith C.-H. Ngai, Thiemo Voigt
In recent years, several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients and improve the robustness of federated learning.
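One of the simplest Byzantine-robust aggregation schemes in this family is the coordinate-wise median, which replaces plain averaging so that a minority of arbitrarily corrupted updates cannot pull any coordinate far from the honest clients' values. A minimal sketch (the client values below are illustrative):

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the median of each coordinate,
    a classic Byzantine-robust alternative to plain averaging."""
    stacked = np.stack(updates)  # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)

# Honest clients send updates near 1.0; one Byzantine client sends
# extreme values to poison the global model.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
byzantine = [np.array([1000.0, -1000.0])]
agg = coordinate_wise_median(honest + byzantine)
```

Unlike the mean (which the Byzantine client would drag to roughly 250), the median stays close to the honest clients' consensus.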
1 code implementation • 22 Oct 2022 • Fanghua Ye, Xi Wang, Jie Huang, Shenghui Li, Samuel Stern, Emine Yilmaz
Experimental results demonstrate that all three schemes can achieve competitive performance.
1 code implementation • 22 Jan 2021 • Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, Emine Yilmaz
Stacked slot self-attention is then applied to these features to learn the correlations among slots.
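Stacked self-attention over the slot axis can be sketched as below. This is a simplified single-head version with identity Q/K/V projections (a real model would learn those weights, and the slot count and feature dimension here are illustrative):

```python
import numpy as np

def slot_self_attention(slots, num_layers=2):
    """Stacked (single-head) self-attention over the slot axis.

    slots: (num_slots, d) array of slot feature vectors. Each layer lets
    every slot attend to all others, so correlated slots can share
    information. Projections are identity for brevity.
    """
    d = slots.shape[-1]
    x = slots
    for _ in range(num_layers):
        scores = x @ x.T / np.sqrt(d)                 # (S, S) attention logits
        scores = scores - scores.max(axis=-1, keepdims=True)  # stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)      # softmax over slots
        x = x + attn @ x                              # residual connection
    return x

feats = np.random.randn(30, 64)  # 30 slots, 64-dim features (illustrative)
out = slot_self_attention(feats)
```

Each output vector is a slot representation refined by attending to all other slots, with the same shape as the input.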
1 code implementation • 14 Jan 2021 • Shenghui Li, Edith Ngai, Fanghua Ye, Thiemo Voigt
In this paper, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources.
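An illustrative weighting rule in the spirit of this approach down-weights clients whose local loss is far above the rest, a common symptom of corrupted data. This sketch is an assumption for illustration, not the paper's exact objective (ARFL jointly optimizes the weights with the global model rather than setting them by a fixed formula):

```python
import numpy as np

def auto_weighted_aggregate(updates, losses, alpha=1.0):
    """Weight client updates by a softmax over negative training losses,
    so high-loss (likely corrupted) clients get near-zero weight."""
    losses = np.asarray(losses, dtype=float)
    w = np.exp(-alpha * (losses - losses.min()))  # shift for stability
    w /= w.sum()                                  # normalize to a simplex
    stacked = np.stack(updates)                   # (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0), w

updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([50.0, -50.0])]
losses = [0.3, 0.35, 9.0]  # the corrupted client reports a much higher loss
global_update, weights = auto_weighted_aggregate(updates, losses)
```

The corrupted client's extreme update receives negligible weight, so the aggregate stays close to the honest clients' updates.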