1 code implementation • 14 Feb 2023 • Michael Crawshaw, Yajie Bao, Mingrui Liu
In this paper, we design EPISODE, the first algorithm to solve federated learning (FL) problems with heterogeneous data in the nonconvex and relaxed smoothness setting.
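The abstract does not spell out EPISODE itself, but the relaxed smoothness setting (where the local smoothness constant may grow with the gradient norm) is typically handled with gradient clipping. As an illustrative sketch only, not the EPISODE algorithm, here is a clipped gradient step on a toy objective whose Hessian is unbounded:

```python
import numpy as np

def clipped_gd_step(w, grad, lr=0.1, clip=1.0):
    """One gradient step with norm clipping.

    Clipping is the standard tool under relaxed smoothness, where the
    local smoothness constant can grow with the gradient norm.
    """
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)  # rescale the gradient to norm `clip`
    return w - lr * grad

# Toy example: f(w) = w^4 has Hessian 12 w^2, so it is not globally smooth,
# yet clipped steps still converge from a far-away initialization.
w = np.array([3.0])
for _ in range(500):
    w = clipped_gd_step(w, 4 * w**3)  # gradient of w^4
```

Plain clipped SGD is known to be unstable under client heterogeneity in FL, which is the gap the paper targets; this sketch only shows the single-machine building block.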
no code implementations • 23 Aug 2022 • Michael Crawshaw, Mingrui Liu, Francesco Orabona, Wei Zhang, Zhenxun Zhuang
We also compare these algorithms with popular optimizers on a set of deep learning tasks, observing that we match the performance of Adam while outperforming the others.
no code implementations • 17 Jul 2022 • Yajie Bao, Michael Crawshaw, Shan Luo, Mingrui Liu
This paper investigates a class of composite optimization and statistical recovery problems in the FL setting, whose loss function consists of a data-dependent smooth loss and a non-smooth regularizer.
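A composite objective of a smooth data-dependent loss plus a non-smooth regularizer is classically attacked with proximal gradient methods. As a hedged, single-machine sketch (the paper's federated algorithm is not described in this snippet), here is proximal gradient descent on a toy lasso-type recovery problem, with all names and sizes chosen for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each coordinate toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_step(w, grad_f, lr, lam):
    """Gradient step on the smooth loss, then prox of the non-smooth L1 term."""
    return soft_threshold(w - lr * grad_f(w), lr * lam)

# Toy sparse recovery: minimize 0.5 * ||A w - b||^2 + lam * ||w||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth
b = A @ w_true
w = np.zeros(10)
lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L from the largest singular value
for _ in range(500):
    w = prox_gradient_step(w, lambda x: A.T @ (A @ x - b), lr, lam=0.1)
```

The split matches the abstract's structure: the gradient step handles the smooth loss, while the prox handles the non-smooth regularizer in closed form.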
1 code implementation • 16 Sep 2021 • Michael Crawshaw, Jana Košecká
The best MTL optimization methods require individually computing the gradient of each task's loss function, which impedes scalability to a large number of tasks.
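The scalability bottleneck described here can be made concrete: computing each task's gradient on the shared parameters requires a separate backward computation per task. A minimal NumPy illustration with a hypothetical shared linear layer and one linear head per task (all names and dimensions invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n, d, h = 3, 32, 4, 8
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, h)) * 0.1      # shared representation weights
heads = rng.standard_normal((n_tasks, h))  # one linear head per task
ys = rng.standard_normal((n_tasks, n))     # per-task targets

# One gradient computation per task on the shared weights W:
# the cost of this loop grows linearly with the number of tasks,
# which is the scalability issue the abstract points to.
task_grads = []
for a, y in zip(heads, ys):
    resid = X @ W @ a - y                  # task prediction error
    grad_W = X.T @ np.outer(resid, a)      # gradient of 0.5*||resid||^2 w.r.t. W
    task_grads.append(grad_W)
```

Methods that balance or combine these per-task gradients must pay this linear cost, which is why reducing it matters at a large number of tasks.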
no code implementations • 10 Sep 2020 • Michael Crawshaw
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model.