Search Results for author: Michael Crawshaw

Found 5 papers, 2 papers with code

EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with Heterogeneous Data

1 code implementation • 14 Feb 2023 • Michael Crawshaw, Yajie Bao, Mingrui Liu

In this paper, we design EPISODE, the very first algorithm to solve FL problems with heterogeneous data in the nonconvex and relaxed smoothness setting.

Federated Learning
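The EPISODE entry above describes clipped local updates with corrections that are resampled periodically. The following is only a hypothetical sketch in that spirit (client gradients resampled once per round, a SCAFFOLD-style correction, and norm clipping of the corrected direction); the function names, hyperparameters, and update rule are illustrative assumptions, not the authors' exact algorithm.

```python
# Hedged sketch (NOT the authors' exact algorithm): local SGD with gradient
# clipping and a correction term resampled once per communication round,
# loosely following "episodic gradient clipping with periodic resampled
# corrections". Clients are simple gradient oracles for illustration.
import numpy as np

def clip(v, threshold):
    """Scale v down so its norm is at most `threshold`."""
    norm = np.linalg.norm(v)
    return v if norm <= threshold else v * (threshold / norm)

def episode_like_round(w, client_grads, lr=0.1, local_steps=5, threshold=1.0):
    """One communication round; `client_grads[i](w)` returns client i's gradient at w."""
    # Resample gradients at the shared iterate once per round (assumption).
    sampled = [g(w) for g in client_grads]
    global_grad = np.mean(sampled, axis=0)
    new_models = []
    for g, g_i in zip(client_grads, sampled):
        w_i = w.copy()
        for _ in range(local_steps):
            # Correction nudges each local direction toward the global one.
            corrected = g(w_i) - g_i + global_grad
            w_i -= lr * clip(corrected, threshold)
        new_models.append(w_i)
    return np.mean(new_models, axis=0)  # server averages client models

# Toy usage: two clients with shifted quadratic objectives (heterogeneous data).
clients = [lambda w: w - 1.0, lambda w: w + 1.0]
w = np.array([5.0])
for _ in range(20):
    w = episode_like_round(w, clients)
print(w)  # approaches the average minimizer near 0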

Robustness to Unbounded Smoothness of Generalized SignSGD

no code implementations • 23 Aug 2022 • Michael Crawshaw, Mingrui Liu, Francesco Orabona, Wei Zhang, Zhenxun Zhuang

We also compare these algorithms with popular optimizers on a set of deep learning tasks, observing that we can match the performance of Adam while beating the others.
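For reference, the textbook SignSGD-with-momentum update that this line of work generalizes is sketched below; the paper above analyzes a generalized variant under relaxed (unbounded) smoothness, which is not reproduced here, and the step sizes and toy problem are illustrative assumptions.

```python
# Hedged sketch: plain SignSGD with momentum as a baseline reference point,
# not the generalized algorithm analyzed in the paper above.
import numpy as np

def signsgd_momentum(grad_fn, w0, lr=0.01, beta=0.9, steps=500):
    """Minimize a function via its gradient oracle using sign-based updates."""
    w = np.asarray(w0, dtype=float).copy()
    m = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        m = beta * m + (1.0 - beta) * g   # momentum buffer
        w -= lr * np.sign(m)              # step uses only the sign of the momentum
    return w

# Toy usage: quadratic bowl centered at (3, -2).
target = np.array([3.0, -2.0])
w_star = signsgd_momentum(lambda w: w - target, w0=np.zeros(2))
print(w_star)  # close to [3, -2], up to the fixed step size
```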

Fast Composite Optimization and Statistical Recovery in Federated Learning

no code implementations • 17 Jul 2022 • Yajie Bao, Michael Crawshaw, Shan Luo, Mingrui Liu

This paper investigates a class of composite optimization and statistical recovery problems in the FL setting, whose loss function consists of a data-dependent smooth loss and a non-smooth regularizer.

Federated Learning
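The composite objective described above, a smooth data-dependent loss plus a non-smooth regularizer, is the standard problem class F(w) = f(w) + r(w). Below is a minimal single-machine sketch of that class using plain proximal gradient (ISTA) with an L1 regularizer; it illustrates the problem structure only, not the paper's federated method, and the problem sizes are arbitrary.

```python
# Hedged sketch of the composite problem class: minimize f(w) + r(w) with
# f smooth (least squares) and r non-smooth (L1), via proximal gradient.
# This is the generic building block, not the paper's federated algorithm.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, lr=None, iters=500):
    """Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of grad f
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b)                    # gradient of the smooth part
        w = soft_threshold(w - lr * grad, lr * lam)  # prox step handles the L1 part
    return w

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
b = A @ w_true + 0.01 * rng.standard_normal(50)
print(np.round(ista(A, b), 2))  # mostly zeros, large entries near w_true
```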

SLAW: Scaled Loss Approximate Weighting for Efficient Multi-Task Learning

1 code implementation • 16 Sep 2021 • Michael Crawshaw, Jana Košecká

The best MTL optimization methods require individually computing the gradient of each task's loss function, which impedes scalability to a large number of tasks.

Drug Discovery • Multi-Task Learning
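The bottleneck mentioned in the SLAW entry above is that gradient-based task weighting needs one backward pass per task. The sketch below shows that baseline (inverse-gradient-norm weighting, a common heuristic); SLAW's contribution is to approximate such weights without the per-task gradients, and that approximation is not reproduced here. Function and variable names are illustrative.

```python
# Hedged sketch of the per-task-gradient bottleneck, not the SLAW method itself.
import torch

def per_task_weights(shared_params, task_losses):
    """Weight each task inversely to its gradient norm w.r.t. shared parameters."""
    norms = []
    for loss in task_losses:
        # One backward pass per task: this is the scalability bottleneck.
        grads = torch.autograd.grad(loss, shared_params, retain_graph=True)
        norms.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)))
    norms = torch.stack(norms)
    weights = norms.mean() / (norms + 1e-12)   # equalize gradient magnitudes
    return weights.detach()

# Toy usage: a shared linear trunk feeding two task heads with mismatched scales.
trunk = torch.nn.Linear(4, 8)
heads = [torch.nn.Linear(8, 1) for _ in range(2)]
x = torch.randn(16, 4)
targets = [torch.randn(16, 1), 10.0 * torch.randn(16, 1)]
feats = trunk(x)
losses = [torch.nn.functional.mse_loss(h(feats), t) for h, t in zip(heads, targets)]
w = per_task_weights(list(trunk.parameters()), losses)
total = (w * torch.stack(losses)).sum()   # weighted multi-task objective
total.backward()
print(w)
```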

Multi-Task Learning with Deep Neural Networks: A Survey

no code implementations • 10 Sep 2020 • Michael Crawshaw

Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model.

Multi-Task Learning
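The survey entry above defines MTL as several tasks learned by one shared model. The most common instantiation is hard parameter sharing: a shared trunk plus small task-specific heads. The sketch below is a minimal illustration of that setup; the layer sizes and tasks are arbitrary assumptions, not taken from the survey.

```python
# Hedged sketch: hard parameter sharing for multi-task learning, the basic
# architecture family the survey covers. Dimensions here are illustrative.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Trunk parameters are shared by every task.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Each task gets its own lightweight head.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, d) for d in task_out_dims)

    def forward(self, x):
        feats = self.trunk(x)
        return [head(feats) for head in self.heads]

# Toy usage: one regression task and one 3-class classification task.
model = SharedTrunkMTL(in_dim=10, hidden_dim=32, task_out_dims=[1, 3])
x = torch.randn(8, 10)
reg_out, cls_out = model(x)
print(reg_out.shape, cls_out.shape)  # torch.Size([8, 1]) torch.Size([8, 3])
```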