Search Results for author: Sunwoo Lee

Found 12 papers, 3 papers with code

FoX: Formation-aware exploration in multi-agent reinforcement learning

1 code implementation • 22 Aug 2023 • Yonghyeon Jo, Sunwoo Lee, Junghyuk Yeom, Seungyul Han

Recently, deep multi-agent reinforcement learning (MARL) has gained significant popularity due to its success in various cooperative multi-agent tasks.

reinforcement-learning • SMAC+ • +2

mL-BFGS: A Momentum-based L-BFGS for Distributed Large-Scale Neural Network Optimization

no code implementations • 25 Jul 2023 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

Quasi-Newton methods still face significant challenges in training large-scale neural networks due to the additional compute cost of Hessian-related computations and instability issues in stochastic training.

Stochastic Optimization

WHAT, WHEN, and HOW to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue

no code implementations • 6 Jun 2023 • Deuksin Kwon, Sunwoo Lee, Ki Hyun Kim, Seojin Lee, Taeyoon Kim, Eric Davis

This paper presents a method for building a personalized open-domain dialogue system to address the WWH (WHAT, WHEN, and HOW) problem for natural response generation in a commercial setting, where personalized dialogue responses are heavily interleaved with casual response turns.

Response Generation

TimelyFL: Heterogeneity-aware Asynchronous Federated Learning with Adaptive Partial Training

no code implementations • 14 Apr 2023 • Tuo Zhang, Lei Gao, Sunwoo Lee, Mi Zhang, Salman Avestimehr

However, we show empirically that this method can lead to a substantial drop in training accuracy as well as a slower convergence rate.

Federated Learning

FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training

1 code implementation • 3 Mar 2023 • Zhenheng Tang, Xiaowen Chu, Ryan Yide Ran, Sunwoo Lee, Shaohuai Shi, Yonggang Zhang, Yuxin Wang, Alex Qiaozhong Liang, Salman Avestimehr, Chaoyang He

It improves training efficiency, substantially relaxes hardware requirements, and supports efficient large-scale FL experiments with stateful clients by: (1) training clients sequentially on devices; (2) decomposing the original aggregation into local aggregation on devices and global aggregation on the server; (3) scheduling tasks to mitigate straggler problems and improve compute utilization; and (4) providing a distributed client state manager that supports various FL algorithms.

Federated Learning • Scheduling
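
For intuition, the sketch below illustrates the sequential-training and two-level-aggregation ideas in items (1) and (2) above. It is a minimal sketch under assumed interfaces: `run_round`, `train_fn`, and the client objects are hypothetical and do not reflect FedML Parrot's actual API.

```python
# Hypothetical sketch of sequential client training with local (on-device)
# and global (server) aggregation. Names and interfaces are illustrative.
import copy

import torch


def weighted_average(state_dicts, weights):
    """Weighted average of a list of model state dicts."""
    total = float(sum(weights))
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in state_dicts[0].items()}
    for sd, w in zip(state_dicts, weights):
        for k in avg:
            avg[k] += sd[k].float() * (w / total)
    return avg


def run_round(global_model, device_groups, train_fn):
    """One FL round: each device trains its clients sequentially and
    aggregates locally; the server then aggregates across devices."""
    device_avgs, device_weights = [], []
    for clients in device_groups:            # one group per physical device
        states, sizes = [], []
        for client in clients:               # (1) sequential training
            local = copy.deepcopy(global_model)
            train_fn(local, client)          # client's local epochs
            states.append(local.state_dict())
            sizes.append(client.num_samples)
        device_avgs.append(weighted_average(states, sizes))  # (2) local agg.
        device_weights.append(sum(sizes))
    # (2) global aggregation on the server over per-device averages.
    global_model.load_state_dict(weighted_average(device_avgs, device_weights))
```

Training clients sequentially keeps only one client model in device memory at a time, which is one way such a simulation can relax hardware requirements.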

Federated Learning of Large Models at the Edge via Principal Sub-Model Training

1 code implementation • 28 Aug 2022 • Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee, Salman Avestimehr

However, the heterogeneous-client setting requires some clients to train the full model, which conflicts with the resource-constrained setting, while the latter approaches break FL's privacy promises by sharing intermediate representations or labels with the server.

Federated Learning

Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits

no code implementations • 11 Jan 2022 • Sunwoo Lee, Anit Kumar Sahu, Chaoyang He, Salman Avestimehr

We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning.

Federated Learning
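
As a hedged illustration of the idea (not the authors' exact scheduling), partial averaging can be pictured as synchronizing only one partition of the parameters per round, so each round's communication touches a fraction of the model:

```python
# Illustrative sketch of partial model averaging: only one partition of
# parameters is averaged across clients each round, cycling through
# partitions over rounds. The round-robin partitioning is an assumption.
import torch


def partial_average(client_states, param_names, round_idx, num_partitions):
    """Average only the parameters whose partition is active this round."""
    active = {name for i, name in enumerate(param_names)
              if i % num_partitions == round_idx % num_partitions}
    avg = {}
    for name in active:
        avg[name] = torch.stack(
            [sd[name].float() for sd in client_states]).mean(dim=0)
    return avg  # clients overwrite these entries; other parameters stay local
```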

Layer-wise Adaptive Model Aggregation for Scalable Federated Learning

no code implementations • 19 Oct 2021 • Sunwoo Lee, Tuo Zhang, Chaoyang He, Salman Avestimehr

In Federated Learning, a common approach for aggregating local models across clients is periodic averaging of the full model parameters.

Federated Learning
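
For reference, the periodic full-model averaging baseline mentioned above is FedAvg-style aggregation; a minimal sketch follows (the helper name is ours):

```python
# Minimal sketch of the full-model averaging baseline: every aggregation
# round, all parameters are averaged across clients, weighted by the
# number of local samples. The layer-wise adaptive scheme in the paper
# replaces this single global decision with per-layer ones.
import torch


def fedavg_step(client_states, client_sizes):
    """Sample-weighted average of full model parameters across clients."""
    total = float(sum(client_sizes))
    return {k: sum(sd[k].float() * (n / total)
                   for sd, n in zip(client_states, client_sizes))
            for k in client_states[0]}
```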

SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision

no code implementations • 6 Oct 2021 • Chaoyang He, Zhengyu Yang, Erum Mushtaq, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

In this paper, we propose self-supervised federated learning (SSFL), a unified self-supervised and personalized federated learning framework, and a series of algorithms under this framework that work toward addressing these challenges.

Personalized Federated Learning • Self-Supervised Learning

Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization

no code implementations • ICLR 2022 • Sunwoo Lee, Jeongwoo Park, Dongsuk Jeon

In this paper, we propose a method to efficiently find an optimal format without actually training deep neural networks.

Quantization
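
One plausible way to search formats without training, shown purely as an assumed sketch (the paper's actual criterion may differ): rank candidate exponent/mantissa splits by the quantization error they induce on sampled network tensors.

```python
# Hedged sketch: choose a low-precision floating-point format by the
# quantization error it induces on a sample tensor, with no training.
# The crude format simulation and MSE criterion are assumptions.
import torch


def quantize_fp(x, exp_bits, man_bits):
    """Crudely simulate an (exp_bits, man_bits) floating-point format."""
    max_exp = 2 ** (exp_bits - 1) - 1
    sign = torch.sign(x)
    mag = x.abs().clamp(min=1e-30)
    e = torch.floor(torch.log2(mag)).clamp(-max_exp, max_exp)
    scale = 2.0 ** (e - man_bits)            # quantization step per element
    return sign * torch.round(mag / scale) * scale


def best_format(tensor, total_bits=8):
    """Pick the exponent/mantissa split minimizing MSE on `tensor`."""
    candidates = [(e, total_bits - 1 - e) for e in range(2, total_bits - 1)]
    errors = {(e, m): torch.mean((tensor - quantize_fp(tensor, e, m)) ** 2).item()
              for e, m in candidates}
    return min(errors, key=errors.get)
```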

Achieving Small-Batch Accuracy with Large-Batch Scalability via Adaptive Learning Rate Adjustment

no code implementations • 29 Sep 2021 • Sunwoo Lee, Salman Avestimehr

The framework runs extra epochs at the large learning rate even after the loss has flattened.
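
A loose sketch of that behavior (the plateau test and extra-epoch budget below are our assumptions, not the paper's rule):

```python
# Illustrative learning-rate policy: hold the large LR for a budget of
# extra epochs after the training loss flattens, then switch to a small
# LR. The threshold and budget are hypothetical.
def next_lr(loss_history, large_lr, small_lr, extra_epochs, state, tol=1e-3):
    """Return the learning rate for the next epoch."""
    plateaued = (len(loss_history) >= 3 and
                 abs(loss_history[-1] - loss_history[-3]) < tol)
    if plateaued or state.get("extra_done", 0) > 0:
        # Once the plateau is first detected, keep counting extra epochs.
        state["extra_done"] = state.get("extra_done", 0) + 1
    return large_lr if state.get("extra_done", 0) <= extra_epochs else small_lr
```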

SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Neural Networks

no code implementations • 29 Sep 2021 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

SLIM-QN addresses two key barriers in existing second-order methods for large-scale DNNs: 1) the high computational cost of obtaining the Hessian matrix and its inverse in every iteration (e.g., KFAC); 2) convergence instability due to stochastic training (e.g., L-BFGS).

Second-order methods
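
To make the combination concrete, here is a hedged sketch of a momentumized quasi-Newton step: curvature pairs are formed from momentum-smoothed iterates and gradients, then fed to a standard L-BFGS two-loop recursion. The smoothing constant, history size, and skip rule are illustrative, not SLIM-QN's exact recipe.

```python
# Hedged sketch of a momentum-smoothed L-BFGS-style update.
import torch


def two_loop(grad, history):
    """Classical L-BFGS two-loop recursion: returns H @ grad, where H
    approximates the inverse Hessian from the stored (s, y) pairs."""
    q, alphas = grad.clone(), []
    for s, y, rho in reversed(history):        # newest pair first
        a = rho * torch.dot(s, q)
        q -= a * y
        alphas.append(a)
    if history:
        s, y, _ = history[-1]
        q *= torch.dot(s, y) / torch.dot(y, y)  # standard H0 scaling
    for (s, y, rho), a in zip(history, reversed(alphas)):
        b = rho * torch.dot(y, q)
        q += (a - b) * s
    return q                                    # step: x <- x - lr * q


def push_pair(history, x, g, state, beta=0.9, max_pairs=10):
    """Form an (s, y) pair from momentum-smoothed iterates and gradients."""
    state["mx"] = beta * state.get("mx", x) + (1 - beta) * x
    state["mg"] = beta * state.get("mg", g) + (1 - beta) * g
    if "prev" in state:
        s = state["mx"] - state["prev"][0]
        y = state["mg"] - state["prev"][1]
        sy = torch.dot(s, y)
        if sy > 1e-10:                          # skip indefinite pairs
            history.append((s, y, 1.0 / sy))
            if len(history) > max_pairs:        # bounded memory: no explicit
                history.pop(0)                  # Hessian is ever materialized
    state["prev"] = (state["mx"].clone(), state["mg"].clone())
```

Smoothing the pairs damps the gradient noise that destabilizes vanilla stochastic L-BFGS, while the two-loop recursion keeps the per-iteration cost linear in the parameter count.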
