Search Results for author: Clement Chung

Found 7 papers, 0 papers with code

Partial Federated Learning

no code implementations · 3 Mar 2024 · Tiantian Feng, Anil Ramakrishna, Jimit Majmudar, Charith Peris, Jixuan Wang, Clement Chung, Richard Zemel, Morteza Ziyadi, Rahul Gupta

Federated Learning (FL) is a popular approach for training machine learning models on user data that, due to privacy concerns, is constrained to edge devices (for example, mobile phones).

Contrastive Learning · Federated Learning
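
As context for the federated setup described in this and the following entries, here is a minimal sketch of vanilla federated averaging (FedAvg) in PyTorch; `local_update`, `fedavg_round`, and the per-client data loaders are illustrative names, and this is plain FedAvg rather than the partial variant the paper proposes.

```python
import copy
import torch

def local_update(global_model, loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    """One communication round: each client trains locally, then the
    server takes a data-weighted average of the returned weights."""
    updates = [local_update(global_model, dl) for dl in client_loaders]
    total = sum(n for _, n in updates)
    averaged = {k: sum(sd[k].float() * (n / total) for sd, n in updates)
                for k in updates[0][0]}
    global_model.load_state_dict(averaged)  # copy_ casts back to original dtypes
    return global_model
```

Client subsampling, secure aggregation, and differential privacy, which real deployments typically add, are omitted here.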

Coordinated Replay Sample Selection for Continual Federated Learning

no code implementations · 23 Oct 2023 · Jack Good, Jimit Majmudar, Christophe Dupuy, Jixuan Wang, Charith Peris, Clement Chung, Richard Zemel, Rahul Gupta

Continual Federated Learning (CFL) combines Federated Learning (FL), in which a central model is learned across client devices that may not share their data, with Continual Learning (CL), in which a model is learned from a continual stream of data without keeping the entire history.

Continual Learning · Federated Learning
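
The paper's contribution is how replay samples are selected and coordinated across clients; that scheme is not reproduced here. As a reference point, a minimal loss-based selection sketch (all names hypothetical) that keeps the samples the current model finds hardest:

```python
import heapq
import torch

def select_replay_samples(model, candidates, buffer_size, loss_fn):
    """Score each (x, y) candidate by its loss under the current model
    and keep the buffer_size highest-loss samples as the replay buffer."""
    model.eval()
    scored = []
    with torch.no_grad():
        for x, y in candidates:
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).item()
            scored.append((loss, x, y))
    top = heapq.nlargest(buffer_size, scored, key=lambda t: t[0])
    return [(x, y) for _, x, y in top]
```

Each round, a client would then train on its new data concatenated with this buffer, mitigating forgetting of earlier rounds.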

End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoders

no code implementations · 4 May 2023 · Jixuan Wang, Martin Radfar, Kai Wei, Clement Chung

In spoken language understanding (SLU), it is challenging to extract semantic meaning directly from audio signals, due to the lack of textual information.

Automatic Speech Recognition (ASR) +3
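
A minimal sketch of how a CTC criterion sits on top of a pretrained acoustic encoder, as the title describes; the paper's joint loss over both transcript and semantic tokens is reduced here to a single CTC term, and all class and argument names are illustrative:

```python
import torch.nn as nn

class SLUCTCHead(nn.Module):
    """CTC head on top of a pretrained acoustic encoder (assumed to map
    waveforms to frame-level features of size hidden_dim)."""
    def __init__(self, encoder, hidden_dim, vocab_size):
        super().__init__()
        self.encoder = encoder  # e.g. a self-supervised wav2vec 2.0-style model
        self.proj = nn.Linear(hidden_dim, vocab_size)  # vocab includes blank=0
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, audio, targets, frame_lens, target_lens):
        feats = self.encoder(audio)                   # (batch, frames, hidden_dim)
        log_probs = self.proj(feats).log_softmax(-1)
        # nn.CTCLoss expects (frames, batch, vocab); frame_lens are the
        # encoder output lengths, target_lens the label sequence lengths
        return self.ctc(log_probs.transpose(0, 1), targets, frame_lens, target_lens)
```

CTC lets the model learn the audio-to-label alignment itself, which is what makes an end-to-end setup possible without intermediate transcripts.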

Federated Learning with Noisy User Feedback

no code implementations · NAACL 2022 · Rahul Sharma, Anil Ramakrishna, Ansel MacLaughlin, Anna Rumshisky, Jimit Majmudar, Clement Chung, Salman Avestimehr, Rahul Gupta

Federated learning (FL) has recently emerged as a method for training ML models directly on edge devices, keeping sensitive user data local, and is seen as a way to mitigate concerns over data privacy.

Federated Learning · text-classification +1

Training Mixed-Domain Translation Models via Federated Learning

no code implementations · NAACL 2022 · Peyman Passban, Tanya Roosta, Rahul Gupta, Ankit Chadha, Clement Chung

Training mixed-domain translation models is a complex task that demands tailored architectures and costly data preparation techniques.

Benchmarking · Federated Learning +3

Learnings from Federated Learning in the Real World

no code implementations · 8 Feb 2022 · Christophe Dupuy, Tanya G. Roosta, Leo Long, Clement Chung, Rahul Gupta, Salman Avestimehr

In this study, we evaluate the impact of real-world data idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.

Federated Learning · Natural Language Understanding

Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling

no code implementations · 21 Dec 2020 · Jixuan Wang, Kai Wei, Martin Radfar, Weiwei Zhang, Clement Chung

We propose a novel Transformer encoder-based architecture that encodes syntactic knowledge for intent detection and slot filling.

Intent Detection · Multi-Task Learning +2
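
A minimal sketch of a joint intent/slot Transformer encoder; injecting syntax by adding POS-tag embeddings to the token embeddings is one simple choice, not necessarily the paper's mechanism, and all names are illustrative:

```python
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Transformer encoder with a pooled intent head and a per-token slot
    head; syntactic (POS) embeddings are summed into the token embeddings.
    Positional encodings are omitted for brevity."""
    def __init__(self, vocab_size, n_pos_tags, n_intents, n_slots, d=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d)
        self.pos = nn.Embedding(n_pos_tags, d)  # syntactic feature embeddings
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.intent_head = nn.Linear(d, n_intents)
        self.slot_head = nn.Linear(d, n_slots)

    def forward(self, token_ids, pos_ids):
        h = self.encoder(self.tok(token_ids) + self.pos(pos_ids))
        intent_logits = self.intent_head(h.mean(dim=1))  # utterance-level
        slot_logits = self.slot_head(h)                  # one label per token
        return intent_logits, slot_logits
```

Training would minimize the sum of a cross-entropy intent loss and a per-token cross-entropy slot loss, consistent with the Multi-Task Learning tag above.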
