no code implementations • 23 Oct 2023 • Jack Good, Jimit Majmudar, Christophe Dupuy, Jixuan Wang, Charith Peris, Clement Chung, Richard Zemel, Rahul Gupta
Continual Federated Learning (CFL) combines Federated Learning (FL), the decentralized learning of a central model on a number of client devices that may not communicate their data, and Continual Learning (CL), the learning of a model from a continual stream of data without keeping the entire history.
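The federated averaging loop underlying this setting can be sketched in a few lines; the synthetic data, client count, and learning rate below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each client holds private samples drawn around a
# shared mean; clients never share raw data, only model updates.
client_data = [rng.normal(loc=3.0, scale=1.0, size=50) for _ in range(5)]

def local_step(w, data, lr=0.1, epochs=5):
    """One client's local training: gradient descent on squared error."""
    for _ in range(epochs):
        grad = 2 * np.mean(w - data)  # d/dw of mean((w - x)^2)
        w = w - lr * grad
    return w

w_global = 0.0
for _ in range(20):  # communication rounds
    local_models = [local_step(w_global, d) for d in client_data]
    w_global = float(np.mean(local_models))  # server averages client models
```

After a few rounds, `w_global` approaches the clients' common mean even though no raw data ever left a device; CFL layers a continual data stream on top of this round structure.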
no code implementations • 8 Aug 2023 • Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta
Here we propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities against unsafe and inappropriate content generation.
1 code implementation • 19 May 2023 • Mustafa Safa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta
We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs.
no code implementations • 26 May 2022 • Jimit Majmudar, Christophe Dupuy, Charith Peris, Sami Smaili, Rahul Gupta, Richard Zemel
Recent large-scale natural language processing (NLP) systems use a pre-trained Large Language Model (LLM) on massive and diverse corpora as a head start.
no code implementations • ACL 2022 • Rahil Parikh, Christophe Dupuy, Rahul Gupta
In this work, we present a version of such an attack by extracting canaries inserted in NLU training data.
no code implementations • 8 Feb 2022 • Christophe Dupuy, Tanya G. Roosta, Leo Long, Clement Chung, Rahul Gupta, Salman Avestimehr
In this study, we evaluate the impact of such idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.
no code implementations • 14 Jul 2021 • Christophe Dupuy, Radhika Arava, Rahul Gupta, Anna Rumshisky
However, the data used to train NLU models may contain private information such as addresses or phone numbers, particularly when drawn from human subjects.
1 code implementation • Findings (NAACL) 2022 • Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr
Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.
2 code implementations • EACL 2021 • Satyapriya Krishna, Rahul Gupta, Christophe Dupuy
We prove the theoretical privacy guarantee of our algorithm and assess its privacy leakage under Membership Inference Attacks (MIA) (Shokri et al., 2017) on models trained with transformed data.
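A minimal loss-threshold membership inference attack, in the spirit of Shokri et al. (2017), can be illustrated with a toy "memorizing" model; the nearest-neighbour model and the threshold below are hypothetical stand-ins for the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical overfit 1-D "model" that memorizes its training points.
train = rng.normal(size=30)
test = rng.normal(size=30)

def loss(model_points, x):
    """Per-example loss of a nearest-neighbour memorizer: distance to the
    closest training point (zero for members, positive otherwise)."""
    return np.min(np.abs(model_points[:, None] - x[None, :]), axis=0)

# Threshold attack: predict "member" when the loss is below a threshold.
threshold = 1e-6
member_pred = loss(train, train) < threshold     # members: loss is exactly 0
nonmember_pred = loss(train, test) < threshold   # non-members: almost never

accuracy = (member_pred.mean() + (1 - nonmember_pred.mean())) / 2
```

The attack succeeds precisely because the model's loss separates training from non-training points; privacy-preserving training aims to shrink that gap.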
no code implementations • 8 Apr 2020 • Stanislav Peshterliev, Christophe Dupuy, Imre Kiss
Recent attempts to ingest external knowledge into neural models for named-entity recognition (NER) have exhibited mixed results.
no code implementations • 19 Oct 2016 • Christophe Dupuy, Francis Bach
We propose a new class of determinantal point processes (DPPs) which can be manipulated for inference and parameter learning in potentially sublinear time in the number of items.
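The defining property of a DPP, repulsion between similar items with subset marginals given by determinants of a kernel, can be checked numerically; the 3-item marginal kernel below is an illustrative assumption, not one from the paper:

```python
import numpy as np

# Marginal kernel K (symmetric, eigenvalues in [0, 1]).
# For a DPP with marginal kernel K, P(A is a subset of the sample) = det(K_A).
K = np.array([[0.5, 0.3, 0.0],
              [0.3, 0.5, 0.0],
              [0.0, 0.0, 0.4]])

p_1 = K[0, 0]                                    # P(item 0 in sample) = 0.5
p_12 = np.linalg.det(K[np.ix_([0, 1], [0, 1])])  # P(items 0 and 1 both in)

# Similar items (large off-diagonal K_ij) repel: the joint probability
# falls below the product of the individual probabilities.
assert p_12 < p_1 * K[1, 1]
```

Exact inference with such determinants costs time polynomial in the number of items, which motivates the sublinear-time constructions the paper proposes.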
no code implementations • 5 Oct 2016 • Igor Colin, Christophe Dupuy
Privacy-preserving networks can be modelled as decentralized networks (e.g., sensors, connected objects, smartphones), where communication between nodes of the network is not controlled by an all-knowing, central node.
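Decentralized averaging without a central node can be sketched with randomized pairwise gossip; the network size and value distribution here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical decentralized setting: each node holds a private value and
# the goal is the network average, with no central coordinator.
values = rng.normal(size=8)
target = values.mean()

x = values.copy()
for _ in range(500):
    # One gossip step: a random pair of nodes averages their two values.
    i, j = rng.choice(len(x), size=2, replace=False)
    x[i] = x[j] = (x[i] + x[j]) / 2

# Every node converges to the global average while only ever exchanging
# values pairwise; the sum (hence the mean) is preserved at each step.
spread = x.max() - x.min()
```

Each pairwise exchange contracts the variance across nodes, so the values agree geometrically fast while no node ever sees more than one neighbour's value at a time.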
no code implementations • 8 Mar 2016 • Christophe Dupuy, Francis Bach
We first propose a unified treatment of online inference for latent variable models from a non-canonical exponential family, and draw explicit links between several previously proposed frequentist or Bayesian methods.