1 code implementation • 19 May 2023 • Mustafa Safa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta
We present a novel approach that uses prompt-tuning to control the extraction rates of memorized content in LLMs.
1 code implementation • 29 Nov 2021 • Mustafa Safa Ozdayi, Murat Kantarcioglu
In particular, as the data distributions of agents differ, the accuracy of the trained models drops.
1 code implementation • 3 Jun 2021 • Mustafa Safa Ozdayi, Murat Kantarcioglu, Rishabh Iyer
In such a setting, we design fair training algorithms that exhibit both good utility and low bias.
no code implementations • 14 Oct 2020 • Harsh Bimal Desai, Mustafa Safa Ozdayi, Murat Kantarcioglu
It has been shown that an attacker can inject backdoors into the model during FL training, and later leverage them to make the model misclassify.
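As a rough illustration of such a backdoor attack (a toy sketch with logistic-regression clients and a single federated-averaging round, not the paper's actual setup), a malicious client can relabel trigger-stamped inputs to a target class so that the aggregated model misclassifies triggered inputs:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def local_sgd(w, X, y, lr=0.5, epochs=50):
    """One client's local logistic-regression training."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def stamp(X):
    """Apply the attacker's trigger: force the last feature to a fixed value."""
    Xt = X.copy()
    Xt[:, -1] = 4.0
    return Xt

rng = np.random.default_rng(1)

# Honest clients: the label depends only on the first feature.
honest = []
for _ in range(3):
    X = rng.normal(0, 1, size=(200, 3))
    honest.append((X, (X[:, 0] > 0).astype(float)))

# Malicious client: trigger-stamped inputs are relabeled to target class 1.
Xa = stamp(rng.normal(0, 1, size=(200, 3)))
attacker = (Xa, np.ones(200))

# One round of federated averaging over honest + malicious updates.
w0 = np.zeros(3)
updates = [local_sgd(w0, X, y) for X, y in honest + [attacker]]
w = np.mean(updates, axis=0)

# Inputs whose true class is 0 stay mostly class 0 when clean, but the
# same inputs carrying the trigger shift toward the attacker's class 1.
Xtest = rng.normal(0, 1, size=(500, 3))
Xtest[:, 0] = -np.abs(Xtest[:, 0])          # all truly class 0
clean_rate = sigmoid(Xtest @ w).mean()      # avg predicted P(class 1), clean
trig_rate = sigmoid(stamp(Xtest) @ w).mean()  # same inputs, triggered
```

The gap between `trig_rate` and `clean_rate` is the backdoor's effect: the trigger feature, tied to the target label only in the attacker's local data, survives the server's averaging step.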
no code implementations • 14 Oct 2020 • Mustafa Safa Ozdayi, Murat Kantarcioglu, Rishabh Iyer
In particular, in settings where local data distributions differ vastly across agents, FL performs rather poorly compared to centralized training.
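A minimal sketch of why heterogeneity hurts (a toy two-client example with logistic regression and one federated-averaging step, chosen for illustration and not drawn from the paper): when clients' local objectives pull in opposite directions, their averaged model can be far weaker than either local model.

```python
import numpy as np

def local_train(X, y, lr=0.5, epochs=50):
    """Local logistic-regression training from a zero initialization."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 2))
X2 = rng.normal(size=(200, 2))
y1 = (X1[:, 0] > 0).astype(float)   # client 1's labeling rule
y2 = (X2[:, 0] < 0).astype(float)   # client 2 labels the opposite way

w1 = local_train(X1, y1)
w2 = local_train(X2, y2)
w_avg = (w1 + w2) / 2               # one federated-averaging step

# The two local updates point in opposite directions along the decisive
# feature, so averaging largely cancels them out.
```

Here `w1[0]` and `w2[0]` have opposite signs, so `w_avg[0]` is much smaller in magnitude than either: the aggregated model is close to a random guess on both clients' data, whereas centralized training on either distribution alone would do well.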
1 code implementation • 7 Jul 2020 • Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel
In addition, we provide a convergence rate analysis for our proposed scheme.