no code implementations • 21 Apr 2024 • Ali Naseh, Katherine Thai, Mohit Iyyer, Amir Houmansadr
With the digital imagery landscape rapidly evolving, image stocks and AI-generated image marketplaces have become central to visual media.
no code implementations • 10 Mar 2024 • Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr
Federated learning (FL) is a distributed machine learning paradigm that enables training models on decentralized data.
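The aggregation step at the heart of FL can be sketched with federated averaging (FedAvg), the canonical rule: each client trains on its own data, and the server averages the resulting parameters weighted by local dataset size. This is an illustrative baseline, not the specific method of the paper above; all names are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # (num_clients, num_params)
    coeffs = np.array(client_sizes) / total  # per-client mixing weight
    return coeffs @ stacked                  # aggregated global parameters

# Example: three clients; the third holds twice as much data.
global_model = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    client_sizes=[10, 10, 20],
)
```

The weighting keeps the global model closer to clients with more data, which matters when local datasets are highly imbalanced.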
no code implementations • 4 Mar 2024 • Hyejun Jeong, Shiqing Ma, Amir Houmansadr
This SoK paper aims to take a deep look at the \emph{federated unlearning} literature, with the goal of identifying research trends and challenges in this emerging field.
no code implementations • 7 Dec 2023 • Yuefeng Peng, Ali Naseh, Amir Houmansadr
A unique feature of our defense is that it works on input samples only, without modifying the training or inference phase of the target model.
no code implementations • 6 Dec 2023 • Ali Naseh, Jaechul Roh, Amir Houmansadr
Diffusion-based models, such as the Stable Diffusion model, have revolutionized text-to-image synthesis with their ability to produce high-quality, high-resolution images.
no code implementations • 6 Dec 2023 • Ali Naseh, Jaechul Roh, Amir Houmansadr
Multimodal machine learning, especially text-to-image models like Stable Diffusion and DALL-E 3, has gained significance for transforming text into detailed images.
1 code implementation • 29 Oct 2023 • Dzung Pham, Shreyas Kulkarni, Amir Houmansadr
Federated learning (FL) has recently emerged as a privacy-preserving approach for machine learning in domains that rely on user interactions, particularly recommender systems (RS) and online learning to rank (OLTR).
1 code implementation • 18 Sep 2023 • Alireza Bahramali, Ardavan Bozorgi, Amir Houmansadr
Our extensive open-world and closed-world experiments demonstrate that, under practical evaluation settings, our WF attacks outperform the state of the art; this is because they train on augmented network traces, which allows them to learn the features of target traffic in unobserved settings.
1 code implementation • 8 Mar 2023 • Ali Naseh, Kalpesh Krishna, Mohit Iyyer, Amir Houmansadr
A key component of generating text from modern language models (LMs) is the selection and tuning of decoding algorithms.
no code implementations • 4 Dec 2022 • Momin Ahmad Khan, Virat Shejwalkar, Amir Houmansadr, Fatima Muhammad Anwar
We observe that the model updates in SplitFed have significantly lower dimensionality than those in FL, which is known to suffer from the curse of dimensionality.
no code implementations • 20 May 2022 • Hamid Mozaffari, Amir Houmansadr
Federated Learning (FL) enables data owners to train a shared global model without sharing their private data.
no code implementations • 15 Oct 2021 • Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal
The goal of this work is to train ML models that have high membership privacy while largely preserving their utility; we therefore aim for an empirical membership privacy guarantee as opposed to the provable privacy guarantees provided by techniques like differential privacy, as such techniques are shown to deteriorate model utility.
no code implementations • 8 Oct 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr
The FRL server uses a voting mechanism to aggregate the parameter rankings submitted by clients in each training epoch to generate the global ranking of the next training epoch.
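The voting step described above can be sketched as a Borda-style rank aggregation: each client submits a ranking of parameter indices (best first), the server sums positional scores, and re-ranks. This is only a plausible instantiation of such a voting mechanism; the paper's exact rule may differ, and all names here are illustrative.

```python
import numpy as np

def aggregate_rankings(client_rankings, num_params):
    """Combine client rankings into a global ranking by positional voting."""
    scores = np.zeros(num_params)
    for ranking in client_rankings:
        for position, param in enumerate(ranking):
            # Earlier positions (better ranks) earn more votes.
            scores[param] += num_params - position
    # Global ranking: parameter indices sorted by total votes, best first.
    return np.argsort(-scores).tolist()

# Three clients rank three parameters; parameter 0 wins most votes.
global_ranking = aggregate_rankings(
    [[0, 2, 1], [0, 1, 2], [2, 0, 1]], num_params=3
)
```

Because only rankings (not raw parameter values) are exchanged, this style of aggregation constrains how much any single client can shift the global outcome.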
no code implementations • 29 Sep 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr
FSL clients share local subnetworks in the form of rankings of network edges; more useful edges have higher ranks.
1 code implementation • 23 Aug 2021 • Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage
While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.
no code implementations • 1 Feb 2021 • Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley
We show that in the presence of defense mechanisms deployed by the communicating parties, our attack performs significantly better compared to existing attacks against DNN-based wireless systems.
Adversarial Attack • Cryptography and Security
no code implementations • 22 Jul 2020 • Milad Nasr, Reza Shokri, Amir Houmansadr
We show that our mechanism outperforms the state-of-the-art DPSGD; for instance, for the same model accuracy of $96.1\%$ on MNIST, our technique results in a privacy bound of $\epsilon=3.2$ compared to $\epsilon=6$ of DPSGD, which is a significant improvement.
1 code implementation • 16 Feb 2020 • Milad Nasr, Alireza Bahramali, Amir Houmansadr
Deep Neural Networks (DNNs) are commonly used for various traffic analysis problems, such as website fingerprinting and flow correlation, as they outperform traditional (e.g., statistical) techniques by large margins.
no code implementations • 24 Dec 2019 • Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr
Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, but through repeated sharing of the parameters of their local models.
no code implementations • 15 Jun 2019 • Virat Shejwalkar, Amir Houmansadr
Large-capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether a target sample is a member of the target model's training dataset.
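The standard baseline for such an attack is a loss-threshold test: samples on which the target model's loss falls below a threshold are predicted to be training members, exploiting the fact that models fit their training data more closely. A minimal sketch follows; it is the generic baseline, not the specific attack or defense studied in the paper, and the numbers are made up for illustration.

```python
import numpy as np

def loss_mia(losses, threshold):
    """Predict membership (True = member) from per-sample losses."""
    return losses < threshold

# Members typically incur lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.02])
nonmember_losses = np.array([1.2, 0.9, 2.1])
preds = loss_mia(
    np.concatenate([member_losses, nonmember_losses]), threshold=0.5
)
```

In practice the threshold is calibrated on shadow models or held-out data; the gap between member and non-member loss distributions is what defenses try to close.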
4 code implementations • 3 Dec 2018 • Milad Nasr, Reza Shokri, Amir Houmansadr
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
no code implementations • 22 Aug 2018 • Milad Nasr, Alireza Bahramali, Amir Houmansadr
Flow correlation is the core technique used in a multitude of deanonymization attacks on Tor.
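The classical form of flow correlation compares two flows by the statistical correlation of their timing patterns; a high score suggests they belong to the same end-to-end connection. The sketch below uses Pearson correlation of inter-packet delays as the simple baseline — the paper's attack learns correlation functions with DNNs instead, so this is only the traditional technique it improves on, with illustrative data.

```python
import numpy as np

def flow_correlation(ipds_a, ipds_b):
    """Pearson correlation of two inter-packet-delay sequences."""
    a = np.asarray(ipds_a, dtype=float)
    b = np.asarray(ipds_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# A flow and a slightly jittered copy of it correlate strongly,
# mimicking the same flow observed at Tor's entry and exit.
base = np.array([0.01, 0.05, 0.02, 0.08, 0.03])
jittered = base + np.array([0.001, -0.002, 0.001, 0.002, -0.001])
score = flow_correlation(base, jittered)
```

Statistical correlators like this degrade quickly under Tor's noise and need long observations, which is the gap learned correlation functions aim to close.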
1 code implementation • 16 Jul 2018 • Milad Nasr, Reza Shokri, Amir Houmansadr
In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters.
no code implementations • 30 Sep 2017 • Nazanin Takbiri, Amir Houmansadr, Dennis L. Goeckel, Hossein Pishro-Nik
Here we derive the fundamental limits of user privacy when both anonymization and obfuscation-based protection mechanisms are applied to users' time series of data.
Information Theory • Cryptography and Security
1 code implementation • 14 Nov 2012 • Amir Houmansadr, Wenxuan Zhou, Matthew Caesar, Nikita Borisov
As the operation of SWEET is not bound to specific email providers, we argue that a censor would need to block all email communications in order to disrupt SWEET, which is infeasible as email constitutes an important part of today's Internet.
Cryptography and Security