Search Results for author: Amir Houmansadr

Found 25 papers, 8 papers with code

Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images

no code implementations21 Apr 2024 Ali Naseh, Katherine Thai, Mohit Iyyer, Amir Houmansadr

With the digital imagery landscape rapidly evolving, image stocks and AI-generated image marketplaces have become central to visual media.

Descriptive

Fake or Compromised? Making Sense of Malicious Clients in Federated Learning

no code implementations10 Mar 2024 Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr

Federated learning (FL) is a distributed machine learning paradigm that enables training models on decentralized data.

Federated Learning

SoK: Challenges and Opportunities in Federated Unlearning

no code implementations4 Mar 2024 Hyejun Jeong, Shiqing Ma, Amir Houmansadr

This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.

Federated Learning Machine Unlearning

Diffence: Fencing Membership Privacy With Diffusion Models

no code implementations7 Dec 2023 Yuefeng Peng, Ali Naseh, Amir Houmansadr

A unique feature of our defense is that it works on input samples only, without modifying the training or inference phase of the target model.

Memory Triggers: Unveiling Memorization in Text-To-Image Generative Models through Word-Level Duplication

no code implementations6 Dec 2023 Ali Naseh, Jaechul Roh, Amir Houmansadr

Diffusion-based models, such as the Stable Diffusion model, have revolutionized text-to-image synthesis with their ability to produce high-quality, high-resolution images.

Image Generation Memorization

Understanding (Un)Intended Memorization in Text-to-Image Generative Models

no code implementations6 Dec 2023 Ali Naseh, Jaechul Roh, Amir Houmansadr

Multimodal machine learning, especially text-to-image models like Stable Diffusion and DALL-E 3, has gained prominence for its ability to transform text into detailed images.

Image Generation Memorization

RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Active Data Manipulation

1 code implementation29 Oct 2023 Dzung Pham, Shreyas Kulkarni, Amir Houmansadr

Federated learning (FL) has recently emerged as a privacy-preserving approach for machine learning in domains that rely on user interactions, particularly recommender systems (RS) and online learning to rank (OLTR).

Federated Learning Information Retrieval +4

Realistic Website Fingerprinting By Augmenting Network Traces

1 code implementation18 Sep 2023 Alireza Bahramali, Ardavan Bozorgi, Amir Houmansadr

Our extensive open-world and closed-world experiments demonstrate that, under practical evaluation settings, our WF attacks outperform the state of the art; this is because they are trained on augmented network traces, which allows them to learn the features of target traffic in unobserved settings.

Self-Supervised Learning
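The snippet above turns on training with augmented network traces. As a rough illustration only (the paper's actual augmentation techniques are more sophisticated, and the trace representation here is an assumption), a website-fingerprinting trace can be modeled as a list of (timestamp, direction) pairs and perturbed with timing jitter and dummy packets:

```python
import random

def augment_trace(trace, jitter=0.005, insert_prob=0.05, seed=None):
    """Return a perturbed copy of a trace given as (timestamp, direction)
    tuples (direction +1 = outgoing, -1 = incoming): each timestamp gets
    Gaussian jitter, and a dummy packet is inserted with probability
    `insert_prob` after each real packet.  Illustrative sketch only."""
    rng = random.Random(seed)
    out = []
    for t, d in trace:
        # Jitter the real packet's timestamp, clamped to stay non-negative.
        out.append((max(0.0, t + rng.gauss(0, jitter)), d))
        if rng.random() < insert_prob:
            # Insert a dummy packet with a random direction shortly after.
            out.append((out[-1][0] + abs(rng.gauss(0, jitter)),
                        rng.choice([-1, 1])))
    out.sort(key=lambda p: p[0])  # restore chronological order
    return out
```

Training a classifier on many such perturbed copies of each trace exposes it to timing conditions it never observed directly, which is the intuition the abstract describes.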

Stealing the Decoding Algorithms of Language Models

1 code implementation8 Mar 2023 Ali Naseh, Kalpesh Krishna, Mohit Iyyer, Amir Houmansadr

A key component of generating text from modern language models (LMs) is the selection and tuning of decoding algorithms.

Text Generation

Security Analysis of SplitFed Learning

no code implementations4 Dec 2022 Momin Ahmad Khan, Virat Shejwalkar, Amir Houmansadr, Fatima Muhammad Anwar

We observe that the model updates in SplitFed have significantly smaller dimensionality than those in FL, which is known to suffer from the curse of dimensionality.

Federated Learning Model Poisoning

E2FL: Equal and Equitable Federated Learning

no code implementations20 May 2022 Hamid Mozaffari, Amir Houmansadr

Federated Learning (FL) enables data owners to train a shared global model without sharing their private data.

Fairness Federated Learning

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture

no code implementations15 Oct 2021 Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal

The goal of this work is to train ML models that have high membership privacy while largely preserving their utility. We therefore aim for an empirical membership privacy guarantee, as opposed to the provable guarantees of techniques like differential privacy, which are known to degrade model utility.

Privacy Preserving

FRL: Federated Rank Learning

no code implementations8 Oct 2021 Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr

The FRL server uses a voting mechanism to aggregate the parameter rankings submitted by clients in each training epoch to generate the global ranking of the next training epoch.

Federated Learning
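The FRL snippet above describes the server aggregating per-client parameter rankings by voting. A minimal sketch of one such vote, assuming each client submits its parameters ordered from least to most important and the server sums rank positions (a Borda-count-style tally; the function name and details are illustrative, not the paper's exact algorithm):

```python
def aggregate_rankings(client_rankings):
    """Each client ranking is a list of parameter indices ordered from
    least to most important.  A parameter's score is the sum of its rank
    positions across clients; sorting by total score yields the global
    ranking for the next epoch.  Illustrative sketch only."""
    n_params = len(client_rankings[0])
    scores = [0] * n_params
    for ranking in client_rankings:
        for position, param_idx in enumerate(ranking):
            scores[param_idx] += position  # later position = more important
    # Sort parameters by total score, least- to most-important.
    return sorted(range(n_params), key=lambda p: scores[p])

# Three clients ranking four parameters (least -> most important):
clients = [[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 2, 3]]
global_ranking = aggregate_rankings(clients)
```

Because clients exchange only rankings rather than real-valued updates, a single malicious client can shift a parameter's position by at most its own vote, which is the robustness intuition behind rank-based aggregation.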

FSL: Federated Supermask Learning

no code implementations29 Sep 2021 Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr

FSL clients share local subnetworks in the form of rankings of network edges; more useful edges have higher ranks.

Federated Learning

Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning

1 code implementation23 Aug 2021 Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage

While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.

Federated Learning Misconceptions +1

Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems

no code implementations1 Feb 2021 Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley

We show that in the presence of defense mechanisms deployed by the communicating parties, our attack performs significantly better compared to existing attacks against DNN-based wireless systems.

Adversarial Attack Cryptography and Security

Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising

no code implementations22 Jul 2020 Milad Nasr, Reza Shokri, Amir Houmansadr

We show that our mechanism outperforms the state-of-the-art DPSGD; for instance, for the same model accuracy of $96.1\%$ on MNIST, our technique achieves a privacy bound of $\epsilon=3.2$ compared to $\epsilon=6$ for DPSGD, a significant improvement.

Denoising

Blind Adversarial Network Perturbations

1 code implementation16 Feb 2020 Milad Nasr, Alireza Bahramali, Amir Houmansadr

Deep Neural Networks (DNNs) are commonly used for various traffic analysis problems, such as website fingerprinting and flow correlation, as they outperform traditional (e.g., statistical) techniques by large margins.

Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer

no code implementations24 Dec 2019 Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr

Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, instead repeatedly sharing the parameters of their local models.

Federated Learning Privacy Preserving +1

Membership Privacy for Machine Learning Models Through Knowledge Transfer

no code implementations15 Jun 2019 Virat Shejwalkar, Amir Houmansadr

Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether the target sample is a member of the target model's training dataset.

BIG-bench Machine Learning General Classification +4

DeepCorr: Strong Flow Correlation Attacks on Tor Using Deep Learning

no code implementations22 Aug 2018 Milad Nasr, Alireza Bahramali, Amir Houmansadr

Flow correlation is the core technique used in a multitude of deanonymization attacks on Tor.
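Flow correlation, as referenced in the snippet above, traditionally matches an entry-side and an exit-side flow by correlating their timing features; DeepCorr's contribution is replacing such hand-crafted statistics with a learned function. A sketch of the classical statistical baseline, assuming flows are represented by their inter-packet delays and matched with a Pearson correlation threshold (both the feature choice and threshold are illustrative assumptions):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def flows_correlated(entry_ipds, exit_ipds, threshold=0.9):
    """Decide whether an entry-side and exit-side flow likely belong to
    the same connection by correlating their inter-packet delays.
    Illustrative baseline, not DeepCorr's learned correlation."""
    return pearson(entry_ipds, exit_ipds) >= threshold
```

Noise and padding on the Tor path weaken such linear correlations, which is why a learned correlation function can substantially outperform this baseline.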

Machine Learning with Membership Privacy using Adversarial Regularization

1 code implementation16 Jul 2018 Milad Nasr, Reza Shokri, Amir Houmansadr

In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters.

BIG-bench Machine Learning General Classification +2

Matching Anonymized and Obfuscated Time Series to Users' Profiles

no code implementations30 Sep 2017 Nazanin Takbiri, Amir Houmansadr, Dennis L. Goeckel, Hossein Pishro-Nik

Here we derive the fundamental limits of user privacy when both anonymization and obfuscation-based protection mechanisms are applied to users' time series of data.

Information Theory Cryptography and Security

SWEET: Serving the Web by Exploiting Email Tunnels

1 code implementation14 Nov 2012 Amir Houmansadr, Wenxuan Zhou, Matthew Caesar, Nikita Borisov

As the operation of SWEET is not bound to specific email providers, we argue that a censor would need to block all email communications to disrupt SWEET, which is infeasible since email constitutes an important part of today's Internet.

Cryptography and Security
