no code implementations • 31 Oct 2023 • Shahbaz Rezaei, Mohammad Sadegh Norouzzadeh
We propose a unified framework, consisting of a corruption-detection model and a BatchNorm (BN) statistics update, that improves the accuracy of any off-the-shelf trained model on corrupted inputs.
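A minimal sketch of test-time BN statistics adaptation, for context only (not necessarily the paper's exact procedure): the running means and variances of a pretrained model's BatchNorm layers are re-estimated on unlabeled, possibly corrupted test batches while all weights stay frozen.

```python
import torch
import torch.nn as nn

def update_bn_statistics(model: nn.Module, test_loader, device="cpu"):
    """Re-estimate BatchNorm running statistics on (unlabeled) test data."""
    model.to(device)
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # discard the old statistics
            m.momentum = None         # use a cumulative moving average
    model.train()                     # BN collects statistics only in train mode
    with torch.no_grad():             # weights are never updated
        for x, _ in test_loader:
            model(x.to(device))
    model.eval()
    return model
```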
no code implementations • 6 Dec 2022 • Shahbaz Rezaei, Xin Liu
We argue that current membership inference attacks can identify memorized subpopulations, but they cannot reliably identify which exact sample in the subpopulation was used during training.
no code implementations • 4 Mar 2022 • Shahbaz Rezaei, Xin Liu
The intuition is that, if the target sample was not used in training, the model's response to it should not differ significantly from its response to other samples in the same subpopulation.
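A hedged sketch of this intuition (illustrative, not the paper's exact attack): the target sample's loss is compared with the loss distribution over other samples from its subpopulation, and an unusually low loss is taken as evidence of membership.

```python
import torch
import torch.nn.functional as F

def subpopulation_mi_score(model, target_x, target_y, subpop_x, subpop_y):
    """How many standard deviations the target's loss lies below the
    subpopulation's mean loss (higher score => more likely a training member)."""
    model.eval()
    with torch.no_grad():
        target_loss = F.cross_entropy(model(target_x.unsqueeze(0)),
                                      target_y.unsqueeze(0))
        subpop_losses = F.cross_entropy(model(subpop_x), subpop_y,
                                        reduction="none")
    return ((subpop_losses.mean() - target_loss) /
            (subpop_losses.std() + 1e-8)).item()
```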
no code implementations • 4 Mar 2022 • Guoyao Li, Shahbaz Rezaei, Xin Liu
In this paper, we develop a user-level MI attack whose goal is to determine whether any sample from a target user was used during training, even when the attacker has no access to any exact training sample.
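A hedged sketch of a user-level decision rule (illustrative only, with a hypothetical aggregation and threshold): per-sample scores computed on several non-training samples belonging to the target user are aggregated, and the user is flagged if the aggregate exceeds a threshold.

```python
import torch
import torch.nn.functional as F

def user_level_membership(model, user_samples, user_labels, threshold=0.9):
    """user_samples: tensor of samples from the target user (none of which
    need to be exact training samples)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(user_samples), dim=1)
        # Confidence assigned to the correct class, one value per sample.
        confidences = probs[torch.arange(len(user_labels)), user_labels]
    # Aggregate per-sample evidence; the mean is just one simple choice.
    return confidences.mean().item() > threshold
```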
1 code implementation • 12 May 2021 • Shahbaz Rezaei, Zubair Shafiq, Xin Liu
We analyze the impact of various factors in deep ensembles and identify the root cause of the trade-off.
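For context, a minimal deep-ensemble sketch (standard practice, not tied to the paper's specific analysis): several independently trained models are combined by averaging their softmax outputs at prediction time.

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, x):
    """Average the class probabilities of all member models for input batch x."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)  # shape: (batch, num_classes)
```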
1 code implementation • CVPR 2021 • Shahbaz Rezaei, Xin Liu
Recent studies propose membership inference (MI) attacks on deep models, where the goal is to infer whether a sample was used in the training process.
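A minimal loss-thresholding baseline for MI, shown only to make the problem concrete (a common baseline in the literature, not necessarily the attack studied here): a sample is predicted to be a training member if its loss falls below an attacker-chosen threshold.

```python
import torch
import torch.nn.functional as F

def loss_threshold_mi(model, x, y, threshold=0.5):
    """Return a boolean membership prediction per sample in the batch."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # low loss => predicted training member
```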
no code implementations • 8 Dec 2019 • Shahbaz Rezaei, Xin Liu
Despite the plethora of studies on the security vulnerabilities and defenses of deep learning models, the security aspects of deep learning methodologies, such as transfer learning, have rarely been studied.
1 code implementation • 12 Jun 2019 • Shahbaz Rezaei, Xin Liu
We show that with a large number of easily obtainable data samples for the bandwidth and duration prediction tasks, and only a few data samples for the traffic classification task, one can achieve high accuracy.
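A hedged sketch of this transfer setup (illustrative architecture and dimensions, not the paper's exact model): a shared encoder is first trained on the easily labeled regression tasks (bandwidth and duration prediction), then reused with a small classification head trained on only a few labeled traffic samples.

```python
import torch.nn as nn

def make_encoder(in_dim=256, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU())

encoder = make_encoder()
pretrain_head = nn.Linear(128, 2)    # predicts bandwidth and duration
classify_head = nn.Linear(128, 10)   # traffic classes, trained on few samples

# Phase 1: train encoder + pretrain_head on abundant regression data.
# Phase 2: freeze (or slowly fine-tune) the encoder and train classify_head
#          on the small labeled traffic-classification set.
for p in encoder.parameters():
    p.requires_grad = False
```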
1 code implementation • ICLR 2020 • Shahbaz Rezaei, Xin Liu
Due to insufficient training data and the high computational cost of training a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications.
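For context, a standard feature-extraction transfer-learning recipe (not specific to this paper): a pretrained backbone is frozen and only a new classification head is trained on the small target dataset.

```python
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                             # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 10)    # new task head (trainable)
```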