no code implementations • 3 Feb 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
In this work, we introduce the first unlearnable example (UE) generation method to protect time series data from unauthorized training by deep learning models.
1 code implementation • 19 Jan 2024 • Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey
Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.
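For intuition, one common way to diagnose dimensional collapse is to inspect the singular value spectrum of a batch of representations. The sketch below is an illustrative diagnostic, not the method proposed in the paper; the entropy-based effective-rank measure and the function name are my own choices.

```python
# Illustrative sketch: diagnosing dimensional collapse via the singular
# value spectrum of a batch of learned representations.
import numpy as np

def effective_rank(representations: np.ndarray) -> float:
    """Entropy-based effective rank of an (n_samples, dim) matrix.

    A value far below `dim` suggests the representations occupy a
    low-dimensional subspace, i.e. dimensional collapse.
    """
    # Center the features, then take singular values.
    z = representations - representations.mean(axis=0, keepdims=True)
    s = np.linalg.svd(z, compute_uv=False)
    p = s / s.sum()                      # normalized spectrum
    entropy = -np.sum(p * np.log(p + 1e-12))
    return float(np.exp(entropy))       # exp(entropy) = effective rank

# Example: 1024 samples of a 128-d representation that actually lives in a
# 5-d subspace -- the effective rank comes out close to 5, not 128.
rng = np.random.default_rng(0)
z = rng.normal(size=(1024, 5)) @ rng.normal(size=(5, 128))
print(effective_rank(z))
```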
no code implementations • 6 Jan 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey
Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security.
1 code implementation • 15 Nov 2022 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
We find that achieving these two goals is more challenging for time series than for images.
1 code implementation • NeurIPS 2021 • Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma
Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.
1 code implementation • NeurIPS 2021 • Jiabo He, Sarah Monazam Erfani, Xingjun Ma, James Bailey, Ying Chi, Xian-Sheng Hua
Bounding box (bbox) regression is a fundamental task in computer vision.
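As background for bbox regression (a standard computation, not this paper's contribution), the intersection-over-union between two axis-aligned boxes, which underlies most bbox regression losses:

```python
# Standard IoU for axis-aligned boxes in (x1, y1, x2, y2) format.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.143
```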
1 code implementation • 21 Apr 2021 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.
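For concreteness, the classic fast gradient sign method (FGSM) below illustrates how such adversarial examples are crafted. This is a standard textbook attack shown for background, not necessarily the attack studied in this paper; `model`, `x`, and `y` are placeholder handles for a trained PyTorch classifier and an input batch.

```python
# Minimal FGSM sketch: perturb inputs within an L-inf ball to increase loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return adversarial examples within an L-inf ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```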
1 code implementation • ICLR 2021 • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, Yisen Wang
This paper raises the question: can data be made unlearnable for deep learning models?
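The paper's answer is affirmative, via error-minimizing noise that makes training examples appear "already learned". Below is a heavily simplified sketch in that spirit; the projected-gradient step size, iteration count, and function name are assumptions, not the exact published recipe.

```python
# Simplified error-minimizing ("unlearnable") noise sketch: with the model
# fixed, update a per-sample perturbation delta to *minimize* the training
# loss (min-min), so the perturbed data carries little useful signal.
import torch
import torch.nn.functional as F

def update_noise(model, x, y, delta, epsilon=8 / 255, step=0.8 / 255, iters=10):
    """One round of error-minimizing noise updates via projected gradient descent."""
    for _ in range(iters):
        delta = delta.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        # Descend the loss w.r.t. delta, then project onto the L-inf ball.
        delta = (delta - step * delta.grad.sign()).clamp(-epsilon, epsilon)
    return delta.detach()
```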
no code implementations • 3 Aug 2019 • Yixin Su, Sarah Monazam Erfani, Rui Zhang
Collaborative filtering is one of the most popular techniques in designing recommendation systems, and its most representative model, matrix factorization, has been widely used by researchers and the industry.
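For reference, a textbook matrix factorization sketch (illustrative background only; the latent dimension, learning rate, and regularization strength are placeholder values):

```python
# Factorize a partially observed rating matrix R as U @ V.T by gradient
# descent on the observed entries, with L2 regularization.
import numpy as np

def factorize(R, mask, k=8, lr=0.01, reg=0.1, epochs=200, seed=0):
    """R: (n_users, n_items) ratings; mask: 1 where a rating is observed."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], k))
    V = 0.1 * rng.standard_normal((R.shape[1], k))
    for _ in range(epochs):
        err = mask * (U @ V.T - R)        # error on observed entries only
        U -= lr * (err @ V + reg * U)     # gradient step for user factors
        V -= lr * (err.T @ U + reg * V)   # gradient step for item factors
    return U, V
```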
3 code implementations • 7 May 2019 • Sukarna Barua, Sarah Monazam Erfani, James Bailey
Generative Adversarial Networks (GANs) are a powerful class of generative models.
no code implementations • 2 May 2019 • Sukarna Barua, Xingjun Ma, Sarah Monazam Erfani, Michael E. Houle, James Bailey
In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality.
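A common maximum-likelihood estimator of local intrinsic dimensionality (in the Levina-Bickel style) used in this line of work is sketched below; the neighborhood size and the epsilon guard are assumptions, and the paper's exact estimator may differ.

```python
# MLE sketch of local intrinsic dimensionality (LID) at a query point,
# from the distances to its k nearest neighbors.
import numpy as np

def lid_mle(x, neighbors, k=20):
    """x: (d,) query point; neighbors: (n, d) reference points (excluding x), n >= k."""
    dists = np.sort(np.linalg.norm(neighbors - x, axis=1))[:k]
    # LID = -(mean over i of log(r_i / r_k))^{-1}; epsilon guards log(0).
    return -1.0 / np.mean(np.log(dists / dists[-1] + 1e-12))
```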
no code implementations • 22 Jul 2018 • Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha
Learning nonlinear dynamics from diffusion data is a challenging problem, since the individuals observed may differ across time points while generally following an aggregate behaviour.
no code implementations • ICLR 2018 • Prameesha Sandamal Weerasinghe, Tansu Alpcan, Sarah Monazam Erfani, Christopher Leckie
Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries.
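As background (a generic unsupervised baseline, not the adversarially robust method studied in this paper), a minimal anomaly detection example with scikit-learn's IsolationForest:

```python
# Flag non-conforming points in unlabeled 2-d data with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),     # regular data
               rng.uniform(-6, 6, size=(10, 2))])   # scattered anomalies
clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = clf.predict(X)  # +1 = inlier, -1 = flagged anomaly
print((labels == -1).sum(), "points flagged as anomalous")
```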