Search Results for author: Sarah Monazam Erfani

Found 13 papers, 7 papers with code

Unlearnable Examples For Time Series

no code implementations • 3 Feb 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

In this work, we introduce the first unlearnable example (UE) generation method to protect time series data from unauthorized training by deep learning models.

Time Series

LDReg: Local Dimensionality Regularized Self-Supervised Learning

1 code implementation • 19 Jan 2024 • Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey

Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.

Self-Supervised Learning

End-to-End Anti-Backdoor Learning on Images and Time Series

no code implementations • 6 Jan 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey

Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security.

Image Classification · Time Series

Backdoor Attacks on Time Series: A Generative Approach

1 code implementation • 15 Nov 2022 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

We find that, compared to images, it can be more challenging to achieve the two goals on time series.

Time Series · Time Series Analysis

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation • NeurIPS 2021 • Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness
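Observation 2) above, trimming capacity at the last stage, can be sensed with a back-of-the-envelope parameter count. The block structure and stage widths below are illustrative placeholders, not the architectures evaluated in the paper:

```python
# Rough parameter count for a stack of 3x3 conv stages (biases and
# shortcut convs ignored). Widths are illustrative, not from the paper.

def conv_params(c_in, c_out, k=3):
    """Weights in a single k x k convolution."""
    return k * k * c_in * c_out

def stage_params(c_in, width, blocks):
    """A stage of residual-style blocks, two convs per block; the first
    conv of the stage maps c_in -> width, the rest are width -> width."""
    return conv_params(c_in, width) + (2 * blocks - 1) * conv_params(width, width)

def network_params(widths, blocks_per_stage, c_in=3):
    total = 0
    for w, b in zip(widths, blocks_per_stage):
        total += stage_params(c_in, w, b)
        c_in = w
    return total

# Same depth, but the second config halves the last-stage width:
base = network_params([64, 128, 256, 512], [2, 2, 2, 2])
slim = network_params([64, 128, 256, 256], [2, 2, 2, 2])
print(base, slim)
```

Because the last stage has the widest channels, shrinking it removes a disproportionate share of the total parameters, which is why it is a natural place to redistribute capacity.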

Dual Head Adversarial Training

1 code implementation • 21 Apr 2021 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.

MMF: Attribute Interpretable Collaborative Filtering

no code implementations • 3 Aug 2019 • Yixin Su, Sarah Monazam Erfani, Rui Zhang

Collaborative filtering is one of the most popular techniques for designing recommendation systems, and its most representative model, matrix factorization, has been widely used by researchers and industry.

Attribute · Collaborative Filtering +1
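For context on the matrix-factorization baseline the abstract mentions, here is a minimal sketch trained with SGD on observed (user, item, rating) triples. The data, latent dimension, and hyperparameters are illustrative and not taken from the paper:

```python
import random

# Plain matrix factorization for collaborative filtering: each user u and
# item i gets a k-dimensional factor vector, and the predicted rating is
# their dot product. Trained by SGD with L2 regularization.

def train_mf(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02, epochs=1000, seed=0):
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.5) for _ in range(k)] for _ in range(n_users)]  # user factors
    Q = [[rng.gauss(0, 0.5) for _ in range(k)] for _ in range(n_items)]  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# Toy rating matrix with missing entries (illustrative data):
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0)]
P, Q = train_mf(ratings, n_users=3, n_items=3)
print(round(predict(P, Q, 0, 0), 1))  # close to the observed rating of 5.0
```

The MMF model in the paper adds attribute-level interpretability on top of this kind of latent-factor structure; plain factors like `P` and `Q` here carry no human-readable meaning.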

Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality

no code implementations • 2 May 2019 • Sukarna Barua, Xingjun Ma, Sarah Monazam Erfani, Michael E. Houle, James Bailey

In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality.
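For background, local intrinsic dimensionality (LID) at a point is commonly estimated from its nearest-neighbour distances via the Levina-Bickel / Amsaleg maximum-likelihood form. The sketch below shows that standard estimator only; the paper's cross-LID measure builds on such estimates across real and generated samples, and is not reproduced here:

```python
import math
import random

# MLE of local intrinsic dimensionality at a query point:
#   LID(x) = -( (1/k) * sum_{i=1..k} ln(r_i / r_k) )^{-1}
# where r_1 <= ... <= r_k are the distances to the k nearest neighbours.

def lid_mle(query, data, k=20):
    dists = sorted(d for p in data if (d := math.dist(query, p)) > 0)
    r = dists[:k]
    r_max = r[-1]
    return -k / sum(math.log(ri / r_max) for ri in r[:-1])

# Points sampled uniformly from a 2-D square should give an estimate near 2.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(2000)]
est = lid_mle((0.5, 0.5), data)
print(round(est, 2))
```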

Learning Deep Hidden Nonlinear Dynamics from Aggregate Data

no code implementations • 22 Jul 2018 • Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha

Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.

Unsupervised Adversarial Anomaly Detection using One-Class Support Vector Machines

no code implementations • ICLR 2018 • Prameesha Sandamal Weerasinghe, Tansu Alpcan, Sarah Monazam Erfani, Christopher Leckie

Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries.

Anomaly Detection · BIG-bench Machine Learning
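To illustrate the one-class idea (fit a region around unlabeled, mostly-normal data and flag points that fall outside it), here is a toy scorer based on mean RBF-kernel similarity with a quantile threshold. It is a kernel-density stand-in for a one-class SVM, not the paper's adversarially robust formulation:

```python
import math
import random

# Toy one-class scorer: a point's score is its mean RBF-kernel similarity
# to the (unlabeled, assumed mostly normal) training set. Points scoring
# below a low quantile of the training scores are flagged as anomalies.

def rbf_score(x, train, gamma=5.0):
    return sum(math.exp(-gamma * math.dist(x, p) ** 2) for p in train) / len(train)

def fit_threshold(train, gamma=5.0, quantile=0.05):
    scores = sorted(rbf_score(p, train, gamma) for p in train)
    return scores[int(quantile * len(scores))]

# Illustrative normal data: a tight 2-D Gaussian cluster at the origin.
random.seed(1)
normal = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(200)]
thr = fit_threshold(normal)
print(rbf_score((0.0, 0.0), normal) > thr)   # point near the cluster centre
print(rbf_score((3.0, 3.0), normal) < thr)   # far-away point flagged as anomalous
```

A real OCSVM instead solves a margin-based optimization (e.g. scikit-learn's `OneClassSVM`), but the score-then-threshold shape of the decision is the same.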
