no code implementations • 27 Jan 2024 • Yige Li, Xingjun Ma, Jiabo He, Hanxun Huang, Yu-Gang Jiang
Arguably, real-world backdoor attacks can be much more complex, e.g., multiple adversaries may poison the same dataset if it is of high value.
1 code implementation • 19 Jan 2024 • Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey
Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.
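Dimensional collapse can be quantified by how evenly the variance of the learned representations is spread across dimensions. The sketch below is an illustration only, not the paper's estimator: it computes an effective rank from the entropy of normalized singular values, on synthetic embeddings standing in for SSL features.

```python
import numpy as np

def effective_rank(embeddings):
    """Effective rank via the entropy of normalized singular values:
    high when variance is spread evenly, low under dimensional collapse."""
    X = embeddings - embeddings.mean(axis=0)   # center before SVD
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
# Synthetic stand-ins for learned representations (not real SSL features).
full = rng.normal(size=(1000, 32))                                 # uses all 32 dims
collapsed = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 32))  # rank-2 subspace

print(effective_rank(full))       # near 32
print(effective_rank(collapsed))  # near 2: collapse detected
```

A collapsed representation embedded in a 32-dimensional space still reports an effective rank close to its true intrinsic dimensionality.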
1 code implementation • 26 Jan 2023 • Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey
We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.
1 code implementation • NeurIPS 2021 • Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma
Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.
1 code implementation • ICLR 2021 • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, Yisen Wang
This paper raises the question: \emph{can data be made unlearnable for deep learning models?}
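The idea behind making data unlearnable is to add small error-minimizing perturbations, so that examples appear "already learned" and provide little training signal. The toy sketch below only illustrates that direction: it freezes a small linear model and descends the loss with respect to the inputs, whereas the actual approach is a bi-level min-min optimization that alternates with model updates.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (rng.random(n) < sigmoid(X @ rng.normal(size=d))).astype(float)

w = rng.normal(size=d) * 0.1      # frozen toy model (a simplification)
delta = np.zeros_like(X)          # error-minimizing perturbation
eps, lr = 0.5, 1.0                # illustrative perturbation budget / step size

def loss(Xp):
    p = sigmoid(Xp @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

for _ in range(100):
    p = sigmoid((X + delta) @ w)
    grad = ((p - y)[:, None] * w[None, :]) / n     # d(loss)/d(input)
    delta = np.clip(delta - lr * grad, -eps, eps)  # descend: minimize the loss

print(loss(X), loss(X + delta))  # perturbed data yields lower training loss
```

Because the perturbation minimizes rather than maximizes the loss, the perturbed examples look easy to the model, which is what suppresses learning in the full method.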
no code implementations • 1 Jan 2021 • Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey
NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.
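Of the listed search strategies, evolutionary search is the simplest to sketch. The following toy loop uses a made-up three-parameter search space and a synthetic fitness function standing in for validation accuracy; a real NAS pipeline would train and evaluate each candidate architecture instead.

```python
import random

# Hypothetical toy search space: (depth, width, kernel size) choices.
SPACE = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [3, 5, 7]}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(arch):
    child = dict(arch)
    k = random.choice(list(SPACE))     # re-sample one architectural choice
    child[k] = random.choice(SPACE[k])
    return child

def fitness(arch):
    # Stand-in for validation accuracy; real NAS would train this architecture.
    return arch["depth"] * arch["width"] / arch["kernel"]

random.seed(0)
pop = [sample() for _ in range(8)]
history = []
for _ in range(20):                    # simple (mu + lambda) evolution with elitism
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))
    pop = pop[:4] + [mutate(p) for p in pop[:4]]
best = max(pop, key=fitness)
print(best, fitness(best))
```

Elitism guarantees the best fitness never decreases across generations; policy-gradient, differentiable, and tree-search variants replace this sampling loop with learned or relaxed search procedures.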
1 code implementation • 24 Jun 2020 • Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang
Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.
4 code implementations • ICML 2020 • Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey
However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
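A standard illustration of why robustness alone is not sufficient: MAE is known to be robust to noisy labels, but its gradient with respect to the logits vanishes on confidently misclassified examples, so the model underfits them. The sketch below compares gradient magnitudes for cross-entropy and MAE at a single hard example; it is a generic demonstration, not the loss proposed in this paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_norms(z, y):
    """Gradient magnitudes w.r.t. logits z for CE and MAE at one example."""
    p = softmax(z)
    onehot = np.eye(len(z))[y]
    g_ce = p - onehot            # d(CE)/dz
    g_mae = 2 * p[y] * g_ce      # d(MAE)/dz: scaled by p_y, vanishes when p_y ~ 0
    return np.linalg.norm(g_ce), np.linalg.norm(g_mae)

# True class 1, but the model is confidently wrong about class 0.
z_hard = np.array([5.0, 0.0, 0.0])
ce, mae = grad_norms(z_hard, y=1)
print(ce, mae)  # CE gradient stays large; MAE gradient is near zero
```

The vanishing MAE gradient on hard examples explains the underfitting: the loss is robust to wrong labels, but it also barely learns from genuinely difficult clean examples.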
Ranked #30 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)