1 code implementation • ICCV 2023 • Bingyin Zhao, Zhiding Yu, Shiyi Lan, Yutao Cheng, Anima Anandkumar, Yingjie Lao, Jose M. Alvarez
With the proposed STL framework, our best model based on FAN-L-Hybrid (77.3M parameters) achieves 84.8% Top-1 accuracy and 42.1% mCE on ImageNet-1K and ImageNet-C, and sets a new state-of-the-art for ImageNet-A (46.1%) and ImageNet-R (56.6%) without using extra data, outperforming the original FAN counterpart by significant margins.
Ranked #16 on Domain Generalization on ImageNet-C
no code implementations • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
Recent works have shown that deep neural networks are vulnerable to adversarial examples: inputs that remain close to the original image yet cause the model to misclassify.
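The adversarial-example threat described above can be illustrated with a minimal one-step FGSM sketch (FGSM is a standard, well-known attack, not this paper's method; the function name and the `eps` budget are our choices for illustration):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """One-step FGSM: nudge x in the direction that increases the loss,
    bounded by eps so the result stays close to the original image."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # small, sign-based perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image range
```

Because the perturbation is bounded by `eps`, the adversarial image is visually indistinguishable from the original even when the model's prediction changes.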
1 code implementation • KDD 2023 • Huawei Lin, Jun Woo Chung, Yingjie Lao, Weijie Zhao
To the best of our knowledge, this is the first work that considers machine unlearning on GBDT.
no code implementations • 17 Oct 2022 • Khoa D. Doan, Yingjie Lao, Ping Li
To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model.
no code implementations • 29 Aug 2022 • Faysal Hossain Shezan, Yingjie Lao, Minlong Peng, Xin Wang, Mingming Sun, Ping Li
At its core, NL2GDPR is a privacy-centric information extraction model, augmented with a GDPR policy finder and a policy generator.
no code implementations • Proceedings of the AAAI Conference on Artificial Intelligence 2022 • Yingjie Lao, Weijie Zhao, Peng Yang, Ping Li
After embedding, each model responds distinctively to these key samples, creating a model-unique signature that serves as a strong tool for authentication and user identification.
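The key-sample verification idea above can be sketched as a simple routine that checks whether a model reproduces its recorded responses on the embedded key samples (an illustrative sketch; the function name and matching threshold are our assumptions, not the paper's API):

```python
import torch

def verify_signature(model, key_samples, expected_labels):
    """Return the fraction of key samples on which the model's prediction
    matches the recorded signature; 1.0 means a perfect match."""
    with torch.no_grad():
        preds = model(key_samples).argmax(dim=1)
    return (preds == expected_labels).float().mean().item()
```

A verifier would accept the model as authentic only if this match rate exceeds a high threshold, since an unrelated model is unlikely to reproduce the signature by chance.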
no code implementations • 24 Jun 2022 • Khoa D. Doan, Yingjie Lao, Peng Yang, Ping Li
We first examine the vulnerability of ViTs against various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks.
no code implementations • NeurIPS 2021 • Khoa Doan, Yingjie Lao, Ping Li
Many existing countermeasures have found that backdoors tend to leave tangible footprints in the latent or feature space, which can be utilized to mitigate backdoor attacks. In this paper, we extend the concept of imperceptible backdoors from the input space to the latent representation, which significantly improves the effectiveness against existing defense mechanisms, especially those relying on the distinguishability between clean inputs and backdoor inputs in latent space.
no code implementations • ICCV 2021 • Peng Yang, Yingjie Lao, Ping Li
Deep neural networks (DNNs) have become state-of-the-art in many application domains.
2 code implementations • ICCV 2021 • Khoa Doan, Yingjie Lao, Weijie Zhao, Ping Li
Under this optimization framework, the trigger generator function learns to manipulate the input with imperceptible noise so as to preserve the model's performance on clean data while maximizing the attack success rate on poisoned data.
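The joint objective described above can be sketched roughly in PyTorch (a hedged illustration only; the function names, the tanh-based norm bound, and `eps` are our assumptions, not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def backdoor_step(model, generator, x, y, target_class, eps=0.05):
    """One illustrative training step: the generator emits a bounded
    perturbation (imperceptibility), and the combined loss rewards correct
    classification of clean inputs plus target-class classification of
    poisoned inputs (high attack success rate)."""
    noise = eps * torch.tanh(generator(x))     # bounded, input-dependent trigger
    x_poison = (x + noise).clamp(0.0, 1.0)     # stay in valid image range
    target = torch.full_like(y, target_class)  # attacker's chosen label
    clean_loss = F.cross_entropy(model(x), y)
    attack_loss = F.cross_entropy(model(x_poison), target)
    return clean_loss + attack_loss
```

Minimizing the sum over both model and generator parameters is what couples the two goals: clean accuracy is preserved while the generated trigger steers poisoned inputs to the target class.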
no code implementations • 31 Jul 2020 • Bingyin Zhao, Yingjie Lao
Poisoning attacks on machine learning systems compromise model performance by deliberately injecting malicious samples into the training dataset to influence the training process.
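The injection step can be illustrated with a minimal sketch of one simple poisoning strategy, crafting mislabeled points near existing data (an illustrative example of the general threat, not this paper's specific attack; all names and parameters are ours):

```python
import numpy as np

def poison_dataset(X, y, n_poison, target_label, seed=0):
    """Append n_poison crafted samples that sit near real data points but
    carry the attacker's chosen (wrong) label, steering training."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_poison = X[idx] + rng.normal(scale=0.1, size=X[idx].shape)
    y_poison = np.full(n_poison, target_label, dtype=y.dtype)
    return np.concatenate([X, X_poison]), np.concatenate([y, y_poison])
```

Even a small fraction of such points can shift a learned decision boundary, which is why the ratio of poisoned to clean samples is a central quantity in poisoning analyses.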
no code implementations • 27 Mar 2019 • Joseph Clements, Yuzhe Yang, Ankur Sharma, Hongxin Hu, Yingjie Lao
Recent advances in artificial intelligence and the increasing need for powerful defensive measures in the domain of network security have led to the adoption of deep learning approaches for use in network intrusion detection systems.
no code implementations • 14 Jun 2018 • Joseph Clements, Yingjie Lao
With the rising popularity of machine learning and the ever-increasing demand for computational power, there is a growing need for hardware-optimized implementations of neural networks and other machine learning models.