Search Results for author: Yingjie Lao

Found 13 papers, 3 papers with code

Fully Attentional Networks with Self-emerging Token Labeling

1 code implementation ICCV 2023 Bingyin Zhao, Zhiding Yu, Shiyi Lan, Yutao Cheng, Anima Anandkumar, Yingjie Lao, Jose M. Alvarez

With the proposed STL framework, our best model based on FAN-L-Hybrid (77.3M parameters) achieves 84.8% Top-1 accuracy and 42.1% mCE on ImageNet-1K and ImageNet-C, and sets a new state-of-the-art for ImageNet-A (46.1%) and ImageNet-R (56.6%) without using extra data, outperforming the original FAN counterpart by significant margins.

Semantic Segmentation

Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks

no code implementations 1 Oct 2023 Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan

Recent works have shown that deep neural networks are vulnerable to adversarial examples: samples that stay close to the original image yet cause the model to misclassify.
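A minimal sketch of the randomized-feature idea this paper studies (the noise placement and scale are illustrative assumptions, not the paper's exact formulation): small Gaussian noise is added to an intermediate feature map at inference time, so repeated queries on the same input return slightly different scores, which degrades the gradient estimation used by query-based attackers.

import torch
import torch.nn as nn

class RandomizedFeatureDefense(nn.Module):
    def __init__(self, backbone: nn.Module, head: nn.Module, sigma: float = 0.05):
        super().__init__()
        self.backbone = backbone  # feature extractor
        self.head = head          # classifier head
        self.sigma = sigma        # noise scale (hypothetical default)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        feats = feats + self.sigma * torch.randn_like(feats)  # randomize features
        return self.head(feats)

# Wrap a toy model; identical queries now yield slightly different logits.
model = RandomizedFeatureDefense(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
print(model(x) - model(x))  # nonzero: outputs vary across repeated queries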

Machine Unlearning in Gradient Boosting Decision Trees

1 code implementation KDD 2023 Huawei Lin, Jun Woo Chung, Yingjie Lao, Weijie Zhao

To the best of our knowledge, this is the first work that considers machine unlearning on GBDT.

Machine Unlearning

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class

no code implementations 17 Oct 2022 Khoa D. Doan, Yingjie Lao, Ping Li

To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model.
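A hedged sketch of a class-conditional trigger generator in the spirit of this idea (architecture, sizes, and the perturbation budget are illustrative assumptions, not the paper's code): given a target label, a small network generates a bounded trigger pattern that is added to the input, so one backdoor can steer the model toward any class.

import torch
import torch.nn as nn

class ConditionalTrigger(nn.Module):
    def __init__(self, num_classes: int, img_dim: int = 3 * 32 * 32, eps: float = 0.05):
        super().__init__()
        self.embed = nn.Embedding(num_classes, 64)   # class conditioning
        self.gen = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, img_dim))
        self.eps = eps                               # perturbation budget (assumed)

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        delta = torch.tanh(self.gen(self.embed(target))) * self.eps
        return (x + delta.view_as(x)).clamp(0, 1)    # poisoned input for any class

trigger = ConditionalTrigger(num_classes=10)
x = torch.rand(1, 3, 32, 32)
poisoned = trigger(x, torch.tensor([7]))  # steer toward class 7 at attack time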

Backdoor Attack

NL2GDPR: Automatically Develop GDPR Compliant Android Application Features from Natural Language

no code implementations 29 Aug 2022 Faysal Hossain Shezan, Yingjie Lao, Minlong Peng, Xin Wang, Mingming Sun, Ping Li

At its core, NL2GDPR is a privacy-centric information extraction model, coupled with a GDPR policy finder and a policy generator.

DeepAuth: A DNN Authentication Framework by Model-Unique and Fragile Signature Embedding

no code implementations Proceedings of the AAAI Conference on Artificial Intelligence 2022 Yingjie Lao, Weijie Zhao, Peng Yang, Ping Li

After embedding, each model responds distinctively to these key samples, which creates a model-unique signature that serves as a strong tool for authentication and user identification.
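An illustrative sketch of the verification step only (the actual signature-embedding procedure in the paper is more involved; names and the threshold are assumptions): a model is authenticated by checking its responses on a set of secret key samples that only the signed model should answer consistently.

import torch

def verify_signature(model, key_inputs, key_labels, threshold=0.95):
    """Authenticate a model by its responses to secret key samples."""
    with torch.no_grad():
        preds = model(key_inputs).argmax(dim=1)
    match_rate = (preds == key_labels).float().mean().item()
    return match_rate >= threshold  # only the signed model should pass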

Defending Backdoor Attacks on Vision Transformer via Patch Processing

no code implementations 24 Jun 2022 Khoa D. Doan, Yingjie Lao, Peng Yang, Ping Li

We first examine the vulnerability of ViTs to various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks.

Backdoor Attack · Inductive Bias

Backdoor Attack with Imperceptible Input and Latent Modification

no code implementations NeurIPS 2021 Khoa Doan, Yingjie Lao, Ping Li

Many existing countermeasures rely on the observation that backdoor attacks tend to leave tangible footprints in the latent or feature space, which can be utilized to mitigate them. In this paper, we extend the concept of imperceptible backdoors from the input space to the latent representation, which significantly improves effectiveness against existing defense mechanisms, especially those relying on the distinguishability between clean inputs and backdoor inputs in latent space.
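A minimal sketch of the latent-imperceptibility objective (the loss weighting, names, and the assumption that the model returns both logits and features are mine, not the paper's): besides the usual attack objective, the backdoor input's latent representation is pulled toward that of its clean counterpart, so latent-space defenses see no separation between clean and poisoned data.

import torch
import torch.nn.functional as F

def backdoor_loss(model, x_clean, x_poisoned, target, lam=1.0):
    # Assumes model(x) returns (logits, latent_features).
    logits_p, latent_p = model(x_poisoned)
    _, latent_c = model(x_clean)
    attack = F.cross_entropy(logits_p, target)         # force the target prediction
    stealth = F.mse_loss(latent_p, latent_c.detach())  # match clean latent statistics
    return attack + lam * stealth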

Backdoor Attack

Robust Watermarking for Deep Neural Networks via Bi-Level Optimization

no code implementations ICCV 2021 Peng Yang, Yingjie Lao, Ping Li

Deep neural networks (DNNs) have become state-of-the-art in many application domains.

LIRA: Learnable, Imperceptible and Robust Backdoor Attacks

2 code implementations ICCV 2021 Khoa Doan, Yingjie Lao, Weijie Zhao, Ping Li

Under this optimization framework, the trigger generator function will learn to manipulate the input with imperceptible noise to preserve the model performance on the clean data and maximize the attack success rate on the poisoned data.
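A rough sketch of that joint objective (a simplification of LIRA's two-stage scheme; the transformation function, budget, and schedule in the paper differ): the classifier is trained to stay accurate on clean data while the generator learns a small perturbation that flips predictions to the target class.

import torch
import torch.nn.functional as F

def lira_step(model, generator, x, y, target, eps=0.05):
    # Assumes generator(x) returns a tensor of the same shape as x.
    delta = eps * torch.tanh(generator(x))        # imperceptible trigger pattern
    clean_loss = F.cross_entropy(model(x), y)     # preserve clean performance
    attack_loss = F.cross_entropy(model((x + delta).clamp(0, 1)),
                                  torch.full_like(y, target))
    return clean_loss + attack_loss               # backprop through both networks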

Backdoor Attack · backdoor defense · +1

Towards Class-Oriented Poisoning Attacks Against Neural Networks

no code implementations 31 Jul 2020 Bingyin Zhao, Yingjie Lao

Poisoning attacks on machine learning systems compromise model performance by deliberately injecting malicious samples into the training dataset to influence the training process.
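A toy illustration of training-set poisoning (plain label flipping, far simpler than the class-oriented attacks studied in the paper; the function and rate are hypothetical): a fraction of samples from a source class is relabeled as the adversary's chosen class before training.

import torch

def flip_labels(labels: torch.Tensor, source: int, target: int, rate: float = 0.1):
    poisoned = labels.clone()
    idx = (labels == source).nonzero(as_tuple=True)[0]
    n = int(rate * idx.numel())
    poisoned[idx[torch.randperm(idx.numel())[:n]]] = target  # inject flipped labels
    return poisoned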

Rallying Adversarial Techniques against Deep Learning for Network Security

no code implementations 27 Mar 2019 Joseph Clements, Yuzhe Yang, Ankur Sharma, Hongxin Hu, Yingjie Lao

Recent advances in artificial intelligence and the increasing need for powerful defensive measures in the domain of network security have led to the adoption of deep learning approaches for use in network intrusion detection systems.

BIG-bench Machine Learning · Network Intrusion Detection

Hardware Trojan Attacks on Neural Networks

no code implementations 14 Jun 2018 Joseph Clements, Yingjie Lao

With the rising popularity of machine learning and the ever-increasing demand for computational power, there is a growing need for hardware-optimized implementations of neural networks and other machine learning models.

BIG-bench Machine Learning · Neural Network Security
