Search Results for author: Aishan Liu

Found 41 papers, 20 papers with code

Object Detectors in the Open Environment: Challenges, Solutions, and Outlook

1 code implementation • 24 Mar 2024 • Siyuan Liang, Wei Wang, Ruoyu Chen, Aishan Liu, Boxi Wu, Ee-Chien Chang, Xiaochun Cao, DaCheng Tao

This paper aims to bridge this gap by conducting a comprehensive review and analysis of object detectors in open environments.

Incremental Learning • Object

Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs

no code implementations • 21 Feb 2024 • Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang

Large Language Models (LLMs), used in creative writing, code generation, and translation, generate text based on input sequences but are vulnerable to jailbreak attacks, where crafted prompts induce harmful outputs.

Code Generation • Semantic Similarity +1

Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection

1 code implementation • 18 Feb 2024 • Jiawei Liang, Siyuan Liang, Aishan Liu, Xiaojun Jia, Junhao Kuang, Xiaochun Cao

However, this paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.

Backdoor Attack

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

no code implementations • 20 Nov 2023 • Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang

This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied, and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.

Backdoor Attack • Contrastive Learning

Adversarial Examples in the Physical World: A Survey

1 code implementation • 1 Nov 2023 • Jiakai Wang, Donghua Wang, Jin Hu, Siyang Wu, Tingsong Jiang, Wen Yao, Aishan Liu, Xianglong Liu

However, current research on physical adversarial examples (PAEs) lacks a comprehensive understanding of their unique characteristics, which limits their significance and further study.

MIR2: Towards Provably Robust Multi-Agent Reinforcement Learning by Mutual Information Regularization

no code implementations • 15 Oct 2023 • Simin Li, Ruixiao Xu, Jun Guo, Pu Feng, Jiakai Wang, Aishan Liu, Yaodong Yang, Xianglong Liu, Weifeng Lv

Existing max-min optimization techniques in robust MARL seek to enhance resilience by training agents against worst-case adversaries, but this becomes intractable as the number of agents grows, leading to exponentially increasing worst-case scenarios.

Multi-agent Reinforcement Learning • Starcraft +1

Face Encryption via Frequency-Restricted Identity-Agnostic Attacks

no code implementations • 11 Aug 2023 • Xin Dong, Rui Wang, Siyuan Liang, Aishan Liu, Lihua Jing

For feasibility in the weak black-box scenario, we observe that the average feature representations of multiple face recognition models are similar; we therefore propose to use the average feature, computed over a dataset crawled from the Internet, as the target to guide the generation, which is also agnostic to the identities used by unknown face recognition systems. By nature, low-frequency perturbations are more visually perceptible to the human vision system.

Face Recognition
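
The guided generation above can be pictured with a rough sketch: a perturbation, restricted by a frequency mask, is optimized so that a surrogate face encoder's embedding of the protected image moves toward a precomputed average-feature target. The function names, FFT-based masking, and step sizes below are illustrative assumptions, not the paper's implementation, and the sketch deliberately leaves open which frequency bands the mask keeps.

    # Sketch: frequency-masked perturbation guided toward an average-feature target.
    import torch
    import torch.nn.functional as F

    def encrypt_face(x, encoder, avg_feature, freq_mask, steps=20, lr=0.01):
        # x: (B, C, H, W) face images; avg_feature: (1, D) precomputed target
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            # keep only the allowed frequency components of the perturbation
            d = torch.fft.ifft2(torch.fft.fft2(delta) * freq_mask).real
            emb = encoder(x + d)
            loss = 1 - F.cosine_similarity(emb, avg_feature).mean()
            loss.backward()
            with torch.no_grad():
                delta -= lr * delta.grad.sign()
                delta.grad.zero_()
        d = torch.fft.ifft2(torch.fft.fft2(delta) * freq_mask).real
        return (x + d).detach()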

RobustMQ: Benchmarking Robustness of Quantized Models

no code implementations • 4 Aug 2023 • Yisong Xiao, Aishan Liu, Tianyuan Zhang, Haotong Qin, Jinyang Guo, Xianglong Liu

Quantization has emerged as an essential technique for deploying deep neural networks (DNNs) on devices with limited resources.

Adversarial Robustness • Benchmarking +1

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

1 code implementation • 2 Aug 2023 • Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

However, these defenses suffer from high inference computational overhead and unfavorable trade-offs between benign accuracy and robustness to stealing, which challenges the feasibility of deployed models in practice.

SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency

no code implementations • 1 Jul 2023 • Yan Wang, Yuhang Li, Ruihao Gong, Aishan Liu, Yanfei Wang, Jian Hu, Yongqiang Yao, Yunchen Zhang, Tianzi Xiao, Fengwei Yu, Xianglong Liu

Extensive studies have shown that deep learning models are vulnerable to adversarial and natural noises, yet little is known about model robustness on noises caused by different system implementations.

Benchmarking • Data Augmentation +5

FAIRER: Fairness as Decision Rationale Alignment

no code implementations • 27 Jun 2023 • Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu

Existing fairness regularization terms fail to achieve decision rationale alignment because they only constrain last-layer outputs while ignoring intermediate neuron alignment.

Fairness
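
A minimal sketch of what constraining intermediate neurons, rather than only last-layer outputs, could look like: a penalty on the gap between each group's mean activations at a hidden layer. The grouping, layer choice, and distance below are assumptions for illustration, not the FAIRER objective itself.

    # Sketch: pull each group's mean hidden activations toward the overall mean.
    import torch

    def rationale_alignment_penalty(hidden, group_ids):
        # hidden: (batch, features) activations from an intermediate layer
        # group_ids: (batch,) integer group/class labels
        overall = hidden.mean(dim=0)
        penalty = hidden.new_zeros(())
        groups = group_ids.unique()
        for g in groups:
            gap = hidden[group_ids == g].mean(dim=0) - overall
            penalty = penalty + gap.pow(2).sum()
        return penalty / len(groups)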

X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection

1 code implementation • 19 Feb 2023 • Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, DaCheng Tao

In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario.

Adversarial Attack

Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence

1 code implementation • 7 Feb 2023 • Simin Li, Jun Guo, Jingqiao Xiu, Pu Feng, Xin Yu, Aishan Liu, Wenjun Wu, Xianglong Liu

To achieve maximum deviation in victim policies under complex agent-wise interactions, our unilateral attack aims to characterize and maximize the impact of the adversary on the victims.

Continuous Control • reinforcement-learning +4

Exploring the Relationship Between Architectural Design and Adversarially Robust Generalization

no code implementations • CVPR 2023 • Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao

In particular, we comprehensively evaluated the 20 most representative adversarially trained architectures on the ImageNette and CIFAR-10 datasets against multiple ℓ_p-norm adversarial attacks.

Exploring the Relationship between Architecture and Adversarially Robust Generalization

no code implementations • 28 Sep 2022 • Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao

In particular, we comprehensively evaluated the 20 most representative adversarially trained architectures on the ImageNette and CIFAR-10 datasets against multiple ℓ_p-norm adversarial attacks.

Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation

1 code implementation • 20 Sep 2022 • Jiawei Liang, Siyuan Liang, Aishan Liu, Ke Ma, Jingzhi Li, Xiaochun Cao

Specifically, we propose a sample-specific data augmentation to transfer the teacher model's ability to capture distinct frequency components, and suggest an adversarial feature augmentation to extract the teacher model's perceptions of non-robust features in the data.

Data Augmentation • Knowledge Distillation +2
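
As a rough illustration of distilling on augmented data, the sketch below has the student match the teacher's soft outputs on transformed inputs. The augmentation policy, temperature, and weighting are assumptions rather than the paper's exact losses, and the adversarial feature augmentation term is omitted.

    # Sketch: knowledge distillation on augmented views (assumed setup).
    import torch
    import torch.nn.functional as F

    def distill_on_augmented(student, teacher, x, augment, temperature=4.0):
        x_aug = augment(x)                      # sample-specific augmentation
        with torch.no_grad():
            t_logits = teacher(x_aug)
        s_logits = student(x_aug)
        # soften both distributions and match them with a KL term
        return F.kl_div(
            F.log_softmax(s_logits / temperature, dim=1),
            F.softmax(t_logits / temperature, dim=1),
            reduction="batchmean",
        ) * (temperature ** 2)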

Improving Robust Fairness via Balance Adversarial Training

no code implementations • 15 Sep 2022 • ChunYu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, Xianglong Liu, Aishan Liu

Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes, known as the robust fairness problem.

Fairness

Universal Backdoor Attacks Detection via Adaptive Adversarial Probe

no code implementations • 12 Sep 2022 • Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu

Most detection methods are designed to verify whether a model is infected with presumed types of backdoor attacks, yet in practice the adversary is likely to generate diverse backdoor attacks that are unforeseen by defenders, challenging current detection strategies.

Scheduling

Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection

1 code implementation • 23 Aug 2022 • Simin Li, Huangxinxin Xu, Jiakai Wang, Aishan Liu, Fazhi He, Xianglong Liu, DaCheng Tao

The threat of fingerprint leakage from social media raises a strong desire for anonymizing shared images while maintaining image quality, since fingerprints act as a lifelong individual biometric password.

Adversarial Attack

Defensive Patches for Robust Recognition in the Physical World

1 code implementation • CVPR 2022 • Jiakai Wang, Zixin Yin, Pengfei Hu, Aishan Liu, Renshuai Tao, Haotong Qin, Xianglong Liu, DaCheng Tao

For generalization against diverse noises, we inject class-specific identifiable patterns into a confined local patch prior, so that defensive patches preserve more recognizable features of specific classes, leading models to better recognition under noise.
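
A minimal sketch of the deployment side of this idea, assuming a PyTorch setup: a learned, class-specific patch is pasted into a fixed local region of the input before recognition. Patch size, placement, and how the patch itself is trained are assumptions for illustration only.

    # Sketch: paste a class-specific defensive patch onto a fixed image region.
    import torch

    def apply_defensive_patch(images, patch, top=0, left=0):
        # images: (B, C, H, W); patch: (C, h, w) learned for one target class
        out = images.clone()
        _, h, w = patch.shape
        out[:, :, top:top + h, left:left + w] = patch
        return out

    # usage: logits = model(apply_defensive_patch(x, patch_for_class_k))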

BiBERT: Accurate Fully Binarized BERT

1 code implementation • ICLR 2022 • Haotong Qin, Yifu Ding, Mingyuan Zhang, Qinghua Yan, Aishan Liu, Qingqing Dang, Ziwei Liu, Xianglong Liu

The large pre-trained BERT has achieved remarkable performance on Natural Language Processing (NLP) tasks but is also expensive in both computation and memory.

Binarization

Exploring Endogenous Shift for Cross-Domain Detection: A Large-Scale Benchmark and Perturbation Suppression Network

1 code implementation • CVPR 2022 • Renshuai Tao, Hainan Li, Tianbo Wang, Yanlu Wei, Yifu Ding, Bowei Jin, Hongping Zhi, Xianglong Liu, Aishan Liu

To handle the endogenous shift, we further introduce the Perturbation Suppression Network (PSN), motivated by the fact that this shift is mainly caused by two types of perturbations: category-dependent and category-independent ones.

Medical Diagnosis

Harnessing Perceptual Adversarial Patches for Crowd Counting

1 code implementation • 16 Sep 2021 • Shunchang Liu, Jiakai Wang, Aishan Liu, Yingwei Li, Yijie Gao, Xianglong Liu, DaCheng Tao

Crowd counting, which has been widely adopted for estimating the number of people in safety-critical scenes, is shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches).

Crowd Counting

RobustART: Benchmarking Robustness on Architecture Design and Training Techniques

1 code implementation • 11 Sep 2021 • Shiyu Tang, Ruihao Gong, Yan Wang, Aishan Liu, Jiakai Wang, Xinyun Chen, Fengwei Yu, Xianglong Liu, Dawn Song, Alan Yuille, Philip H. S. Torr, DaCheng Tao

Thus, we propose RobustART, the first comprehensive Robustness investigation benchmark on ImageNet regarding ARchitecture design (49 human-designed off-the-shelf architectures and 1200+ networks from neural architecture search) and Training techniques (10+ techniques, e.g., data augmentation) towards diverse noises (adversarial, natural, and system noises).

Adversarial Robustness • Benchmarking +2

ARShoe: Real-Time Augmented Reality Shoe Try-on System on Smartphones

no code implementations • 24 Aug 2021 • Shan An, Guangfu Che, Jinghao Guo, Haogang Zhu, Junjie Ye, Fangru Zhou, Zhaoqi Zhu, Dong Wei, Aishan Liu, Wei zhang

To address this concern, this work proposes a real-time augmented reality virtual shoe try-on system for smartphones, namely ARShoe.

Pose Estimation • Virtual Try-on

Over-sampling De-occlusion Attention Network for Prohibited Items Detection in Noisy X-ray Images

1 code implementation • 1 Mar 2021 • Renshuai Tao, Yanlu Wei, Hainan Li, Aishan Liu, Yifu Ding, Haotong Qin, Xianglong Liu

The images are gathered from an airport, and the prohibited items are annotated manually by professional inspectors; the dataset can serve as a benchmark for model training and facilitate future research.

object-detection • Object Detection

A Comprehensive Evaluation Framework for Deep Model Robustness

no code implementations • 24 Jan 2021 • Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xinghai Gao, Gang Xiao, Aishan Liu, Jian Dong, Xianglong Liu, Wenjun Wu

To mitigate this problem, we establish a model robustness evaluation framework containing 23 comprehensive and rigorous metrics, which consider two key perspectives of adversarial learning (i.e., data and model).

Adversarial Defense

Towards Defending Multiple $\ell_p$-norm Bounded Adversarial Perturbations via Gated Batch Normalization

1 code implementation • 3 Dec 2020 • Aishan Liu, Shiyu Tang, Xinyun Chen, Lei Huang, Haotong Qin, Xianglong Liu, DaCheng Tao

In this paper, we observe that different $\ell_p$ bounded adversarial perturbations induce different statistical properties that can be separated and characterized by the statistics of Batch Normalization (BN).
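
A minimal sketch of how such statistics could be separated in practice, assuming a PyTorch setup: one BatchNorm branch per perturbation domain, with a small gate that routes inputs between branches. The branch count, gate design, and routing rule below are illustrative assumptions, not the paper's exact Gated Batch Normalization layer.

    # Sketch: per-domain BatchNorm branches with a learned gate (assumed design).
    import torch
    import torch.nn as nn

    class GatedBatchNorm2d(nn.Module):
        def __init__(self, num_features, num_domains=3):
            super().__init__()
            # one BN branch per perturbation domain (e.g. clean, l_inf, l_2)
            self.branches = nn.ModuleList(
                nn.BatchNorm2d(num_features) for _ in range(num_domains)
            )
            # tiny gate that predicts the domain from per-channel statistics
            self.gate = nn.Linear(num_features, num_domains)

        def forward(self, x, domain=None):
            # training: pass the known perturbation-type index as `domain`
            if domain is None:
                # inference: route by the gate's majority prediction for the batch
                stats = x.mean(dim=(2, 3))
                domain = self.gate(stats).argmax(dim=1).mode().values.item()
            return self.branches[domain](x)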

On the Guaranteed Almost Equivalence between Imitation Learning from Observation and Demonstration

no code implementations • 16 Oct 2020 • Zhihao Cheng, Liu Liu, Aishan Liu, Hao Sun, Meng Fang, DaCheng Tao

By contrast, this paper proves that LfO is almost equivalent to LfD in deterministic robot environments, and more generally even in robot environments with bounded randomness.

Imitation Learning

Bias-based Universal Adversarial Patch Attack for Automatic Check-out

1 code implementation • ECCV 2020 • Aishan Liu, Jiakai Wang, Xianglong Liu, Bowen Cao, Chongzhi Zhang, Hang Yu

To address the problem, this paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability, which exploits both the perceptual and semantic bias of models.
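
For intuition, a simplified universal-patch sketch follows: a single class-agnostic patch is optimized over an entire data loader to maximize the model's loss. The paper's perceptual- and semantic-bias priors are not modeled here; patch size, placement, and optimizer settings are assumptions.

    # Sketch: image-agnostic adversarial patch trained over a whole loader.
    import torch
    import torch.nn.functional as F

    def train_universal_patch(model, loader, patch_hw=(50, 50), epochs=5, lr=0.05):
        patch = torch.rand(3, *patch_hw, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                x_adv = x.clone()
                # paste the shared patch into a fixed corner of every image
                x_adv[:, :, :patch_hw[0], :patch_hw[1]] = patch.clamp(0, 1)
                loss = -F.cross_entropy(model(x_adv), y)  # push predictions away
                opt.zero_grad()
                loss.backward()
                opt.step()
        return patch.detach().clamp(0, 1)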

Spatiotemporal Attacks for Embodied Agents

1 code implementation • ECCV 2020 • Aishan Liu, Tairan Huang, Xianglong Liu, Yitao Xu, Yuqing Ma, Xinyun Chen, Stephen J. Maybank, DaCheng Tao

Adversarial attacks are valuable for providing insights into the blind spots of deep learning models and for helping improve their robustness.

Navigate

Region-wise Generative Adversarial Image Inpainting for Large Missing Areas

1 code implementation • 27 Sep 2019 • Yuqing Ma, Xianglong Liu, Shihao Bai, Lei Wang, Aishan Liu, DaCheng Tao, Edwin Hancock

To address these problems, we propose a generic inpainting framework capable of handling incomplete images with both continuous and discontinuous large missing areas, in an adversarial manner.

Image Inpainting

Training Robust Deep Neural Networks via Adversarial Noise Propagation

no code implementations • 19 Sep 2019 • Aishan Liu, Xianglong Liu, Chongzhi Zhang, Hang Yu, Qiang Liu, DaCheng Tao

Various adversarial defense methods have accordingly been developed to improve adversarial robustness for deep models.

Adversarial Defense • Adversarial Robustness

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity

no code implementations • 16 Sep 2019 • Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li

In this paper, we first draw a close connection between adversarial robustness and neuron sensitivities, as sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting.

Adversarial Robustness • Decision Making
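
A hedged sketch of measuring neuron sensitivity: record a layer's activations on benign and adversarial inputs and take the mean absolute gap per neuron. Function and variable names are assumptions for illustration, not the paper's implementation.

    # Sketch: per-neuron activation gap between clean and adversarial inputs.
    import torch

    def neuron_sensitivity(model, layer, x_clean, x_adv):
        acts = {}
        hook = layer.register_forward_hook(
            lambda module, inputs, output: acts.__setitem__("a", output.detach())
        )
        with torch.no_grad():
            model(x_clean)
            a_clean = acts["a"]
            model(x_adv)
            a_adv = acts["a"]
        hook.remove()
        # average absolute change of each neuron's activation over the batch
        return (a_clean - a_adv).abs().flatten(1).mean(dim=0)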

PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks

no code implementations • 11 Sep 2019 • Hang Yu, Aishan Liu, Xianglong Liu, Gengchao Li, Ping Luo, Ran Cheng, Jichen Yang, Chongzhi Zhang

In other words, DNNs trained with PDA achieve greater robustness against both adversarial attacks and common corruptions than recent state-of-the-art methods.

Data Augmentation
