Search Results for author: You Qiaoben

Found 4 papers, 1 paper with code

Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation

1 code implementation • 12 Jun 2022 • Chengyang Ying, You Qiaoben, Xinning Zhou, Hang Su, Wenbo Ding, Jianyong Ai

Among different adversarial noises, universal adversarial perturbations (UAP), i.e., a constant image-agnostic perturbation applied to every input frame of the agent, play a critical role in Embodied Vision Navigation because they are computationally efficient and practical to apply during an attack.
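The defining property of a UAP described above is that one fixed perturbation is reused on every frame the agent sees. A minimal numpy sketch of that application step (not the paper's attack; `delta` here is a random stand-in for a learned perturbation, and the L-infinity budget `eps` is an assumed convention):

```python
import numpy as np

def apply_uap(frames, delta, eps=8 / 255):
    """Apply one fixed, image-agnostic perturbation to every frame.

    frames: array of shape (T, H, W, C) with pixel values in [0, 1]
    delta:  a single perturbation of shape (H, W, C); a hypothetical
            placeholder for a learned UAP
    """
    delta = np.clip(delta, -eps, eps)          # project onto the L-inf ball
    return np.clip(frames + delta, 0.0, 1.0)   # same delta added to all frames

# Usage with a random stand-in for a trained UAP
frames = np.random.rand(5, 4, 4, 3)
delta = np.random.uniform(-1, 1, size=(4, 4, 3))
adv = apply_uap(frames, delta)
```

Because the perturbation is computed once and merely broadcast over frames, the per-step attack cost at deployment time is a single addition, which is what makes UAPs computation-efficient.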

Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning

no code implementations • 30 Jun 2021 • You Qiaoben, Chengyang Ying, Xinning Zhou, Hang Su, Jun Zhu, Bo Zhang

In this paper, we provide a framework to better understand the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space.

reinforcement-learning • Reinforcement Learning (RL)

Strategically-timed State-Observation Attacks on Deep Reinforcement Learning Agents

no code implementations • ICML Workshop AML 2021 • You Qiaoben, Xinning Zhou, Chengyang Ying, Jun Zhu

Deep reinforcement learning (DRL) policies are vulnerable to adversarial attacks on their observations, which may mislead real-world RL agents into catastrophic failures.

Adversarial Attack • Continuous Control • +2
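To make the observation-attack setting above concrete, here is a minimal numpy sketch of a single FGSM-style step against a toy linear policy. This is an illustration of the general attack class, not the strategically-timed method of the paper; the linear policy `W`, the step size `eps`, and the helper name are all assumptions for the example:

```python
import numpy as np

def attack_observation(obs, W, eps=0.05):
    """One FGSM-style step on a hypothetical linear policy:
    perturb the observation to lower the score of the action
    the clean policy would pick.

    obs: observation vector of shape (d,)
    W:   policy weight matrix of shape (n_actions, d)
    """
    logits = W @ obs
    a = int(np.argmax(logits))           # action the clean policy prefers
    grad = W[a]                          # d logit_a / d obs for a linear policy
    adv_obs = obs - eps * np.sign(grad)  # signed step that reduces logit_a
    return adv_obs, a

# Usage on random toy data
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
obs = rng.normal(size=4)
adv_obs, a = attack_observation(obs, W)
```

For this linear policy the step provably decreases the chosen action's logit by `eps * sum(|W[a]|)`, which is the basic mechanism a timed attack can then choose to apply only at critical states.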

Composite Binary Decomposition Networks

no code implementations • 16 Nov 2018 • You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu

Binary neural networks offer great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared to their full-precision counterparts.

General Classification • Image Classification • +3
