no code implementations • ECCV 2020 • Yueru Li, Shuyu Cheng, Hang Su, Jun Zhu
Based on our investigation, we further present a new robust learning algorithm that encourages a larger gradient component in the tangent space of the data manifold, thereby suppressing the gradient leaking phenomenon.
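The tangential component the abstract refers to can be illustrated with a simple projection. The sketch below assumes an orthonormal basis for the tangent space is available (the paper itself does not specify this interface; the function name is hypothetical):

```python
import numpy as np

def tangent_component(grad, basis):
    """Project a gradient onto the tangent space spanned by the
    columns of `basis` (assumed orthonormal). A learning algorithm
    that penalizes the residual grad - tangent_component(grad, basis)
    encourages the gradient to lie in the tangent space."""
    return basis @ (basis.T @ grad)
```

For example, with the tangent space spanned by the first coordinate axis in 3D, only the first component of the gradient survives the projection.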
1 code implementation • 13 Mar 2022 • Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu
However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
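A standard way existing methods estimate gradients with only query access is antithetic Gaussian sampling (NES-style finite differences). The sketch below is illustrative of that baseline, not the paper's specific estimator:

```python
import numpy as np

def estimate_gradient(f, x, n_samples=50, sigma=0.01, rng=None):
    """Black-box gradient estimate of scalar objective f at x via
    antithetic Gaussian smoothing: average (f(x+su)-f(x-su))/(2s) * u
    over random directions u. Each sample costs two queries, which is
    why high-dimensional inputs make this query-expensive."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples
```

For a linear objective the per-sample finite difference is exact, so the estimator converges to the true gradient as the number of samples grows.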
1 code implementation • NeurIPS 2021 • Shuyu Cheng, Guoqiang Wu, Jun Zhu
Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.
no code implementations • 15 Sep 2020 • Chen Ma, Shuyu Cheng, Li Chen, Jun Zhu, Junhai Yong
In each iteration, SWITCH first tries to update the current sample along the direction of $\hat{\mathbf{g}}$, but considers switching to its opposite direction $-\hat{\mathbf{g}}$ if our algorithm detects that it does not increase the value of the attack objective function.
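The direction-switching step described above can be sketched as follows (a minimal illustration of the switching logic only; the actual SWITCH algorithm also projects onto the perturbation budget and handles step-size details not shown here):

```python
import numpy as np

def switch_step(x, g_hat, loss, step_size):
    """One SWITCH-style update: try moving along the surrogate
    gradient g_hat; if that does not increase the attack objective,
    switch to the opposite direction -g_hat instead."""
    candidate = x + step_size * g_hat
    if loss(candidate) > loss(x):
        return candidate
    return x - step_size * g_hat
```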
1 code implementation • AABI Symposium 2019 • Ziyu Wang, Shuyu Cheng, Yueru Li, Jun Zhu, Bo Zhang

Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative.
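The second-order derivative in question is the trace of the Jacobian of the score, which appears in the score matching objective. A common way to avoid forming that Jacobian is the Hutchinson trace estimator, sketched below (illustrative of the standard trick; the paper's own method may differ):

```python
import numpy as np

def hutchinson_trace(jvp, dim, n_samples=1000, rng=None):
    """Estimate tr(J) using only Jacobian-vector products:
    E[v^T J v] = tr(J) when E[v v^T] = I. This sidesteps the
    expensive second-order term in the score matching objective."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        total += v @ jvp(v)
    return total / n_samples
```

For a diagonal Jacobian the Rademacher estimator is exact on every sample, since the cross terms vanish.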
2 code implementations • NeurIPS 2019 • Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.
no code implementations • 29 Mar 2018 • Zhize Li, Tianyi Zhang, Shuyu Cheng, Jun Zhu, Jian Li
In this paper, we apply variance reduction techniques to Hamiltonian Monte Carlo and achieve better theoretical convergence results than variance-reduced Langevin dynamics.
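The kind of variance reduction typically plugged into stochastic-gradient MCMC is an SVRG-style estimator, sketched below (an illustrative example of the general technique, not necessarily the paper's exact construction):

```python
import numpy as np

def svrg_gradient(grad_fn, data, theta, theta_snap, full_grad_snap, idx):
    """SVRG-style variance-reduced stochastic gradient: correct the
    minibatch gradient at theta using the same minibatch evaluated at
    a snapshot point, plus the snapshot's precomputed full gradient.
    The estimator is unbiased and has lower variance near the snapshot."""
    batch = data[idx]
    return grad_fn(theta, batch) - grad_fn(theta_snap, batch) + full_grad_snap
```

For gradients that are linear in the parameter (e.g. a quadratic loss), the minibatch correction terms cancel exactly and the estimator equals the full gradient.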