Search Results for author: Shuyu Cheng

Found 7 papers, 4 papers with code

Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds

no code implementations ECCV 2020 Yueru Li, Shuyu Cheng, Hang Su, Jun Zhu

Based on our investigation, we further present a new robust learning algorithm which encourages a larger gradient component in the tangent space of the data manifold, thereby suppressing the gradient leaking phenomenon.
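The quantity this abstract refers to, the gradient component lying in the tangent space of the data manifold, can be illustrated with a small NumPy sketch. The orthonormal tangent basis `U` here is a hypothetical toy input (a 2-D plane in 3-D space); in the paper it would come from the embedded data manifold itself:

```python
import numpy as np

def tangent_grad_component(grad, U):
    """Project a gradient onto the tangent space spanned by the
    orthonormal columns of U, and return the tangent component
    together with its squared-norm share of the full gradient."""
    tangent = U @ (U.T @ grad)  # orthogonal projection onto span(U)
    share = np.dot(tangent, tangent) / np.dot(grad, grad)
    return tangent, share

# Toy example: 3-D ambient space, tangent space = the x-y plane.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
g = np.array([3.0, 4.0, 12.0])
t, share = tangent_grad_component(g, U)
```

A robust-training objective in the spirit of the abstract would reward a `share` close to 1, i.e. a gradient that mostly stays on the manifold instead of "leaking" into the normal directions.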

Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior

1 code implementation 13 Mar 2022 Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu

However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
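The gradient estimation problem the abstract mentions is typically attacked with random finite differences. The sketch below is a generic random gradient-free (RGF) estimator, not the paper's prior-guided variant; the query count `q` and smoothing radius `sigma` are illustrative values:

```python
import numpy as np

def rgf_estimate(f, x, q=20, sigma=1e-4, rng=None):
    """Estimate grad f(x) from q random finite differences:
    g ~ (1/q) * sum_i [(f(x + sigma*u_i) - f(x)) / sigma] * u_i,
    with each u_i drawn uniformly from the unit sphere."""
    rng = np.random.default_rng(rng)
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)  # uniform direction on the sphere
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / q
```

Each query reveals only a one-dimensional slice of the gradient, which is why, as the abstract notes, estimation in a high-dimensional input space with a limited query budget is hard: the number of queries needed grows with the dimension unless extra information (such as a transfer-based prior) is used.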

On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms

1 code implementation NeurIPS 2021 Shuyu Cheng, Guoqiang Wu, Jun Zhu

Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.

Switching Transferable Gradient Directions for Query-Efficient Black-Box Adversarial Attacks

no code implementations 15 Sep 2020 Chen Ma, Shuyu Cheng, Li Chen, Jun Zhu, Junhai Yong

In each iteration, SWITCH first tries to update the current sample along the direction of $\hat{\mathbf{g}}$, but considers switching to its opposite direction $-\hat{\mathbf{g}}$ if our algorithm detects that it does not increase the value of the attack objective function.
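The direction-switching step described above can be sketched in a few lines. This is a minimal abstraction: it omits the projection onto the perturbation budget and the query accounting of the full SWITCH algorithm, and the step size `lr` is a hypothetical parameter:

```python
import numpy as np

def switch_step(x, g_hat, loss, lr=0.1):
    """One SWITCH-style update: try stepping along the surrogate
    gradient g_hat; if the (queried) attack objective `loss` does
    not increase, switch to the opposite direction -g_hat."""
    base = loss(x)
    cand = x + lr * g_hat
    if loss(cand) > base:       # surrogate direction helps
        return cand
    return x - lr * g_hat       # otherwise take the opposite direction
```

The design rationale is query efficiency: checking the candidate costs a single extra query, whereas re-estimating a full gradient from scratch would cost many.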

Adversarial Attack

A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models

1 code implementation AABI Symposium 2019 Ziyu Wang, Shuyu Cheng, Yueru Li, Jun Zhu, Bo Zhang

Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative.

Improving Black-box Adversarial Attacks with a Transfer-based Prior

2 code implementations NeurIPS 2019 Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.

Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference

no code implementations 29 Mar 2018 Zhize Li, Tianyi Zhang, Shuyu Cheng, Jun Zhu, Jian Li

In this paper, we apply the variance reduction tricks on Hamiltonian Monte Carlo and achieve better theoretical convergence results compared with the variance-reduced Langevin dynamics.

Bayesian Inference
