Search Results for author: Qing Tao

Found 5 papers, 1 paper with code

Semismooth Newton Algorithm for Efficient Projections onto $\ell_{1, \infty}$-norm Ball

1 code implementation • ICML 2020 • Dejun Chu, Chang-Shui Zhang, Shiliang Sun, Qing Tao

The structured sparsity-inducing $\ell_{1, \infty}$-norm, a generalization of the classical $\ell_1$-norm, plays an important role in jointly sparse models, which select or remove all the variables in a group simultaneously.
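The group-wise behavior of this norm can be illustrated numerically. The following is a minimal sketch (not the paper's semismooth Newton projection): it computes $\|W\|_{1,\infty} = \sum_i \max_j |W_{ij}|$ for a weight matrix whose rows are variable groups, so shrinking a row's maximum drives the whole group toward zero. The example matrix and function name are illustrative.

```python
import numpy as np

def l1_inf_norm(W):
    # ||W||_{1,inf}: sum over rows (groups) of the largest absolute entry
    return np.abs(W).max(axis=1).sum()

# Rows index variable groups; the all-zero row is a removed group.
W = np.array([[1.0, -2.0],
              [0.0,  0.0],
              [3.0,  0.5]])
print(l1_inf_norm(W))  # 2 + 0 + 3 = 5.0
```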

Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks

no code implementations • 27 Jan 2023 • Wei Tao, Lei Bao, Sheng Long, Gaowei Wu, Qing Tao

However, for solving this induced optimization problem, state-of-the-art gradient-based methods such as FGSM, I-FGSM, and MI-FGSM look different from their original counterparts, especially in the update direction, which makes them difficult to understand and leaves several theoretical issues unaddressed from an optimization viewpoint.
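The common structure of these attacks can be made concrete. Below is a hedged sketch on a toy quadratic loss (not the paper's analysis): FGSM takes one signed-gradient step, I-FGSM iterates with a smaller step, and MI-FGSM adds a momentum accumulator before taking the sign. The toy loss, step sizes, and function names are illustrative assumptions.

```python
import numpy as np

def grad(x):
    # gradient of the toy loss 0.5 * ||x||^2 (illustrative stand-in
    # for the gradient of the attack objective w.r.t. the input)
    return x

def fgsm(x, eps):
    # one-shot signed-gradient step
    return x + eps * np.sign(grad(x))

def i_fgsm(x, eps, steps):
    # iterate the same signed step with step size eps / steps
    alpha = eps / steps
    for _ in range(steps):
        x = x + alpha * np.sign(grad(x))
    return x

def mi_fgsm(x, eps, steps, mu=1.0):
    # accumulate a momentum term on the L1-normalized gradient,
    # then step along its sign
    alpha, g = eps / steps, np.zeros_like(x)
    for _ in range(steps):
        gr = grad(x)
        g = mu * g + gr / np.abs(gr).sum()
        x = x + alpha * np.sign(g)
    return x

x0 = np.array([0.5, -0.25])
print(fgsm(x0, 0.1))  # [0.6, -0.35]: one signed step away from the origin
```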

The Role of Momentum Parameters in the Optimal Convergence of Adaptive Polyak's Heavy-ball Methods

no code implementations • ICLR 2021 • Wei Tao, Sheng Long, Gaowei Wu, Qing Tao

In this paper, we fill this theory-practice gap by investigating the convergence of the last iterate (referred to as individual convergence), which is a more difficult task than convergence analysis of the averaged solution.
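The distinction between the last iterate and the averaged solution can be seen on a simple example. This is a minimal sketch of Polyak's heavy-ball iteration on a one-dimensional quadratic, not the paper's adaptive variant; the step size and momentum parameter are illustrative.

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.1, beta=0.5, steps=100):
    # x_{t+1} = x_t - alpha * grad(x_t) + beta * (x_t - x_{t-1})
    x_prev, x = x0, x0
    iterates = []
    for _ in range(steps):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
        iterates.append(x)
    # last iterate ("individual" solution) vs uniform average of iterates
    return x, np.mean(iterates)

last, avg = heavy_ball(lambda x: 2 * x, 1.0)  # minimize f(x) = x^2
print(last, avg)  # both approach the minimizer 0
```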

Gradient Descent Averaging and Primal-dual Averaging for Strongly Convex Optimization

no code implementations • 29 Dec 2020 • Wei Tao, Wei Li, Zhisong Pan, Qing Tao

In order to remove this factor, we first develop gradient descent averaging (GDA), which is a general projection-based dual averaging algorithm in the strongly convex setting.
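The ingredients named in the abstract (projection, averaging, strongly convex setting) can be sketched as follows. This is a generic projected-gradient loop with the classical $1/(\mu t)$ step size and a running average of iterates, not the paper's GDA algorithm; the objective and the unit-ball projection are illustrative assumptions.

```python
import numpy as np

def projected_gd_average(grad, x0, mu, steps=200):
    # projected gradient descent with step size 1/(mu * t),
    # returning the last iterate and the uniform average of iterates
    x, avg = x0, np.zeros_like(x0)
    for t in range(1, steps + 1):
        x = x - grad(x) / (mu * t)            # strongly convex step size
        x = x / max(1.0, np.linalg.norm(x))   # project onto the unit ball
        avg += (x - avg) / t                  # running average of iterates
    return x, avg

# toy mu-strongly-convex objective f(x) = (x - 0.3)^2 with mu = 2
last, avg = projected_gd_average(lambda x: 2 * (x - 0.3), np.array([1.0]), mu=2.0)
print(last, avg)  # both near the minimizer 0.3
```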

Avoiding False Positive in Multi-Instance Learning

no code implementations • NeurIPS 2010 • Yanjun Han, Qing Tao, Jue Wang

In multi-instance learning, there are two kinds of prediction failure, i.e., false negatives and false positives.
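A bag-level false positive under the standard multi-instance assumption can be shown in a few lines. This sketch assumes the usual rule that a bag is positive iff at least one instance is classified positive; the threshold classifier and data are toy assumptions, not from the paper.

```python
def bag_predict(instances, instance_clf):
    # standard multi-instance rule: the bag is positive iff
    # any single instance is predicted positive
    return any(instance_clf(x) for x in instances)

clf = lambda x: x > 0.5          # toy instance-level classifier
neg_bag = [0.1, 0.2, 0.7]        # truly negative bag with one outlier instance
# one misclassified instance flips the whole bag: a bag-level false positive
print(bag_predict(neg_bag, clf))  # True
```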
