1 code implementation • ICML 2020 • Dejun Chu, Chang-Shui Zhang, Shiliang Sun, Qing Tao
The structured sparsity-inducing $\ell_{1, \infty}$-norm, a generalization of the classical $\ell_1$-norm, plays an important role in jointly sparse models, which simultaneously select or remove all the variables within a group.
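As a minimal sketch of the mixed norm itself (assuming, as is common, that the groups are the rows of a weight matrix; the paper's exact grouping may differ): the $\ell_{1,\infty}$-norm sums, over groups, the largest absolute entry in each group, so penalizing it drives entire rows to zero at once.

```python
import numpy as np

def l1_inf_norm(W):
    """Mixed l_{1,inf} norm of a matrix whose rows are the groups:
    the sum over groups of the largest absolute entry in that group."""
    return np.abs(W).max(axis=1).sum()

W = np.array([[1.0, -3.0],
              [0.0,  0.0],
              [2.0,  1.0]])
# row-wise maxima of |W| are 3, 0, 2, so l1_inf_norm(W) is 5
```

When every group contains a single variable, the mixed norm reduces to the ordinary $\ell_1$-norm, which is the sense in which it generalizes it.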
no code implementations • 27 Jan 2023 • Wei Tao, Lei Bao, Sheng Long, Gaowei Wu, Qing Tao
However, for solving this induced optimization problem, state-of-the-art gradient-based methods such as FGSM, I-FGSM, and MI-FGSM differ from their original counterparts, especially in the update direction. This makes them difficult to understand and leaves some theoretical issues to be addressed from an optimization viewpoint.
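The update-direction difference the abstract refers to can be made concrete. A minimal sketch of the three update rules on a raw gradient (toy form only; real attacks also clip into an $\epsilon$-ball and the valid pixel range):

```python
import numpy as np

def fgsm(x, grad, eps):
    # Fast Gradient Sign Method: a single step along the
    # sign of the loss gradient, not the gradient itself
    return x + eps * np.sign(grad)

def i_fgsm_step(x, grad, alpha):
    # iterative FGSM: repeat small sign-gradient steps
    return x + alpha * np.sign(grad)

def mi_fgsm_step(x, g_prev, grad, alpha, mu=1.0):
    # momentum iterative FGSM: accumulate an l1-normalized
    # gradient with decay factor mu, then step along its sign
    g = mu * g_prev + grad / (np.sum(np.abs(grad)) + 1e-12)
    return x + alpha * np.sign(g), g
```

In all three rules the direction is `sign(...)` of some accumulated quantity rather than the gradient, which is exactly what makes them look different from plain gradient ascent.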
no code implementations • ICLR 2021 • Wei Tao, Sheng Long, Gaowei Wu, Qing Tao
In this paper, we fill this theory-practice gap by investigating the convergence of the last iterate (referred to as individual convergence), which is a more difficult task than convergence analysis of the averaged solution.
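The distinction between the averaged solution and the last (individual) iterate can be seen on a toy nonsmooth problem. A minimal sketch, not the paper's algorithm: subgradient descent on $f(x)=|x|$, tracking both quantities.

```python
def run_subgradient(x0=2.0, steps=200):
    """Subgradient descent on f(x) = |x| with step size 1/t.
    Returns the last iterate and the running average of iterates."""
    x, avg = x0, 0.0
    for t in range(1, steps + 1):
        g = 1.0 if x > 0 else -1.0   # a subgradient of |x|
        x = x - g / t                # diminishing step size
        avg += (x - avg) / t         # running average of iterates
    return x, avg
```

The averaged solution is what classical convergence theory covers; proving a rate for the last iterate itself is the harder "individual convergence" question the paper addresses.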
no code implementations • 29 Dec 2020 • Wei Tao, Wei Li, Zhisong Pan, Qing Tao
In order to remove this factor, we first develop gradient descent averaging (GDA), which is a general projection-based dual averaging algorithm in the strongly convex setting.
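GDA itself is the paper's contribution; as a generic illustration of the projection-based dual-averaging template it builds on (not the authors' exact scheme), each step moves from the starting point along the sum of all past gradients and then projects back onto the feasible set.

```python
import numpy as np

def dual_averaging(grad, x0=0.0, steps=2000, beta=1.0, lo=-1.0, hi=1.0):
    """Scalar projection-based dual averaging on the interval [lo, hi]:
    accumulate all past gradients in z, step from x0 against z
    scaled by beta * sqrt(t), then project onto the interval."""
    z, x = 0.0, x0
    for t in range(1, steps + 1):
        z += grad(x)
        x = float(np.clip(x0 - z / (beta * np.sqrt(t)), lo, hi))
    return x

# minimize f(x) = (x - 0.5)^2 over [-1, 1]
x_star = dual_averaging(lambda x: 2.0 * (x - 0.5))
```

Unlike plain gradient descent, the iterate is always recomputed from `x0` and the whole gradient history, which is the structural feature dual-averaging analyses exploit.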
no code implementations • NeurIPS 2010 • Yanjun Han, Qing Tao, Jue Wang
In multi-instance learning, there are two kinds of prediction failure, i.e., false negatives and false positives.
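A minimal sketch of how the two failure modes arise at the bag level, assuming the standard multi-instance convention that a bag is positive iff at least one of its instances is (the paper's precise model may differ):

```python
def predict_bag(instance_scores, threshold=0.0):
    # standard multi-instance assumption: a bag is positive
    # iff at least one instance scores above the threshold
    return max(instance_scores) > threshold

def failure_counts(bags, labels, threshold=0.0):
    """Count the two kinds of bag-level prediction failure."""
    fn = fp = 0
    for scores, y in zip(bags, labels):
        pred = predict_bag(scores, threshold)
        if y and not pred:
            fn += 1   # false negative: a positive bag is missed
        elif pred and not y:
            fp += 1   # false positive: a negative bag is flagged
    return fn, fp
```

Because one high-scoring instance flips a whole bag to positive, the two error types are asymmetric: a false positive needs only a single misjudged instance, while a false negative requires missing every positive instance in the bag.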