no code implementations • ICML 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO
Over-parameterization is now ubiquitous in training neural networks: it benefits both optimization, by aiding the search for global optima, and generalization, by reducing prediction error.
1 code implementation • 12 Nov 2023 • Shengkun Zhu, Jinshan Zeng, Sheng Wang, Yuan Sun, Zhiyong Peng
Our experiments validate that FLAME, when trained on heterogeneous data, outperforms state-of-the-art methods in terms of model performance.
no code implementations • 26 Nov 2022 • Jie Zhou, Yefei Wang, Yiyang Yuan, Qing Huang, Jinshan Zeng
Numerical results show that the mode collapse issue of CycleGAN can be effectively alleviated by the proposed SGCE module, and that CycleGAN equipped with SGCE outperforms state-of-the-art models on four important evaluation metrics as well as in visualization quality.
no code implementations • 11 Nov 2022 • Jinshan Zeng, Yefei Wang, Qi Chen, Yunxin Liu, Mingwen Wang, Yuan YAO
The effectiveness of the proposed model for zero-shot traditional Chinese font generation is also evaluated in this paper.
no code implementations • 16 Oct 2022 • Jinshan Zeng, Ruiying Xu, Yu Wu, Hongwei Li, Jiaxing Lu
The proposed method consists of a training stage and an inference stage.
1 code implementation • 13 Sep 2022 • Ke Ma, Qianqian Xu, Jinshan Zeng, Guorong Li, Xiaochun Cao, Qingming Huang
From a dynamical-systems perspective, attack behavior with a target ranking list is a fixed point of the composition of the adversary and the victim.
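A rough sketch of this fixed-point view, with a hypothetical victim (a Bradley-Terry-style scorer) and a hypothetical budgeted adversary, neither of which is the paper's actual construction:

```python
import numpy as np

def victim(scores_init, comparisons, lr=0.1, steps=100):
    """Hypothetical victim: fits item scores to pairwise comparisons
    by gradient descent on a logistic (Bradley-Terry) loss."""
    s = scores_init.copy()
    for _ in range(steps):
        grad = np.zeros_like(s)
        for w, l in comparisons:
            p = 1.0 / (1.0 + np.exp(-(s[w] - s[l])))
            grad[w] -= (1.0 - p)
            grad[l] += (1.0 - p)
        s -= lr * grad
    return s

def adversary(comparisons, victim_scores, target, budget):
    """Hypothetical adversary: flips up to `budget` comparisons where
    the victim's current scores disagree with the target ranking."""
    out = comparisons.copy()
    used = 0
    for i, (w, l) in enumerate(out):
        if used == budget:
            break
        # w should rank worse than l under the target, but does not.
        if target[w] > target[l] and victim_scores[w] > victim_scores[l]:
            out[i] = [l, w]
            used += 1
    return out

# The attack is a fixed point of (adversary . victim): poisoned data
# that the adversary no longer needs to change.
comps = np.array([[0, 1], [1, 2], [0, 2], [2, 1]])
target = np.array([2, 1, 0])            # desired ranks (0 = best)
data = comps
for _ in range(20):
    s = victim(np.zeros(3), data)
    new_data = adversary(data, s, target, budget=2)
    if np.array_equal(new_data, data):  # fixed point reached
        break
    data = new_data
```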
1 code implementation • 11 Jun 2022 • Yunxin Liu, Qiaosi Yi, Jinshan Zeng
Beyond lightweight models, we also show that the suggested review mechanism can serve as a plug-and-play module that further boosts the performance of heavy crowd counting models, without modifying the network architecture or introducing any additional model parameters.
no code implementations • 29 Sep 2021 • Chang Wan, Yanwei Fu, Ke Fan, Jinshan Zeng, Ming Zhong, Riheng Jia, MingLu Li, ZhongLong Zheng
However, as training proceeds, the discriminator using logistic regression in the CFG framework finds it increasingly hard to discriminate between real and fake images.
1 code implementation • 5 Jul 2021 • Ke Ma, Qianqian Xu, Jinshan Zeng, Xiaochun Cao, Qingming Huang
In this paper we initiate, to the best of our knowledge, the first systematic investigation of data poisoning attacks on pairwise ranking algorithms. Such attacks can be formalized as dynamic and static games between the ranker and the attacker, and can be modeled as certain kinds of integer programming problems.
no code implementations • 21 Jan 2021 • Jinshan Zeng, Wotao Yin, Ding-Xuan Zhou
We modify ALM to use a Moreau envelope of the augmented Lagrangian and establish its convergence under conditions that are weaker than those in the literature.
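For concreteness, the standard Moreau-envelope construction looks as follows; the notation here is generic and may differ from the paper's:

```latex
% Augmented Lagrangian of  min_x f(x)  s.t.  Ax = b,  with multiplier y:
\mathcal{L}_{\rho}(x, y) = f(x) + \langle y,\, Ax - b \rangle
                           + \tfrac{\rho}{2}\|Ax - b\|^{2}.

% Moreau envelope of L_rho in x, with smoothing parameter mu > 0:
\mathcal{M}_{\mu}(\bar{x}, y)
  = \min_{x}\Big\{ \mathcal{L}_{\rho}(x, y)
                   + \tfrac{1}{2\mu}\|x - \bar{x}\|^{2} \Big\},

% which is smoother than L_rho itself and, when L_rho is convex in x,
% has the same minimizers in x.
```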
Optimization and Control
no code implementations • 1 Jan 2021 • Jinshan Zeng, Yixuan Zha, Ke Ma, Yuan YAO
In this paper, we fill this gap by exploiting a new semi-stochastic variant of the original SVRG with Option I, adapted to semidefinite optimization.
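For reference, a sketch of the vanilla SVRG template with Option I (Johnson and Zhang, 2013) that such a variant starts from; the semidefinite-specific modifications of the paper are omitted:

```python
import numpy as np

def svrg_option1(grad_i, x0, n, lr=0.01, epochs=20, m=None):
    """Vanilla SVRG with Option I: after each inner loop, the next
    snapshot is the *last* inner iterate (Option II would pick a
    random one). `grad_i(x, i)` returns the gradient of the i-th
    component function of an n-term finite-sum objective."""
    m = m or 2 * n                      # inner-loop length
    x_snap = x0.copy()
    for _ in range(epochs):
        # Full gradient at the snapshot point.
        full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        x = x_snap.copy()
        for _ in range(m):
            i = np.random.randint(n)
            # Variance-reduced stochastic gradient.
            v = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x = x - lr * v
        x_snap = x                      # Option I: take the last iterate
    return x_snap
```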
1 code implementation • 16 Dec 2020 • Jinshan Zeng, Qi Chen, Yunxin Liu, Mingwen Wang, Yuan YAO
However, these deep generative models may suffer from the mode collapse issue, which significantly degrades the diversity and quality of generated results.
1 code implementation • 4 Jul 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO
Over-parameterization is now ubiquitous in training neural networks: it benefits both optimization, by aiding the search for global optima, and generalization, by reducing prediction error.
no code implementations • 1 Apr 2020 • Jinshan Zeng, Min Zhang, Shao-Bo Lin
Boosting is a well-known method for improving the accuracy of weak learners in machine learning.
no code implementations • 1 Dec 2019 • Ke Ma, Jinshan Zeng, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan YAO
Learning representations from relative similarity comparisons, often called ordinal embedding, has attracted increasing attention in recent years.
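Stated compactly, ordinal embedding seeks points whose distances respect the observed comparisons (standard formulation, in generic notation):

```latex
% Given comparisons "items i, j are more similar than items k, l",
% find an embedding x_1, ..., x_n in R^d such that
\|x_i - x_j\| < \|x_k - x_l\|
  \quad \text{for all } (i, j, k, l) \in \mathcal{C},
% where C is the set of observed relative similarity comparisons.
```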
1 code implementation • 24 Nov 2019 • Jinshan Zeng, Minrun Wu, Shao-Bo Lin, Ding-Xuan Zhou
In the era of big data, it is desirable to develop efficient machine learning algorithms that tackle massive-data challenges such as storage bottlenecks, algorithmic scalability, and interpretability.
no code implementations • 25 Sep 2019 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO
Over-parameterization is now ubiquitous in training neural networks: it benefits both optimization, by aiding the search for global optima, and generalization, by reducing prediction error.
1 code implementation • 23 May 2019 • Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan YAO
To fill this gap, this paper proposes a new approach based on differential inclusions of inverse scale spaces, which generates a family of models from simple to complex along its dynamics by coupling a pair of parameters, so that over-parameterized deep models and their structural sparsity can be explored simultaneously.
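The discrete form of such inverse-scale-space dynamics is a linearized Bregman iteration; the following is the generic textbook template with a coupled sparse variable, not necessarily the paper's exact scheme:

```latex
% Coupled parameters: W (dense weights) and Gamma (sparse path variable),
% tied by an augmented loss  \bar{L}(W, \Gamma) = L(W) + (1/2\nu)\|W - \Gamma\|^2.
W_{k+1}      = W_k - \kappa\,\alpha\,\nabla_W \bar{L}(W_k, \Gamma_k),
z_{k+1}      = z_k - \alpha\,\nabla_\Gamma \bar{L}(W_k, \Gamma_k),
\Gamma_{k+1} = \kappa \cdot \operatorname{prox}_{\|\cdot\|_1}(z_{k+1}),
% so Gamma traces a path from all-zero (simple) to dense (complex) models.
```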
1 code implementation • 6 Feb 2019 • Jinshan Zeng, Shao-Bo Lin, Yuan YAO, Ding-Xuan Zhou
In this paper, we develop an alternating direction method of multipliers (ADMM) for training deep neural networks with sigmoid-type activation functions (called the \textit{sigmoid-ADMM pair}). The approach is mainly motivated by the gradient-free nature of ADMM, which avoids the saturation of sigmoid-type activations, and by the approximation advantages of deep neural networks with sigmoid-type activations (called deep sigmoid nets) over their rectified linear unit (ReLU) counterparts (called deep ReLU nets).
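As background, the generic ADMM iteration that such a training scheme instantiates is shown below in standard form; the paper's actual layer-wise splitting is more elaborate:

```latex
% ADMM for  min_{x,z}  f(x) + g(z)   s.t.  Ax + Bz = c:
x^{k+1} = \arg\min_{x}\; \mathcal{L}_{\rho}(x, z^{k}, y^{k}),
z^{k+1} = \arg\min_{z}\; \mathcal{L}_{\rho}(x^{k+1}, z, y^{k}),
y^{k+1} = y^{k} + \rho\,(A x^{k+1} + B z^{k+1} - c),
% with L_rho the augmented Lagrangian. No backpropagation through the
% activation is needed when each subproblem admits a closed-form or
% proximal solution, which is the gradient-free aspect referred to above.
```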
no code implementations • 24 Mar 2018 • Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao
Training deep neural networks (DNNs) efficiently is a challenge due to the highly nonconvex optimization involved.
2 code implementations • 1 Mar 2018 • Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan YAO
Deep learning has attracted extensive attention due to its great empirical success.
1 code implementation • 17 Nov 2017 • Ke Ma, Jinshan Zeng, Jiechao Xiong, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan YAO
Learning representations from relative similarity comparisons, often called ordinal embedding, has attracted increasing attention in recent years.
no code implementations • 28 Feb 2017 • Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang
This paper aims at a refined error analysis for binary classification using the support vector machine (SVM) with a Gaussian kernel and a convex loss.
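A minimal scikit-learn illustration of the setting being analyzed, i.e. an off-the-shelf Gaussian-kernel SVM (this only sets up the model, not the paper's refined error analysis):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Binary classification with a Gaussian (RBF) kernel SVM; the kernel
# width gamma and penalty C are the quantities a refined error
# analysis would relate to the sample size and data geometry.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```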
no code implementations • 30 Apr 2016 • Shaobo Lin, Jinshan Zeng, Xiaoqin Zhang
In this paper, we aim at developing scalable neural network-type learning systems.
no code implementations • 20 Apr 2016 • Lin Xu, Shao-Bo Lin, Jinshan Zeng, Xia Liu, Zongben Xu
In this paper, we find that SGD is not the unique greedy criterion and introduce a new greedy criterion, called "$\delta$-greedy threshold" for learning.
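The snippet does not define the criterion, so the following is only one plausible reading, stated as an assumption: a rule that accepts any atom whose correlation with the residual reaches a $\delta$ fraction of the best, rather than the single steepest-descent one:

```python
import numpy as np

def delta_greedy_select(D, residual, delta=0.9):
    """Hypothetical delta-greedy threshold rule (an assumed reading,
    not the paper's definition): instead of the single steepest-descent
    atom, accept any atom whose |correlation| with the residual is
    within a factor `delta` of the maximum. D has unit-norm columns."""
    corr = np.abs(D.T @ residual)
    threshold = delta * corr.max()
    candidates = np.flatnonzero(corr >= threshold)
    return int(np.random.choice(candidates))  # any thresholded atom works
```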
no code implementations • 7 Mar 2015 • Shaobo Lin, Xingping Sun, Zongben Xu, Jinshan Zeng
On the one hand, based on a worst-case learning rate analysis, we show that the regularization term in polynomial kernel regression is not necessary.
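To make the claim concrete, dropping the regularization term reduces polynomial kernel regression to plain least squares in the kernel-induced space (generic notation, not the paper's analysis):

```latex
% Polynomial kernel of degree s on R^d:
K(x, x') = (1 + \langle x, x' \rangle)^{s},

% kernel regression estimator from samples (x_i, y_i), i = 1..m:
f(x) = \sum_{i=1}^{m} a_i\, K(x, x_i), \qquad
\mathbf{a} = \arg\min_{\mathbf{a}} \sum_{i=1}^{m}
  \Big( y_i - \sum_{j=1}^{m} a_j K(x_i, x_j) \Big)^{2},

% i.e. the usual ridge penalty  \lambda \|f\|_K^2  is simply dropped.
```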
no code implementations • 13 Nov 2014 • Lin Xu, Shaobo Lin, Jinshan Zeng, Zongben Xu
Orthogonal greedy learning (OGL) is a stepwise learning scheme that, at each greedy step, adds a new atom from a dictionary via steepest gradient descent and builds the estimator by orthogonally projecting the target function onto the space spanned by the selected atoms.
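This scheme coincides with the classical orthogonal greedy algorithm, also known as orthogonal matching pursuit; a minimal numpy sketch:

```python
import numpy as np

def orthogonal_greedy(D, y, n_steps):
    """Orthogonal greedy learning / OMP: at each step pick the atom
    most correlated with the residual (the steepest-descent choice),
    then re-fit by orthogonal projection onto all selected atoms.
    D: (n_samples, n_atoms) dictionary with unit-norm columns."""
    residual = y.copy()
    selected, coef = [], None
    for _ in range(n_steps):
        # Steepest-descent atom: largest |inner product| with residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        selected.append(k)
        # Orthogonal projection of y onto the span of selected atoms.
        A = D[:, selected]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return selected, coef
```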
no code implementations • 19 Dec 2013 • Shaobo Lin, Jinshan Zeng, Jian Fang, Zongben Xu
Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and $l^q$ regularization schemes with $0<q<\infty$ are among the most widely used.
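In the regression setting, such schemes take the familiar penalized least-squares form (generic notation):

```latex
% l^q-regularized empirical risk minimization, 0 < q < infinity:
\hat{\beta} = \arg\min_{\beta}\;
  \frac{1}{m}\sum_{i=1}^{m} \big( y_i - \langle x_i, \beta \rangle \big)^{2}
  + \lambda \sum_{j} |\beta_j|^{q},

% q = 2 gives ridge regression, q = 1 the lasso, and 0 < q < 1
% nonconvex sparse schemes.
```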