Search Results for author: Jinshan Zeng

Found 28 papers, 11 papers with code

DessiLBI: Exploring Structural Sparsity on Deep Network via Differential Inclusion Paths

no code implementations • ICML 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO

Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.

Personalized Federated Learning via ADMM with Moreau Envelope

1 code implementation • 12 Nov 2023 • Shengkun Zhu, Jinshan Zeng, Sheng Wang, Yuan Sun, Zhiyong Peng

Our experiments validate that FLAME, when trained on heterogeneous data, outperforms state-of-the-art methods in terms of model performance.

Personalized Federated Learning

SGCE-Font: Skeleton Guided Channel Expansion for Chinese Font Generation

no code implementations • 26 Nov 2022 • Jie zhou, Yefei Wang, Yiyang Yuan, Qing Huang, Jinshan Zeng

Numerical results show that the mode collapse issue suffered by the well-known CycleGAN can be effectively alleviated by equipping it with the proposed SGCE module, and that the CycleGAN equipped with SGCE outperforms state-of-the-art models in terms of four important evaluation metrics and visualization quality.

Font Generation

StrokeGAN+: Few-Shot Semi-Supervised Chinese Font Generation with Stroke Encoding

no code implementations • 11 Nov 2022 • Jinshan Zeng, Yefei Wang, Qi Chen, Yunxin Liu, Mingwen Wang, Yuan YAO

The effectiveness of the proposed model for zero-shot traditional Chinese font generation is also evaluated in this paper.

Font Generation

A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game

1 code implementation • 13 Sep 2022 • Ke Ma, Qianqian Xu, Jinshan Zeng, Guorong Li, Xiaochun Cao, Qingming Huang

From a dynamical-systems perspective, the attack behavior with a target ranking list is a fixed point of the composition of the adversary and the victim (a sketch of this fixed-point view follows this entry).

Information Retrieval · Retrieval
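A hedged reading of the fixed-point statement above, in hypothetical notation (the maps $\mathcal{V}$, $\mathcal{A}$ and the variable $\delta$ are illustrative placeholders, not the paper's exact formulation): let $\mathcal{V}$ denote the victim's rank aggregation (e.g., HodgeRank or a spectral method) applied to the perturbed comparison data, and let $\mathcal{A}$ denote the adversary's best-response perturbation given the resulting ranking. A successful target attack can then be read as a fixed point of their composition:

```latex
\[
\delta^{\star} = \bigl(\mathcal{A} \circ \mathcal{V}\bigr)(\delta^{\star}),
\qquad \text{with } \mathcal{V}(\delta^{\star}) \text{ equal to the target ranking list.}
\]
```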

Reducing Capacity Gap in Knowledge Distillation with Review Mechanism for Crowd Counting

1 code implementation • 11 Jun 2022 • Yunxin Liu, Qiaosi Yi, Jinshan Zeng

Beyond the lightweight models, we also show that the proposed review mechanism can be used as a plug-and-play module to further boost the performance of heavy crowd counting models without modifying the network architecture or introducing any additional model parameters.

Computational Efficiency · Crowd Counting +1

An Improved Composite Functional Gradient Learning by Wasserstein Regularization for Generative adversarial networks

no code implementations • 29 Sep 2021 • Chang Wan, Yanwei Fu, Ke Fan, Jinshan Zeng, Ming Zhong, Riheng Jia, MingLu Li, ZhongLong Zheng

However, as training proceeds, the discriminator based on logistic regression in the CFG framework finds it increasingly hard to discriminate between real and fake images.

Image Generation · Regression

Poisoning Attack against Estimating from Pairwise Comparisons

1 code implementation • 5 Jul 2021 • Ke Ma, Qianqian Xu, Jinshan Zeng, Xiaochun Cao, Qingming Huang

In this paper, we initiate, to the best of our knowledge, the first systematic investigation of data poisoning attacks on pairwise ranking algorithms; these attacks can be formalized as dynamic and static games between the ranker and the attacker and modeled as certain kinds of integer programming problems (a generic game template is sketched after this entry).

Data Poisoning
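A generic, hedged template for such attacker–ranker games (all symbols here — the perturbation $\delta$, budget set $\Delta$, attacker utility $U$, and ranking loss $L$ — are illustrative placeholders rather than the paper's exact programs): the attacker chooses integer-valued modifications of the pairwise comparisons within a budget, anticipating the ranker's estimate on the poisoned data,

```latex
\[
\max_{\delta \in \Delta \cap \mathbb{Z}^{m}} \; U\!\bigl(\hat{\theta}(\delta)\bigr)
\quad \text{s.t.} \quad
\hat{\theta}(\delta) \in \arg\min_{\theta} \; L\bigl(\theta;\; C \oplus \delta\bigr),
\]
```

where $C \oplus \delta$ stands for the clean comparisons modified by the poisoning $\delta$; a sequential (leader–follower) reading gives a dynamic game, while a simultaneous-move reading gives a static one.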

Moreau Envelope Augmented Lagrangian Method for Nonconvex Optimization with Linear Constraints

no code implementations • 21 Jan 2021 • Jinshan Zeng, Wotao Yin, Ding-Xuan Zhou

We modify ALM to use a Moreau envelope of the augmented Lagrangian and establish its convergence under conditions that are weaker than those in the literature (the Moreau envelope is recalled after this entry).

Optimization and Control
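For reference, the Moreau envelope mentioned above is the standard construction: for a function $f$ and smoothing parameter $\lambda > 0$,

```latex
\[
M_{\lambda f}(x) \;=\; \min_{y}\,\Bigl\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^{2} \Bigr\},
\qquad
\operatorname{prox}_{\lambda f}(x) \;=\; \arg\min_{y}\,\Bigl\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^{2} \Bigr\}.
\]
```

The envelope is a smoothed surrogate that shares minimizers with $f$; how it is applied to the augmented Lagrangian within the modified ALM is specific to the paper and not reproduced here.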

On Stochastic Variance Reduced Gradient Method for Semidefinite Optimization

no code implementations • 1 Jan 2021 • Jinshan Zeng, Yixuan Zha, Ke Ma, Yuan YAO

In this paper, we fill this gap by exploiting a new semi-stochastic variant of the original SVRG with Option I, adapted to semidefinite optimization (a generic SVRG sketch follows this entry).

Computational Efficiency
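As background, here is a minimal sketch of the classical SVRG loop with Option I (the next snapshot is the last inner iterate), written for a generic least-squares objective rather than the paper's semidefinite formulation; the function name and toy problem are illustrative only.

```python
import numpy as np

def svrg_least_squares(A, b, n_epochs=20, inner_steps=None, lr=0.01, seed=0):
    """Classical SVRG (Johnson & Zhang, 2013) on f(w) = 1/(2n) * ||A w - b||^2.

    Option I: the next snapshot is the last inner iterate.
    Generic illustration only, not the semidefinite variant studied in the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = inner_steps or n              # inner iterations per epoch
    w_snap = np.zeros(d)              # snapshot point
    for _ in range(n_epochs):
        full_grad = A.T @ (A @ w_snap - b) / n   # full gradient at the snapshot
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)                      # sample one data point
            gi = A[i] * (A[i] @ w - b[i])            # stochastic gradient at w
            gi_snap = A[i] * (A[i] @ w_snap - b[i])  # same sample at the snapshot
            v = gi - gi_snap + full_grad             # variance-reduced gradient
            w = w - lr * v
        w_snap = w                                   # Option I: last iterate becomes snapshot
    return w_snap

# Tiny usage example on synthetic data
A = np.random.default_rng(1).normal(size=(200, 10))
w_true = np.arange(10, dtype=float)
b = A @ w_true
w_hat = svrg_least_squares(A, b, n_epochs=30, lr=0.05)
```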

StrokeGAN: Reducing Mode Collapse in Chinese Font Generation via Stroke Encoding

1 code implementation • 16 Dec 2020 • Jinshan Zeng, Qi Chen, Yunxin Liu, Mingwen Wang, Yuan YAO

However, these deep generative models may suffer from the mode collapse issue, which significantly degrades the diversity and quality of generated results.

Font Generation

DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths

1 code implementation • 4 Jul 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO

Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.

Fast Stochastic Ordinal Embedding with Variance Reduction and Adaptive Step Size

no code implementations • 1 Dec 2019 • Ke Ma, Jinshan Zeng, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan YAO

Learning representations from relative similarity comparisons, often called ordinal embedding, has gained increasing attention in recent years.
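For readers new to the setting, the standard constraint encoded by such relative comparisons is (generic notation, not the paper's specific loss): given a triplet $(i, j, k)$ stating that item $i$ is more similar to $j$ than to $k$, the embedding $\{x_i\} \subset \mathbb{R}^{p}$ should satisfy

```latex
\[
\|x_i - x_j\|^{2} \;<\; \|x_i - x_k\|^{2},
\]
```

and in practice one minimizes a surrogate loss (e.g., hinge or logistic) over violations of these inequalities.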

Fast Polynomial Kernel Classification for Massive Data

1 code implementation • 24 Nov 2019 • Jinshan Zeng, Minrun Wu, Shao-Bo Lin, Ding-Xuan Zhou

In the era of big data, it is desired to develop efficient machine learning algorithms to tackle massive data challenges such as storage bottleneck, algorithmic scalability, and interpretability.

Classification · General Classification

Split LBI for Deep Learning: Structural Sparsity via Differential Inclusion Paths

no code implementations • 25 Sep 2019 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO

Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.

Exploring Structural Sparsity of Deep Networks via Inverse Scale Spaces

1 code implementation • 23 May 2019 • Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan YAO

To fill this gap, this paper proposes a new approach based on differential inclusions of inverse scale spaces, which generate a family of models from simple to complex along the dynamics by coupling a pair of parameters, so that over-parameterized deep models and their structural sparsity can be explored simultaneously.
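To make the inverse-scale-space idea concrete, below is a minimal numpy sketch of the classical linearized Bregman iteration on a sparse linear regression toy problem; it generates a path of estimates from very sparse to dense by coupling a dense dual variable with its sparse proximal image. This illustrates the underlying dynamics only, not the paper's DessiLBI update for deep networks; all names and step sizes are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman_path(A, b, alpha=0.01, kappa=10.0, n_iters=2000):
    """Classical linearized Bregman iteration for sparse linear regression.

    Produces a regularization path from very sparse to dense estimates
    (an inverse scale space); generic illustration only.
    """
    n, d = A.shape
    z = np.zeros(d)               # accumulated-gradient ("dual") variable
    x = np.zeros(d)               # sparse estimate, x = kappa * prox_l1(z)
    path = []
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b) / n
        z = z - alpha * grad                  # gradient step on the dual variable
        x = kappa * soft_threshold(z, 1.0)    # sparse primal via the l1 prox
        path.append(x.copy())
    return np.array(path)

# Tiny usage example: 3-sparse ground truth
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true
path = linearized_bregman_path(A, b)
```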

On ADMM in Deep Learning: Convergence and Saturation-Avoidance

1 code implementation • 6 Feb 2019 • Jinshan Zeng, Shao-Bo Lin, Yuan YAO, Ding-Xuan Zhou

In this paper, we develop an alternating direction method of multipliers (ADMM) for training deep neural networks with sigmoid-type activation functions (called the sigmoid-ADMM pair). The approach is mainly motivated by the gradient-free nature of ADMM, which avoids the saturation of sigmoid-type activations, and by the approximation advantages of deep neural networks with sigmoid-type activations (deep sigmoid nets) over their rectified linear unit counterparts (deep ReLU nets).
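For orientation, the generic scaled-form ADMM iteration for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$ reads as below; the paper's sigmoid-ADMM splitting of the network variables is more involved and is not reproduced here.

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|^{2},\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|^{2},\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```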

A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training

no code implementations • 24 Mar 2018 • Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao

Training deep neural networks (DNNs) efficiently is a challenge due to the associated highly nonconvex optimization.
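A generic proximal block update of the kind such block coordinate descent methods cycle through (block $x_i$ with regularizer $r_i$ and step size $\gamma$; the notation is illustrative and not the paper's exact splitting of the network variables):

```latex
\[
x_i^{k+1} \;=\; \operatorname{prox}_{\gamma r_i}\!\Bigl(
x_i^{k} - \gamma\, \nabla_{x_i} F\bigl(x_1^{k+1},\dots,x_{i-1}^{k+1},\, x_i^{k},\, x_{i+1}^{k},\dots,x_N^{k}\bigr)
\Bigr).
\]
```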

Global Convergence of Block Coordinate Descent in Deep Learning

2 code implementations • 1 Mar 2018 • Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan YAO

Deep learning has attracted extensive attention due to its great empirical success.

Stochastic Non-convex Ordinal Embedding with Stabilized Barzilai-Borwein Step Size

1 code implementation • 17 Nov 2017 • Ke Ma, Jinshan Zeng, Jiechao Xiong, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan YAO

Learning representations from relative similarity comparisons, often called ordinal embedding, has gained increasing attention in recent years.
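For reference, the classical Barzilai–Borwein step sizes that a stabilized rule builds on are, with $s_k = x_k - x_{k-1}$ and $y_k = \nabla f(x_k) - \nabla f(x_{k-1})$ (the paper's stabilized variant modifies these; only the classical formulas are shown):

```latex
\[
\alpha_k^{\mathrm{BB1}} \;=\; \frac{s_k^{\top} s_k}{s_k^{\top} y_k},
\qquad
\alpha_k^{\mathrm{BB2}} \;=\; \frac{s_k^{\top} y_k}{y_k^{\top} y_k}.
\]
```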

Learning rates for classification with Gaussian kernels

no code implementations • 28 Feb 2017 • Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang

This paper aims at a refined error analysis for binary classification using a support vector machine (SVM) with a Gaussian kernel and a convex loss (the kernel is recalled after this entry).

Binary Classification · Classification +2
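The Gaussian kernel referred to above is the standard radial basis function kernel with bandwidth $\sigma > 0$ (up to the bandwidth convention):

```latex
\[
K_{\sigma}(x, x') \;=\; \exp\!\Bigl(-\frac{\|x - x'\|^{2}}{2\sigma^{2}}\Bigr).
\]
```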

Constructive neural network learning

no code implementations • 30 Apr 2016 • Shaobo Lin, Jinshan Zeng, Xiaoqin Zhang

In this paper, we aim at developing scalable neural network-type learning systems.

Greedy Criterion in Orthogonal Greedy Learning

no code implementations • 20 Apr 2016 • Lin Xu, Shao-Bo Lin, Jinshan Zeng, Xia Liu, Zongben Xu

In this paper, we find that steepest gradient descent (SGD) is not the unique greedy criterion, and we introduce a new greedy criterion, called the "$\delta$-greedy threshold", for learning.

Model selection of polynomial kernel regression

no code implementations • 7 Mar 2015 • Shaobo Lin, Xingping Sun, Zongben Xu, Jinshan Zeng

On the one hand, based on a worst-case learning rate analysis, we show that the regularization term in polynomial kernel regression is not necessary (the polynomial kernel is recalled after this entry).

Model Selection · Regression
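The polynomial kernel in question has the standard form of degree $d$ with offset $c \ge 0$ (the paper's precise normalization may differ):

```latex
\[
K_{d}(x, x') \;=\; \bigl(x^{\top} x' + c\bigr)^{d}.
\]
```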

Greedy metrics in orthogonal greedy learning

no code implementations • 13 Nov 2014 • Lin Xu, Shaobo Lin, Jinshan Zeng, Zongben Xu

Orthogonal greedy learning (OGL) is a stepwise learning scheme that, in each greedy step, adds a new atom from a dictionary via steepest gradient descent and builds the estimator by orthogonally projecting the target function onto the space spanned by the selected atoms (a generic sketch follows this entry).

Model Selection
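Below is a minimal numpy sketch of the orthogonal greedy template described above: each step selects the atom most correlated with the current residual and then refits by least squares (an orthogonal projection) onto all selected atoms. This is the classical orthogonal-matching-pursuit-style scheme, not the specific greedy metrics proposed in the paper; names and the toy example are illustrative.

```python
import numpy as np

def orthogonal_greedy(D, y, n_steps=5):
    """Greedy selection over a dictionary D (columns = atoms) to fit target y.

    Each step adds the atom with the largest correlation to the current
    residual, then rebuilds the estimator by least squares (orthogonal
    projection) onto the span of all selected atoms.
    """
    n, p = D.shape
    selected = []
    residual = y.copy()
    coef = np.zeros(p)
    for _ in range(n_steps):
        correlations = D.T @ residual              # correlation / steepest-descent criterion
        j = int(np.argmax(np.abs(correlations)))   # most correlated atom
        if j not in selected:
            selected.append(j)
        sub = D[:, selected]
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)  # orthogonal projection onto the span
        coef = np.zeros(p)
        coef[selected] = c
        residual = y - sub @ c
    return coef, selected

# Tiny usage example: y is a combination of 2 atoms out of 30
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 30))
y = 2.0 * D[:, 3] - 1.0 * D[:, 17]
coef, picked = orthogonal_greedy(D, y, n_steps=4)
```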

Learning rates of $l^q$ coefficient regularization learning with Gaussian kernel

no code implementations • 19 Dec 2013 • Shaobo Lin, Jinshan Zeng, Jian Fang, Zongben Xu

Regularization is a well-recognized and powerful strategy for improving the performance of a learning machine, and $l^q$ regularization schemes with $0<q<\infty$ are in widespread use (a generic form is sketched after this entry).

Learning Theory
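A hedged sketch of the generic $l^{q}$ coefficient-regularization scheme this refers to, for a sample $\{(x_i, y_i)\}_{i=1}^{n}$, Gaussian kernel $K_{\sigma}$, and regularization parameter $\lambda > 0$ (normalizations may differ from the paper):

```latex
\[
f_{z} \;=\; \sum_{j=1}^{n} \hat{c}_{j}\, K_{\sigma}(x_j, \cdot),
\qquad
\hat{c} \;=\; \arg\min_{c \in \mathbb{R}^{n}}\;
\frac{1}{n} \sum_{i=1}^{n} \Bigl( \sum_{j=1}^{n} c_j K_{\sigma}(x_i, x_j) - y_i \Bigr)^{2}
\;+\; \lambda \sum_{j=1}^{n} |c_j|^{q}.
\]
```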
