1 code implementation • 26 Mar 2024 • Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Zhiping Lin
The compensation stream is governed by a Dual-Activation Compensation (DAC) module.
1 code implementation • CVPR 2023 • Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, Ziqian Zeng
In this paper, we approach few-shot class-incremental learning (FSCIL) by adopting analytic learning, a technique that converts network training into linear problems.
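The core idea of recasting network training as a linear problem can be sketched as a closed-form (ridge-regression) classifier fitted on backbone features; the function and parameter names below are illustrative, not taken from the paper.

```python
import numpy as np

def analytic_fit(features, labels, num_classes, reg=1.0):
    """Fit a linear classifier in one closed-form step (ridge regression on
    one-hot targets) instead of iterative gradient-based training.
    `features` are assumed to come from a frozen backbone network."""
    d = features.shape[1]
    Y = np.eye(num_classes)[labels]                    # one-hot targets
    return np.linalg.solve(features.T @ features + reg * np.eye(d),
                           features.T @ Y)             # d x num_classes weights

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16))                 # 100 samples, 16-dim features
labels = rng.integers(0, 3, size=100)                  # 3 classes
W = analytic_fit(feats, labels, num_classes=3)
preds = np.argmax(feats @ W, axis=1)                   # predicted class per sample
```

Because the fit is a single linear solve, it avoids repeated passes over the data, which is what makes the analytic route attractive in incremental settings.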
1 code implementation • 30 May 2022 • Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, Zhiping Lin
Class-incremental learning (CIL) trains a classification model on data from different classes that arrive progressively.
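As a minimal illustration of the setting (not the paper's method), a class-incremental protocol splits the label space into phases and exposes only each phase's new classes during training:

```python
import numpy as np

labels = np.repeat(np.arange(6), 10)               # toy dataset: 6 classes, 10 samples each
phases = [range(0, 2), range(2, 4), range(4, 6)]   # 2 new classes arrive per phase

seen = []
for t, new_classes in enumerate(phases):
    mask = np.isin(labels, list(new_classes))      # only the new classes' data is available
    seen.extend(new_classes)
    # after phase t, the model must classify every class in `seen`,
    # even though this phase's training touched only `mask`-selected data
```

The difficulty of CIL comes from the last comment: the model is evaluated on all classes seen so far while old classes' data is no longer accessible.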
no code implementations • 18 Mar 2022 • Kar-Ann Toh, Giuseppe Molteni, Zhiping Lin
While the primal form suits low-dimensional problems with a large number of data samples, the dual form suits high-dimensional problems with a small number of data samples.
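The trade-off comes down to which Gram matrix gets inverted: a d×d matrix in the primal versus an n×n matrix in the dual, with both forms yielding the same weights. A minimal ridge-regression sketch (illustrative names, regularizer chosen arbitrarily):

```python
import numpy as np

def primal_solution(X, y, lam=1e-3):
    """Primal form: invert a d x d matrix -- cheap when the dimension d is small."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def dual_solution(X, y, lam=1e-3):
    """Dual form: invert an n x n matrix -- cheap when the sample count n is small."""
    n = X.shape[0]
    return X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))     # n=5 samples, d=200 dimensions: dual is cheaper
y = rng.standard_normal(5)
w_primal = primal_solution(X, y)
w_dual = dual_solution(X, y)
# both forms recover the same weight vector
```

Here the dual form inverts a 5×5 matrix instead of a 200×200 one, which is exactly the high-dimension, few-samples regime the sentence describes.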
no code implementations • 14 Feb 2022 • Huiping Zhuang, Zhiping Lin, Yimin Yang, Kar-Ann Toh
Training convolutional neural networks (CNNs) with back-propagation (BP) is time-consuming and resource-intensive, particularly given the need to visit the dataset multiple times.
no code implementations • 16 Feb 2021 • Zhao Kang, Zhiping Lin, Xiaofeng Zhu, Wenbo Xu
Extensive experiments demonstrate the efficiency and effectiveness of our approach with respect to many state-of-the-art clustering methods.
no code implementations • 3 Dec 2020 • Huiping Zhuang, Zhiping Lin, Kar-Ann Toh
Decoupled learning is a branch of model parallelism which parallelizes the training of a network by splitting it depth-wise into multiple modules.
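One way to make depth-wise modules trainable without a full end-to-end backward pass is to give each module its own local objective. The two-module numpy sketch below uses auxiliary local losses, which is one common decoupling strategy and not necessarily the exact scheme of this paper; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))
y = rng.standard_normal((64, 1))

# Two depth-wise modules; each has a purely local loss, so no gradient
# ever flows between modules and they could be trained in parallel.
W1 = rng.standard_normal((8, 16)) * 0.1
A1 = rng.standard_normal((16, 1)) * 0.1   # auxiliary readout head for module 1
W2 = rng.standard_normal((16, 1)) * 0.1   # module 2 (the real output layer)

init_loss = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
lr = 0.01
for _ in range(200):
    h = np.tanh(X @ W1)                       # module 1 forward
    e1 = h @ A1 - y                           # module 1's local error via its own head
    gA1 = h.T @ e1 / len(X)
    gW1 = X.T @ ((e1 @ A1.T) * (1 - h ** 2)) / len(X)
    e2 = h @ W2 - y                           # module 2 trains on a detached copy of h
    gW2 = h.T @ e2 / len(X)
    W1 -= lr * gW1; A1 -= lr * gA1; W2 -= lr * gW2

final_loss = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```

The end-to-end loss still decreases even though module 1 never receives a signal from module 2, which is the property decoupled learning exploits for model parallelism.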
no code implementations • 4 Feb 2020 • Wenyang Hu, Xiaocong Cai, Jun Hou, Shuai Yi, Zhiping Lin
Extensive experiments on standard benchmarks demonstrate that our end-to-end model achieves a new state of the art for regular and irregular scene text recognition, with an inference time six times shorter than that of attention-based methods.
1 code implementation • 21 Jun 2019 • Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin
Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients, which forces the network modules to work in a synchronous fashion.
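The cost of this synchrony is what delayed-gradient schemes relax: a module may update using a gradient computed from parameters several steps old, rather than waiting for the current backward pass. A toy quadratic sketch of a delay-k update (illustrative values; not the paper's algorithm):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda w: A.T @ (A @ w - b) / len(b)   # gradient of the quadratic loss

k, lr = 2, 0.05
w = np.zeros(5)
stale = deque([w.copy() for _ in range(k)], maxlen=k)
init_loss = float(np.mean(b ** 2))            # loss at w = 0
for _ in range(300):
    g = grad(stale[0])                        # gradient from k-steps-old parameters
    stale.append(w.copy())                    # record the current parameters
    w = w - lr * g                            # update without waiting for a fresh gradient

final_loss = float(np.mean((A @ w - b) ** 2))
```

With a small enough step size the stale gradient still drives the loss down, which is why modules can run asynchronously at the price of a bounded delay.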
no code implementations • 27 Oct 2018 • Kar-Ann Toh, Zhiping Lin, Zhengguo Li, Beomseok Oh, Lei Sun
In this article, we show that solving the system of linear equations by manipulating the kernel and the range space is equivalent to solving the problem of least squares error approximation.
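This equivalence can be checked numerically: building the solution from the range space via the SVD gives the same vector as minimizing the squared error through the normal equations. The sketch below uses an arbitrary full-column-rank system; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))   # overdetermined: generally no exact solution
b = rng.standard_normal(10)

# Least-squares view: minimize ||Ax - b||^2 via the normal equations
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Range-space view: project b onto range(A), then solve exactly via the SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
b_range = U @ (U.T @ b)            # component of b lying in the range of A
x_rs = Vt.T @ ((U.T @ b) / s)      # pseudoinverse solution built from the SVD

# x_ls and x_rs coincide, and A @ x_rs reproduces the range-space projection
```

Both routes land on the same point because the least-squares residual is exactly the component of b outside the range of A.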
no code implementations • 9 Jun 2018 • Kar-Ann Toh, Lei Sun, Zhiping Lin
This paper introduces and studies an extension of regularized least squares in which the estimation parameters are stretchable.
no code implementations • 6 Oct 2016 • Siyuan Peng, Badong Chen, Lei Sun, Zhiping Lin, Wee Ser
Most existing constrained adaptive filtering algorithms are developed under the mean square error (MSE) criterion, which is an ideal optimality criterion under Gaussian noise.
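A representative MSE-criterion baseline of this kind is Frost's constrained LMS, which enforces a linear constraint Cᵀw = f at every step by projecting the LMS update back onto the constraint set. The toy setup below (sum-to-one constraint, synthetic signal, arbitrary step size) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                      # filter length
C = np.ones((L, 1))                        # linear constraint matrix ...
f = np.array([1.0])                        # ... enforcing sum(w) == 1
P = np.eye(L) - C @ np.linalg.solve(C.T @ C, C.T)             # projector onto null(C^T)
F = (C @ np.linalg.solve(C.T @ C, f.reshape(-1, 1))).ravel()  # feasible offset vector

w = F.copy()                               # start from a feasible point
mu = 0.01                                  # step size
for _ in range(500):
    x = rng.standard_normal(L)             # input regressor
    d = 0.5 * x[0] + 0.1 * rng.standard_normal()   # noisy desired response
    e = d - w @ x                          # a priori error (MSE criterion)
    w = P @ (w + mu * e * x) + F           # projected (constrained) LMS update

# the constraint C^T w = f holds exactly after every update
```

Because the raw MSE gradient step `mu * e * x` drives the whole adaptation, the algorithm's behavior degrades under heavy-tailed, non-Gaussian noise, which is the limitation the sentence points at.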