Non-iterative recomputation of dense layers for performance improvement of DCNN

14 Sep 2018 · Yimin Yang, Q. M. Jonathan Wu, Xiexing Feng, Thangarajah Akilan

Iterative learning has become the standard paradigm for training deep convolutional neural networks (DCNNs). However, a non-iterative learning strategy can accelerate the training process of a DCNN, and such approaches have, surprisingly, been rarely explored by the deep learning (DL) community. This motivates us to introduce a non-iterative learning strategy that eliminates backpropagation (BP) at the top dense, or fully connected (FC), layers of a DCNN, resulting in lower training time and higher performance. The proposed method exploits the Moore-Penrose inverse to pull back the current residual error to each FC layer, generating well-generalized features. The weights of each FC layer are then recomputed from these generalized features, again using the Moore-Penrose inverse. We evaluate the proposed approach on six widely used object recognition benchmark datasets: Scene-15, CIFAR-10, CIFAR-100, SUN-397, Places365, and ImageNet. The experimental results show that the proposed method obtains significant improvements over 30 state-of-the-art methods. Interestingly, they also indicate that any DCNN trained with the proposed method can outperform the same network trained with its original BP-based procedure.
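At its core, the idea is to solve each dense layer in closed form rather than train it by gradient descent. The NumPy sketch below illustrates the two ingredients the abstract names: computing a layer's weights as a least-squares solution via the Moore-Penrose pseudoinverse, and pulling targets back through a layer. It is a minimal sketch under assumptions of our own (a tanh activation, one-hot targets), not the paper's exact algorithm; the function names are hypothetical.

```python
import numpy as np

def recompute_fc_weights(H, T):
    """One-shot least-squares solution of T ~ H @ W via the
    Moore-Penrose pseudoinverse -- no backpropagation involved.

    H : (n_samples, n_features) activations feeding the FC layer
    T : (n_samples, n_classes)  target matrix (e.g. one-hot labels)
    """
    return np.linalg.pinv(H) @ T          # shape: (n_features, n_classes)

def pull_back_targets(T, W):
    """Pull desired outputs T back through an FC layer with weights W,
    recovering input features that would approximately produce them.
    Assumes a tanh activation, so its inverse is arctanh; targets are
    clipped to keep arctanh finite. (Hypothetical helper, not the
    paper's exact residual pull-back.)
    """
    T_safe = np.clip(T, -1 + 1e-6, 1 - 1e-6)
    return np.arctanh(T_safe) @ np.linalg.pinv(W)

# Toy usage with random stand-ins for pooled CNN features.
rng = np.random.default_rng(0)
H = rng.standard_normal((1000, 512))            # backbone features
T = np.eye(10)[rng.integers(0, 10, size=1000)]  # one-hot labels
W = recompute_fc_weights(H, T)                  # closed-form FC weights
H_prev = pull_back_targets(T, W)                # targets for the layer below
```

Because the weights come from a single pseudoinverse solve instead of many gradient steps, the FC layers can be fit in one pass over the extracted features, which is the source of the training-time savings the abstract claims.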
