Improving the Backpropagation Algorithm with Consequentialism Weight Updates over Mini-Batches

11 Mar 2020 · Naeem Paeedeh, Kamaledin Ghiasi-Shirazi

Many attempts have been made to improve adaptive filters, and those improvements can also be useful for improving backpropagation (BP). Normalized least mean squares (NLMS) is one of the most successful algorithms derived from least mean squares (LMS); however, it had not previously been extended to multi-layer neural networks. Here, we first show that a multi-layer neural network can be viewed as a stack of adaptive filters. Additionally, for a single fully-connected (FC) layer, we introduce interpretations of NLMS that are more comprehensible than the complicated geometric interpretation of the affine projection algorithm (APA); these interpretations generalize easily to, for instance, convolutional neural networks and work better with mini-batch training. With this new viewpoint, we introduce a better algorithm that predicts and then amends the adverse consequences of the weight updates performed by BP before they happen. Finally, the proposed method is compatible with stochastic gradient descent (SGD) and applicable to momentum-based variants such as RMSProp, Adam, and NAG. Our experiments show the usefulness of our algorithm in the training of deep neural networks.
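
For reference, below is a minimal sketch of the standard NLMS rule for a single linear neuron (adaptive filter), the building block the abstract starts from. This is not the paper's consequentialism update; the function name, step size, and the toy identification loop are illustrative assumptions.

```python
import numpy as np

def nlms_update(w, x, d, mu=0.5, eps=1e-8):
    """One step of the standard NLMS rule for a single linear filter.

    w : (n,) weight vector, x : (n,) input, d : scalar desired output.
    The LMS step mu * e * x is divided by the input energy ||x||^2,
    which makes the effective step size insensitive to the scale of x.
    """
    e = d - w @ x                          # a-priori error
    return w + mu * e * x / (x @ x + eps)  # normalized correction

# Illustrative usage: identify an unknown 4-tap filter from noisy samples.
rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
w = np.zeros(4)
for _ in range(500):
    x = rng.normal(size=4)
    d = w_true @ x + 0.01 * rng.normal()
    w = nlms_update(w, x, d)
print(np.allclose(w, w_true, atol=0.05))  # weights converge to the target
```

The normalization by the input energy is what the geometric (APA) interpretation explains; the paper proposes alternative interpretations of this same correction for FC and convolutional layers trained with mini-batches.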
