Signal Processing Meets SGD: From Momentum to Filter

6 Nov 2023  ·  Zhipeng Yao, Yu Zhang, Dazhou Li

In deep learning, stochastic gradient descent (SGD) and its momentum-based variants are widely used for optimization, but they often suffer from slow convergence. Meanwhile, existing adaptive learning rate optimizers accelerate convergence, often at the expense of generalization; we demonstrate that the adaptive learning rate property impairs generalization. To resolve this contradiction, we propose SGDF, a novel optimization method that aims to accelerate the convergence of SGD without sacrificing generalization. The approach reduces the variance of the historical gradient by enhancing SGD's first-order moment estimate with Wiener filter theory and introducing a time-varying adaptive weight. Experimental results show that SGDF achieves a favorable trade-off between convergence and generalization compared with state-of-the-art optimizers.
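The abstract describes the idea (a Wiener-filter-style fusion of the momentum estimate with the current stochastic gradient, weighted by a time-varying gain) but not the exact update rule. The snippet below is a minimal illustrative sketch of that general idea, not the authors' algorithm: the function name `sgdf_like_step`, the hyperparameters `beta` and `eps`, and the specific gain formula are assumptions made for the example.

```python
import numpy as np

def sgdf_like_step(param, grad, state, lr=0.1, beta=0.9, eps=1e-8):
    """One step of a Wiener-filter-style SGD variant (illustrative sketch only).

    The running mean `m` acts as the prior gradient estimate and the running
    variance `v` as a measure of its uncertainty; a Wiener-style gain `k`
    blends the prior with the newly observed stochastic gradient.
    """
    m = state.get("m", np.zeros_like(param))   # first moment (prior estimate)
    v = state.get("v", np.zeros_like(param))   # variance of past gradients

    m_new = beta * m + (1.0 - beta) * grad
    v_new = beta * v + (1.0 - beta) * (grad - m_new) ** 2

    # Assumed gain: trust the fresh gradient more when past gradients were
    # noisy, and the smoothed estimate more when they were consistent.
    k = v_new / (v_new + (grad - m_new) ** 2 + eps)
    g_hat = m_new + k * (grad - m_new)         # filtered gradient estimate

    state["m"], state["v"] = m_new, v_new
    return param - lr * g_hat

# Usage: minimize f(x) = ||x||^2 with noisy gradients.
rng = np.random.default_rng(0)
x, state = np.ones(5), {}
for _ in range(200):
    noisy_grad = 2 * x + 0.1 * rng.normal(size=x.shape)
    x = sgdf_like_step(x, noisy_grad, state, lr=0.05)
print(x)  # close to zero
```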
