Towards Deep Robot Learning with Optimizer applicable to Non-stationary Problems

31 Jul 2020 · Taisuke Kobayashi

This paper proposes a new optimizer for deep learning, named d-AmsGrad. In real-world data, noise and outliers cannot be excluded from the datasets used for learning robot skills. This problem is especially striking for robots that learn by collecting data in real time, since such data cannot be curated manually. Several noise-robust optimizers have therefore been developed to resolve this problem; one of them, AmsGrad, a variant of the Adam optimizer, comes with a proof of convergence. In practice, however, it does not improve learning performance in robotics scenarios. We hypothesize that this is because most robot learning problems are non-stationary, whereas AmsGrad assumes that the maximum of the second momentum is given stationarily during learning. To adapt to non-stationary problems, we propose an improved version that slowly decays the maximum of the second momentum. The proposed optimizer retains the same capability of reaching the global optimum as the baselines, and it outperforms them on robotics problems.
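The core idea, decaying AmsGrad's running maximum of the second momentum so that stale, outlier-inflated statistics fade on non-stationary objectives, can be sketched in a few lines of NumPy. The update below is a minimal illustration assuming the decay multiplies the stored maximum before the element-wise max; the hyperparameter names and values (`lr`, `beta1`, `beta2`, `decay`, `eps`) are illustrative, not the paper's exact formulation.

```python
import numpy as np

class DAmsGrad:
    """Sketch of a d-AmsGrad-style update: AmsGrad whose running maximum
    of the second momentum is slowly decayed (assumed formulation)."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999,
                 decay=0.9999, eps=1e-8):
        self.lr, self.beta1, self.beta2 = lr, beta1, beta2
        self.decay = decay  # assumed: slow decay applied to the stored maximum
        self.eps = eps
        self.m = self.v = self.v_max = None
        self.t = 0

    def step(self, params, grads):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)
            self.v_max = np.zeros_like(params)
        self.t += 1
        # Standard Adam first/second momentum estimates.
        self.m = self.beta1 * self.m + (1 - self.beta1) * grads
        self.v = self.beta2 * self.v + (1 - self.beta2) * grads ** 2
        # AmsGrad keeps v_max = max(v_max, v); here the stored maximum is
        # first decayed so it can shrink again on non-stationary problems.
        self.v_max = np.maximum(self.decay * self.v_max, self.v)
        m_hat = self.m / (1 - self.beta1 ** self.t)  # bias correction (Adam-style)
        return params - self.lr * m_hat / (np.sqrt(self.v_max) + self.eps)

# Toy usage: minimize f(x) = ||x||^2 under noisy gradients.
rng = np.random.default_rng(0)
x = np.array([5.0, -3.0])
opt = DAmsGrad(lr=0.1)
for _ in range(500):
    grad = 2 * x + rng.normal(scale=0.5, size=x.shape)  # noisy gradient
    x = opt.step(x, grad)
print(x)  # close to the optimum at the origin
```

With `decay=1.0` this reduces to plain AmsGrad, whose maximum can only grow; a decay slightly below 1 lets the effective step size recover after transient outliers, which is the adaptation to non-stationarity the abstract describes.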
