Neural Networks Fail to Learn Periodic Functions and How to Fix It

NeurIPS 2020 · Liu Ziyin, Tilman Hartwig, Masahito Ueda

Previous literature offers limited clues on how to learn a periodic function using modern neural networks. We start with a study of the extrapolation properties of neural networks; we prove and demonstrate experimentally that the standard activation functions, such as ReLU, tanh, and sigmoid, along with their variants, all fail to learn to extrapolate simple periodic functions. We hypothesize that this is due to their lack of a "periodic" inductive bias. As a fix for this problem, we propose a new activation, namely, $x + \sin^2(x)$, which achieves the desired periodic inductive bias while maintaining the favorable optimization properties of ReLU-based activations. Experimentally, we apply the proposed method to temperature and financial data prediction.
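For illustration, here is a minimal PyTorch sketch of an activation of this form (referred to as the "snake" non-linearity in the reproducibility report below). The frequency parameter `a` and the module/class names are assumptions for this sketch; with `a = 1` it reduces to the $x + \sin^2(x)$ form stated in the abstract.

```python
import torch
import torch.nn as nn


class Snake(nn.Module):
    """Sketch of a snake-style activation: x + (1/a) * sin^2(a * x).

    a = 1 recovers the x + sin^2(x) form from the abstract; `a` is an
    illustrative frequency parameter, not necessarily the paper's exact API.
    """

    def __init__(self, a: float = 1.0):
        super().__init__()
        self.a = a

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity term keeps optimization ReLU-like (monotone, non-saturating);
        # the squared sine adds the periodic inductive bias.
        return x + torch.sin(self.a * x) ** 2 / self.a


# Example usage (hypothetical): a small MLP with the snake activation.
model = nn.Sequential(nn.Linear(1, 64), Snake(), nn.Linear(64, 1))
y = model(torch.linspace(-10, 10, 200).unsqueeze(-1))
```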


Reproducibility Reports


Jan 31 2021
[Re] Neural Networks Fail to Learn Periodic Functions and How to Fix It

We were able to successfully replicate experiments supporting the central claim of the paper, that the proposed snake non-linearity can learn periodic functions. We also analyze the suitability of the snake activation for other tasks like generative modeling and sentiment analysis.
