Towards Adaptive Residual Network Training: A Neural-ODE Perspective

ICML 2020  ·  Chengyu Dong, Liyuan Liu, Zichao Li, Jingbo Shang

The depth of residual networks is a crucial factor that balances model capacity, performance, and training efficiency. However, depth has long been fixed as a hyper-parameter that requires laborious tuning, due to the lack of theory describing its dynamics. Here, we conduct a theoretical analysis of network depth and introduce adaptive residual network training, which gradually increases model depth during training. Specifically, from an ordinary differential equation (ODE) perspective, we describe the effect of depth growth with embedding errors, characterize the impact of model depth with truncation errors, and derive bounds for both. Guided by these derivations, we propose LipGrow, an adaptive training algorithm for residual networks that automatically increases network depth and thereby accelerates training. In our experiments, it achieves better or comparable performance while reducing training time by roughly 50%.
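The abstract describes depth growth only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea: a residual network trained while its depth is doubled mid-training, with the per-block step size shrinking as in a finer ODE discretization. The `should_grow` schedule and the duplication-based initialization of new blocks are placeholders for illustration only, not the paper's actual criterion, which is derived from its embedding- and truncation-error bounds.

```python
# Minimal sketch (not the authors' code) of adaptive depth growth for a
# residual network, viewed as refining the discretization of an ODE.
# Hypothetical pieces: the `should_grow` schedule and duplication-based
# initialization of new blocks; the paper derives its trigger from error bounds.

import copy
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """One residual block: x + step * f(x), i.e. one Euler step of an ODE."""

    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, step: float):
        return x + step * self.f(x)


class GrowingResNet(nn.Module):
    """Residual network whose depth can be doubled during training."""

    def __init__(self, dim: int, depth: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(ResBlock(dim) for _ in range(depth))

    def forward(self, x):
        step = 1.0 / len(self.blocks)  # finer ODE discretization as depth grows
        for block in self.blocks:
            x = block(x, step)
        return x

    def grow(self):
        # Duplicate every block: depth doubles while the per-block step size in
        # forward() halves, roughly preserving the function at growth time.
        new_blocks = []
        for block in self.blocks:
            new_blocks += [block, copy.deepcopy(block)]
        self.blocks = nn.ModuleList(new_blocks)


def should_grow(epoch: int) -> bool:
    # Placeholder schedule for illustration; not the paper's growth criterion.
    return epoch in (30, 60)


dim, model = 16, GrowingResNet(dim)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(90):
    if should_grow(epoch):
        model.grow()
        opt = torch.optim.SGD(model.parameters(), lr=0.1)  # rebuild for new params
    x = torch.randn(32, dim)
    loss = model(x).pow(2).mean()  # dummy objective, just to drive the sketch
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Duplicating each block while halving the step size keeps the network's function roughly unchanged at the moment of growth, which mirrors the intuition that a deeper residual network is a finer discretization of the same underlying ODE.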

PDF (ICML 2020)

