Towards Scalable Koopman Operator Learning: Convergence Rates and A Distributed Learning Algorithm

30 Sep 2019 · Zhiyuan Liu, Guohui Ding, Lijun Chen, Enoch Yeung

We propose an alternating optimization algorithm for the nonconvex Koopman operator learning problem arising in nonlinear dynamical systems. We show that, under some mild conditions, the proposed algorithm converges to a critical point at rate $O(1/T)$ with a constant learning rate and at rate $O(1/\log T)$ with a diminishing learning rate. To cope with high-dimensional nonlinear dynamical systems, we present the first-ever distributed Koopman operator learning algorithm. We show that distributed Koopman operator learning enjoys the same convergence properties as its centralized counterpart, even in the absence of an optimal tracker, so long as the basis functions satisfy a set of state-based decomposition conditions. Numerical experiments are provided to complement our theoretical results.
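No code is available for this paper; as a rough illustration of the kind of alternating scheme the abstract describes, below is a minimal NumPy sketch, assuming an EDMD-style least-squares objective $\min_{K,W}\sum_t \|\psi(x_{t+1};W) - K\psi(x_t;W)\|^2$ with a trainable Gaussian-RBF dictionary $\psi(\cdot;W)$, which makes the joint problem nonconvex. The toy dynamics, dictionary, step sizes, and update rules are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nonlinear system (a classic Koopman toy example:
# dx1 = mu*x1, dx2 = lam*(x2 - x1^2)), discretized with a small step.
def simulate(n_steps=500, dt=0.01, mu=-0.05, lam=-1.0):
    x = np.empty((n_steps + 1, 2))
    x[0] = rng.normal(size=2)
    for t in range(n_steps):
        x1, x2 = x[t]
        x[t + 1] = [x1 + dt * mu * x1, x2 + dt * lam * (x2 - x1**2)]
    return x[:-1], x[1:]  # snapshot pairs (x_t, x_{t+1})

GAMMA = 1.0  # RBF width (assumed hyperparameter)

def psi(X, centers):
    """Gaussian RBF dictionary; the centers play the role of the
    trainable dictionary parameters W that make the problem nonconvex."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-GAMMA * d2)  # shape (T, m)

X, Y = simulate()
T, m = len(X), 10
centers = rng.normal(size=(m, 2))  # dictionary parameters W
K = np.eye(m)                      # Koopman matrix estimate
eta = 1e-2                         # constant learning rate (O(1/T) regime)

for it in range(200):
    PX, PY = psi(X, centers), psi(Y, centers)
    R = PY - PX @ K.T              # residuals of psi(x_{t+1}) ~ K psi(x_t)
    # Step 1: gradient step on K with the dictionary held fixed.
    K += (2.0 * eta / T) * R.T @ PX
    # Step 2: gradient step on the centers with K held fixed,
    # using the analytic gradient of the Gaussian RBFs.
    B = R @ K                      # B[t, j] = sum_i R[t, i] K[i, j]
    for j in range(m):
        gy = (R[:, j] * PY[:, j])[:, None] * (Y - centers[j])
        gx = (B[:, j] * PX[:, j])[:, None] * (X - centers[j])
        centers[j] -= (4.0 * GAMMA * eta / T) * (gy - gx).sum(0)

PX, PY = psi(X, centers), psi(Y, centers)
print("final loss:", ((PY - PX @ K.T) ** 2).mean())
```

This sketch covers only the centralized case. For the distributed variant the abstract describes, one would additionally partition the state across workers using basis functions that satisfy the paper's state-based decomposition conditions, so that each worker maintains its own block of the lifted dynamics; the abstract does not spell out those updates, so they are omitted here.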
