On the Feedback Law in Stochastic Optimal Nonlinear Control

1 Apr 2020 · Mohamed Naveed Gul Mohamed, Suman Chakravorty, Raman Goyal, Ran Wang

We consider the problem of nonlinear stochastic optimal control. This problem is thought to be fundamentally intractable owing to Bellman's "curse of dimensionality". We present a result showing that repeatedly solving an open-loop deterministic problem from the current state, with progressively shorter horizons as in Model Predictive Control (MPC), yields a feedback policy that is within $O(\epsilon^4)$ of the true global stochastic optimal policy, where $\epsilon$ is a perturbation parameter modulating the noise. We show that the optimal deterministic feedback problem has a perturbation structure, in the sense that higher-order terms of the feedback law do not affect lower-order terms, and that this structure is lost in the optimal stochastic feedback problem. Consequently, solving the stochastic dynamic programming problem is highly susceptible to noise even when it is tractable, and in practice the MPC-type feedback law offers superior performance, even for stochastic systems.
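To make the MPC-type construction concrete, the following is a minimal sketch of the shrinking-horizon replanning loop described in the abstract: at each step, a deterministic open-loop problem is solved from the current (noise-perturbed) state over the remaining horizon, and only the first control is applied. The double-integrator dynamics, quadratic cost weights, and the use of scipy.optimize.minimize as the open-loop solver are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

dt = 0.1

def step(x, u):
    # Nominal (noise-free) dynamics x_{k+1} = f(x_k, u_k):
    # a hypothetical double integrator used purely for illustration.
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def open_loop_cost(u_seq, x0):
    # Cost of an open-loop control sequence under the deterministic model.
    x, J = x0, 0.0
    for u in u_seq:
        J += x @ x + 0.1 * u * u        # running state + control cost
        x = step(x, u)
    return J + 10.0 * (x @ x)           # terminal cost

def mpc_policy(x, steps_left):
    # Solve the deterministic open-loop problem from the current state
    # over the remaining (shrinking) horizon; apply only the first control.
    res = minimize(open_loop_cost, np.zeros(steps_left), args=(x,))
    return res.x[0]

# Closed loop on the stochastic system: replan from the realized state.
rng = np.random.default_rng(0)
T, eps = 30, 0.05                       # eps modulates the noise, as in the paper
x = np.array([1.0, 0.0])
for t in range(T):
    u = mpc_policy(x, T - t)            # progressively shorter horizon T - t
    x = step(x, u) + eps * np.sqrt(dt) * rng.standard_normal(2)
```

The feedback enters only through replanning from the realized state; no stochastic dynamic programming problem is ever solved, which is the point of the paper's near-optimality result.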
