Biomechanic Posture Stabilisation via Iterative Training of Multi-policy Deep Reinforcement Learning Agents

21 Aug 2020 · Mohammed Hossny, Julie Iskander

It is not until we become senior citizens that we recognise how much we took maintaining a simple standing posture for granted. It is truly fascinating to observe the magnitude of control the human brain exercises, in real time, to activate and deactivate the lower-body muscles and solve a multi-link 3D inverted pendulum problem in order to maintain a stable standing posture. This becomes even more apparent when training an artificial intelligence (AI) agent to maintain the standing posture of a digital musculoskeletal avatar, where the error propagation problem makes the task particularly difficult. In this work we address the error propagation problem by introducing an iterative training procedure for deep reinforcement learning that allows the agent to learn a finite set of actions and how to coordinate between them in order to achieve a stable standing posture. The proposed training approach increased the standing duration from 4 seconds, achieved with the traditional training method, to 348 seconds. It also allowed the agent to generalise and to accommodate perception and actuation noise, maintaining the posture for almost 108 seconds under noise.
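The two-stage idea described in the abstract (first learn a small set of postural actions, then learn to coordinate between them) can be sketched roughly as follows. This is a minimal, hypothetical Python illustration with a toy environment and stubbed-out training; the names ToyBalanceEnv, train_primitive and coordinator are assumptions for illustration and do not come from the paper, whose actual setup uses a deep RL algorithm and a musculoskeletal simulator.

import numpy as np

class ToyBalanceEnv:
    """Stand-in for a musculoskeletal simulator with a gym-like interface (assumption)."""
    def __init__(self, obs_dim=8, act_dim=4):
        self.obs_dim, self.act_dim = obs_dim, act_dim
    def reset(self):
        self.state = np.zeros(self.obs_dim)
        return self.state
    def step(self, action):
        # Toy dynamics: random perturbation plus actuation; "falling" is a large deviation.
        self.state = self.state + 0.01 * np.random.randn(self.obs_dim)
        self.state[: len(action)] += 0.01 * np.asarray(action)
        reward = -np.linalg.norm(self.state)
        done = np.linalg.norm(self.state) > 1.0
        return self.state, reward, done, {}

def train_primitive(env, seed):
    """Placeholder for training one low-level posture policy with deep RL (e.g. PPO);
    here it simply returns a random linear policy that is frozen after this stage."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(env.act_dim, env.obs_dim))
    return lambda obs: np.tanh(W @ obs)

# Stage 1: learn a finite set of postural actions, one policy per action.
env = ToyBalanceEnv()
primitives = [train_primitive(env, seed=s) for s in range(3)]

# Stage 2: a coordinator selects which frozen primitive to execute at each
# control step (a trivial scoring rule stands in for a learned high-level policy).
def coordinator(obs):
    scores = [np.linalg.norm(p(obs)) for p in primitives]
    return primitives[int(np.argmin(scores))]

# Rollout with the coordinated primitives.
obs, done, t = env.reset(), False, 0
while not done and t < 1000:
    obs, _, done, _ = env.step(coordinator(obs)(obs))
    t += 1
print(f"stood for {t} control steps")

In this sketch the primitives are trained (or, here, merely generated) independently and then held fixed while the coordination stage runs, which mirrors the iterative structure described in the abstract; the actual reward shaping, training algorithm and action set are those reported in the paper.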
