Feudal Graph Reinforcement Learning

11 Apr 2023  ·  Tommaso Marzi, Arshjot Khehra, Andrea Cini, Cesare Alippi

Graph-based representations and weight-sharing modular policies are prominent approaches to tackling composable control problems in Reinforcement Learning (RL). However, as shown by recent graph deep learning literature, message-passing operators can create bottlenecks in information propagation and hinder global coordination. The issue becomes especially severe in tasks that require high-level planning. In this work, we propose a novel methodology, named Feudal Graph Reinforcement Learning (FGRL), that addresses these challenges by relying on hierarchical RL and a pyramidal message-passing architecture. In particular, FGRL defines a hierarchy of policies in which high-level commands are propagated from the top of the hierarchy down through a layered graph structure. The bottom layers mimic the morphology of the physical system, while the upper layers capture more abstract sub-modules. The resulting agents are then characterized by a committee of policies where actions at a given level set goals for the level below, thus implementing a hierarchical decision-making structure that encompasses task decomposition. We evaluate the proposed framework on locomotion tasks in benchmark MuJoCo environments and show that FGRL compares favorably against relevant baselines. Furthermore, an in-depth analysis of the command propagation mechanism provides evidence that the introduced message-passing scheme favors the learning of hierarchical decision-making policies.
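The top-down command propagation described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the three-level layout (one root manager, two mid-level sub-modules, four leaf nodes mirroring the agent's limbs), the vector dimensions, and the random untrained weights standing in for learned, weight-shared policies are illustrative choices, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pyramidal hierarchy: each mid-level sub-module commands
# a subset of leaf nodes (e.g. the actuators of one limb).
mid_children = {0: [0, 1], 1: [2, 3]}  # mid node -> its leaf children
d_goal, d_obs, d_act = 4, 6, 1

# Placeholder per-level policies (random weights, shared across nodes at
# the same level, mimicking the weight-sharing modular-policy setup).
w_mid = rng.standard_normal((d_goal, d_goal + d_obs))   # [goal; obs] -> sub-goal
w_leaf = rng.standard_normal((d_act, d_goal + d_obs))   # [sub-goal; obs] -> action

def top_down(root_goal, mid_obs, leaf_obs):
    """One top-down pass: the root's command is refined into sub-goals by
    mid-level nodes, and leaves turn sub-goals into low-level actions."""
    actions = {}
    for m, kids in mid_children.items():
        sub_goal = np.tanh(w_mid @ np.concatenate([root_goal, mid_obs[m]]))
        for k in kids:
            actions[k] = np.tanh(w_leaf @ np.concatenate([sub_goal, leaf_obs[k]]))
    return actions

root_goal = rng.standard_normal(d_goal)
mid_obs = {m: rng.standard_normal(d_obs) for m in mid_children}
leaf_obs = {k: rng.standard_normal(d_obs)
            for kids in mid_children.values() for k in kids}
actions = top_down(root_goal, mid_obs, leaf_obs)
print(sorted(actions))   # [0, 1, 2, 3]
print(actions[0].shape)  # (1,)
```

The sketch only shows the direction of information flow (goals down, actions at the leaves); a trained system would replace the random linear maps with learned networks and would also aggregate information bottom-up to form the upper levels' observations.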
