no code implementations • 18 Nov 2019 • Jacob Rafati, David C. Noelle
Efficient exploration for automatic subgoal discovery is a challenging problem in Hierarchical Reinforcement Learning (HRL).
Efficient Exploration • Hierarchical Reinforcement Learning • +2
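The snippet above concerns automatic subgoal discovery in HRL. A minimal sketch of one plausible approach — clustering recently visited states and treating cluster centroids as candidate subgoal regions. This is an illustrative toy, not the paper's exact method; the function names and the gridworld data are assumptions:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over points (tuples); returns k centroids.
    Illustrative only -- a real agent would cluster its experience memory."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each visited state to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [
            tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Hypothetical (x, y) states visited in two "rooms" of a gridworld;
# the two centroids serve as discovered subgoals.
visited = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
           (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
subgoals = kmeans(visited, k=2)
```

The centroids land near the center of each room, which is the kind of bottleneck-like state a hierarchical agent could adopt as a subgoal.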
no code implementations • 4 Sep 2019 • Jacob Rafati, Roummel F. Marcia
Like SGD, quasi-Newton methods require only first-order gradient information, yet they can achieve superlinear convergence, which makes them attractive alternatives to SGD.
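The idea is easiest to see in one dimension, where a quasi-Newton method reduces to the secant method on the gradient: curvature is approximated from successive gradient differences, so no Hessian is ever computed, yet convergence is superlinear. A hedged sketch (the objective and all names are illustrative, not from the paper):

```python
def quasi_newton_1d(grad, x0, x1, tol=1e-10, max_iter=50):
    """1-D quasi-Newton (secant) minimizer: approximates curvature B from
    gradient differences only, then takes Newton-like steps -g / B."""
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        # secant condition: B * (x1 - x0) = g1 - g0
        B = (g1 - g0) / (x1 - x0)
        x0, g0 = x1, g1
        x1 = x1 - g1 / B
        g1 = grad(x1)
    return x1

# minimize f(x) = (x - 3)**4 + x**2, whose gradient is 4(x-3)**3 + 2x;
# the minimizer is exactly x = 2 (the gradient vanishes there)
g = lambda x: 4 * (x - 3) ** 3 + 2 * x
xmin = quasi_newton_1d(g, 0.0, 1.0)
```

In higher dimensions the same secant condition drives BFGS/L-BFGS updates of an approximate inverse Hessian, which is what makes these methods competitive with SGD per gradient evaluation.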
no code implementations • 4 Sep 2019 • Jacob Rafati, David C. Noelle
This has motivated methods that learn internal representations of the agent's state, effectively reducing the size of the state space and restructuring those representations to support generalization.
no code implementations • 6 Nov 2018 • Jacob Rafati, Roummel F. Marcia
Deep Reinforcement Learning algorithms require solving a nonconvex and nonlinear unconstrained optimization problem.
no code implementations • 23 Oct 2018 • Jacob Rafati, David C. Noelle
When combined with an intrinsic motivation learning mechanism, this method learns both subgoals and skills, based on experiences in the environment.
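Intrinsic motivation here means the low-level learner is rewarded for reaching a candidate subgoal rather than for external task reward. A minimal sketch of learning one such "skill" with tabular Q-learning on a toy chain world — the environment, reward scheme, and hyperparameters are all illustrative assumptions, not the paper's setup:

```python
import random

def learn_skill(subgoal, n_states=6, episodes=500,
                alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Q-learning for one skill: a policy that reaches `subgoal` on a 1-D
    chain. The reward is intrinsic: +1 only on reaching the subgoal."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = rng.randrange(n_states)            # random start state
        for _ in range(2 * n_states):
            if s == subgoal:
                break
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, min(n_states - 1, s + (1 if a else -1)))
            r = 1.0 if s2 == subgoal else 0.0  # intrinsic reward only
            target = r if s2 == subgoal else gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = learn_skill(subgoal=5)
```

After training, the greedy policy from every state walks right toward the subgoal; a hierarchical agent would learn one such Q-table per discovered subgoal and a meta-policy over them.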