Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source at https://github.com/facebookresearch/nle.
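
As a quick orientation, below is a minimal interaction sketch against NLE's Gym-style interface, as documented in the open-source repository linked above. The task ID "NetHackScore-v0" and the pre-0.26 Gym step signature reflect the released code and are assumptions rather than details stated in the abstract itself.

```python
import gym  # NLE exposes its tasks through the classic (pre-0.26) Gym API
import nle  # noqa: F401 -- importing nle registers the NetHack tasks with Gym

# "NetHackScore-v0" is one of the task IDs from NLE's released task suite.
env = gym.make("NetHackScore-v0")
obs = env.reset()  # obs is a dict of arrays (glyphs, agent stats, messages, ...)
env.render()       # prints the terminal-based game screen

done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, done, info = env.step(action)
    episode_return += reward

print("Episode return:", episode_return)
env.close()
```
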


Datasets


Introduced in the Paper:

NetHack Learning Environment

Used in the Paper:

ProcGen
Obstacle Tower
Results

Task:        NetHack Score
Dataset:     NetHack Learning Environment
Model:       RND-mon-hum-neu-mal
Metric:      Average Score
Value:       780
Global Rank: #1
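
The baseline above uses Random Network Distillation (RND) for exploration, per the abstract. As an illustration only, here is a minimal PyTorch sketch of the RND intrinsic-reward mechanism: a fixed random "target" network embeds observations, a trainable "predictor" network regresses onto those embeddings, and the prediction error serves as an exploration bonus. The network sizes, names, and flattened-observation stand-in are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

obs_dim, embed_dim = 128, 64  # hypothetical sizes for a flattened observation
target = mlp(obs_dim, embed_dim)     # fixed, randomly initialized; never trained
predictor = mlp(obs_dim, embed_dim)  # trained to match the target's embeddings
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(obs_batch):
    """Per-observation prediction error; novel states are poorly predicted,
    so they receive a larger exploration bonus."""
    with torch.no_grad():
        target_embed = target(obs_batch)
    return (predictor(obs_batch) - target_embed).pow(2).mean(dim=-1)

# One training step on a batch of observations from a rollout:
obs_batch = torch.randn(32, obs_dim)  # stand-in for real NLE observations
reward = intrinsic_reward(obs_batch)  # use reward.detach() as the RL bonus
loss = reward.mean()                  # the same error trains the predictor
opt.zero_grad()
loss.backward()
opt.step()
```
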

Methods


Random Network Distillation (RND)