A Socially Aware Reinforcement Learning Agent for The Single Track Road Problem

We present the single track road problem, in which two agents face each other at opposite ends of a road that only one agent can pass through at a time. We focus on the scenario in which one agent is human while the other is autonomous. We run experiments with human subjects in a simple grid domain that simulates the single track road problem. We show that when data is limited, building an accurate human model is very challenging, and that a reinforcement learning agent based on such a model does not perform well in practice. However, we show that an agent that maximizes a linear combination of the human's utility and its own utility achieves a high score and significantly outperforms other baselines, including an agent that maximizes only its own utility.
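A minimal sketch of the socially aware reward idea described above: the agent optimizes a linear combination of its own utility and the human's utility rather than its own utility alone. This is not the authors' implementation; the weight `alpha`, the tabular Q-learning update, and the state/action encodings are illustrative assumptions.

```python
from collections import defaultdict

def socially_aware_reward(agent_utility: float,
                          human_utility: float,
                          alpha: float = 0.5) -> float:
    """Linear combination of the agent's and the human's utility.

    alpha = 1.0 recovers a purely selfish agent; alpha = 0.0 a purely
    altruistic one. The value 0.5 here is an arbitrary illustration.
    """
    return alpha * agent_utility + (1.0 - alpha) * human_utility


# Hypothetical tabular Q-learning update that plugs in the combined reward
# in place of the agent's own reward.
q_table = defaultdict(float)          # Q[(state, action)] -> value
GAMMA, LR = 0.95, 0.1                 # assumed discount factor and learning rate

def q_update(state, action, next_state, actions,
             agent_utility, human_utility, alpha=0.5):
    r = socially_aware_reward(agent_utility, human_utility, alpha)
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += LR * (r + GAMMA * best_next
                                      - q_table[(state, action)])
```

The only change relative to a standard self-interested learner is the reward signal; the rest of the learning procedure can stay the same.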
