Search Results for author: Khimya Khetarpal

Found 16 papers, 11 papers with code

Disentangling the Causes of Plasticity Loss in Neural Networks

no code implementations 29 Feb 2024 Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, Will Dabney

Underpinning the past decades of work on the design, initialization, and optimization of neural networks is a seemingly innocuous assumption: that the network is trained on a *stationary* data distribution.

Atari Games, reinforcement-learning

Toward Human-AI Alignment in Large-Scale Multi-Player Games

no code implementations 5 Feb 2024 Sugandha Sharma, Guy Davidson, Khimya Khetarpal, Anssi Kanervisto, Udit Arora, Katja Hofmann, Ida Momennejad

First, we analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games), uncovering behavioral patterns in a complex task space.

Forecaster: Towards Temporally Abstract Tree-Search Planning from Pixels

no code implementations 16 Oct 2023 Thomas Jiralerspong, Flemming Kondrup, Doina Precup, Khimya Khetarpal

The ability to plan at many different levels of abstraction enables agents to envision the long-term repercussions of their decisions and thus supports sample-efficient learning.

Hierarchical Reinforcement Learning

Discovering Object-Centric Generalized Value Functions From Pixels

1 code implementation 27 Apr 2023 Somjit Nath, Gopeshh Raaj Subbaraj, Khimya Khetarpal, Samira Ebrahimi Kahou

Deep Reinforcement Learning has shown significant progress in extracting useful representations from high-dimensional inputs, albeit using hand-crafted auxiliary tasks and pseudo rewards.

Object

POMRL: No-Regret Learning-to-Plan with Increasing Horizons

no code implementations 30 Dec 2022 Khimya Khetarpal, Claire Vernade, Brendan O'Donoghue, Satinder Singh, Tom Zahavy

We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task.

Meta Reinforcement Learning, Reinforcement Learning (RL)

The Paradox of Choice: Using Attention in Hierarchical Reinforcement Learning

1 code implementation 24 Jan 2022 Andrei Nica, Khimya Khetarpal, Doina Precup

Decision-making AI agents are often faced with two important challenges: the depth of the planning horizon, and the branching factor due to having many choices.

Decision Making, Hierarchical Reinforcement Learning, +2
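The two challenges named in the snippet compound: with branching factor b and planning depth d, an exhaustive search tree contains on the order of b^d nodes. A tiny back-of-the-envelope illustration (generic, not code from the paper):

```python
# Exhaustive tree search visits roughly b^d nodes:
# b = branching factor (choices per step), d = planning depth.
def tree_size(b: int, d: int) -> int:
    # Total nodes in a full b-ary tree of depth d, excluding the root.
    return sum(b ** k for k in range(1, d + 1))

# Even modest settings explode quickly:
print(tree_size(5, 4))   # 5 + 25 + 125 + 625 = 780
print(tree_size(10, 4))  # 10 + 100 + 1000 + 10000 = 11110
```

Reducing the set of choices considered at each step (for example, via attention over options, as the paper's title suggests) shrinks b at every level of this sum.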

Temporally Abstract Partial Models

1 code implementation NeurIPS 2021 Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, Doina Precup

Humans and animals have the ability to reason and make predictions about different courses of action at many time scales.

Variance Penalized On-Policy and Off-Policy Actor-Critic

1 code implementation 3 Feb 2021 Arushi Jain, Gandharv Patil, Ayush Jain, Khimya Khetarpal, Doina Precup

Reinforcement learning algorithms are typically geared towards optimizing the expected return of an agent.
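A common way to trade expected return against variance is the mean-minus-variance criterion J = E[R] - λ·Var[R]. The sketch below is a generic illustration of that criterion over sampled episode returns, not the paper's actual actor-critic objective:

```python
import statistics

def variance_penalized_objective(returns: list[float], lam: float = 0.5) -> float:
    """Generic mean-minus-variance criterion: E[R] - lam * Var[R].

    A risk-averse agent maximizing this prefers policies whose returns
    are not only high on average but also consistent across episodes.
    """
    mean = statistics.fmean(returns)
    var = statistics.pvariance(returns)  # population variance of the samples
    return mean - lam * var

# Two policies with the same mean return but different spread:
steady = [10.0, 10.0, 10.0, 10.0]
risky = [0.0, 20.0, 0.0, 20.0]
print(variance_penalized_objective(steady))  # 10.0
print(variance_penalized_objective(risky))   # 10.0 - 0.5 * 100.0 = -40.0
```

With λ = 0, the criterion reduces to the usual expected return; larger λ increasingly favours the steadier policy.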

Towards Continual Reinforcement Learning: A Review and Perspectives

no code implementations 25 Dec 2020 Khimya Khetarpal, Matthew Riemer, Irina Rish, Doina Precup

In this article, we aim to provide a literature review of different formulations and approaches to continual reinforcement learning (RL), also known as lifelong or non-stationary RL.

Continual Learning, reinforcement-learning, +1

Learning Robust State Abstractions for Hidden-Parameter Block MDPs

2 code implementations ICLR 2021 Amy Zhang, Shagun Sodhani, Khimya Khetarpal, Joelle Pineau

Further, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks rather than the number of tasks, a significant improvement over prior work that uses the same environment assumptions.

Generalization Bounds, Meta Reinforcement Learning, +2

What can I do here? A Theory of Affordances in Reinforcement Learning

1 code implementation ICML 2020 Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel, Doina Precup

Gibson (1977) coined the term "affordances" to describe the fact that certain states enable an agent to do certain actions, in the context of embodied agents.

reinforcement-learning, Reinforcement Learning (RL)

Options of Interest: Temporal Abstraction with Interest Functions

3 code implementations 1 Jan 2020 Khimya Khetarpal, Martin Klissarov, Maxime Chevalier-Boisvert, Pierre-Luc Bacon, Doina Precup

Temporal abstraction refers to the ability of an agent to use behaviours, or controllers, that act for a limited, variable amount of time.
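In the options framework (Sutton, Precup & Singh, 1999), such a temporally extended behaviour is a triple of initiation set, intra-option policy, and termination condition. The sketch below is a minimal generic encoding of that triple, not code from the paper (which additionally equips each option with a learned interest function):

```python
import random
from dataclasses import dataclass
from typing import Callable

State = int
Action = int

@dataclass
class Option:
    """An option: where it can start, how it acts, and when it stops."""
    can_initiate: Callable[[State], bool]        # initiation set I
    policy: Callable[[State], Action]            # intra-option policy pi
    termination_prob: Callable[[State], float]   # termination beta(s)

def run_option(opt: Option, s: State, step: Callable[[State, Action], State],
               rng: random.Random) -> State:
    """Execute the option until its termination condition fires."""
    assert opt.can_initiate(s)
    while True:
        s = step(s, opt.policy(s))
        if rng.random() < opt.termination_prob(s):
            return s

# Toy chain MDP: states 0..10, action +1 moves right; option "go to state 5".
go_to_5 = Option(
    can_initiate=lambda s: s < 5,
    policy=lambda s: 1,
    termination_prob=lambda s: 1.0 if s >= 5 else 0.0,
)
final = run_option(go_to_5, 0, step=lambda s, a: s + a, rng=random.Random(0))
print(final)  # 5
```

Because the termination probability is 0 until state 5 and 1 afterwards, the option runs for a variable number of primitive steps depending on where it starts.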

Environments for Lifelong Reinforcement Learning

2 code implementations 26 Nov 2018 Khimya Khetarpal, Shagun Sodhani, Sarath Chandar, Doina Precup

To achieve general artificial intelligence, reinforcement learning (RL) agents should learn not only to optimize returns for one specific task but also to constantly build more complex skills and scaffold their knowledge about the world, without forgetting what has already been learned.

reinforcement-learning, Reinforcement Learning (RL)

Attend Before you Act: Leveraging human visual attention for continual learning

1 code implementation 25 Jul 2018 Khimya Khetarpal, Doina Precup

When humans perform a task, such as playing a game, they selectively pay attention to certain parts of the visual input, gathering relevant information and sequentially combining it to build a representation from the sensory data.

Continual Learning, Decision Making, +1

Safe Option-Critic: Learning Safety in the Option-Critic Architecture

1 code implementation 21 Jul 2018 Arushi Jain, Khimya Khetarpal, Doina Precup

We propose an optimization objective that learns safe options by encouraging the agent to visit states with higher behavioural consistency.

Atari Games, Hierarchical Reinforcement Learning
