Search Results for author: Richard Sutton

Found 6 papers, 2 papers with code

MetaOptimize: A Framework for Optimizing Step Sizes and Other Meta-parameters

no code implementations • 4 Feb 2024 • Arsalan SharifNassab, Saber Salehkaleybar, Richard Sutton

This paper addresses the challenge of optimizing meta-parameters (i.e., hyperparameters) in machine learning algorithms, a critical factor influencing training efficiency and model performance.

Step-size Optimization for Continual Learning

no code implementations • 30 Jan 2024 • Thomas Degris, Khurram Javed, Arsalan SharifNassab, Yuxin Liu, Richard Sutton

We conclude by suggesting that combining both approaches could be a promising future direction to improve the performance of neural networks in continual learning.

Continual Learning

Toward Efficient Gradient-Based Value Estimation

no code implementations • 31 Jan 2023 • Arsalan SharifNassab, Richard Sutton

Gradient-based methods for value estimation in reinforcement learning have favorable stability properties, but they are typically much slower than Temporal Difference (TD) learning methods.
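For reference, the Temporal Difference (TD) learning baseline mentioned above can be sketched as a tabular TD(0) update. This is a minimal, generic illustration only, not the gradient-based method proposed in the paper; the function name and the (state, reward, next_state, done) transition format are assumptions for the example.

```python
import numpy as np

def td0_value_estimation(transitions, n_states, alpha=0.1, gamma=0.99):
    """Tabular TD(0) value estimation (illustrative sketch, not the paper's method).

    transitions: iterable of (state, reward, next_state, done) tuples
    collected from some behavior policy.
    """
    V = np.zeros(n_states)
    for s, r, s_next, done in transitions:
        # Bootstrapped target: reward plus discounted value of the next state
        target = r + (0.0 if done else gamma * V[s_next])
        # Move the current estimate a step of size alpha toward the target
        V[s] += alpha * (target - V[s])
    return V
```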

Auxiliary task discovery through generate-and-test

no code implementations • 25 Oct 2022 • Banafsheh Rafiee, Sina Ghiassian, Jun Jin, Richard Sutton, Jun Luo, Adam White

In this paper, we explore an approach to auxiliary task discovery in reinforcement learning based on ideas from representation learning.

Meta-Learning • Representation Learning

From Eye-blinks to State Construction: Diagnostic Benchmarks for Online Representation Learning

1 code implementation • 9 Nov 2020 • Banafsheh Rafiee, Zaheer Abbas, Sina Ghiassian, Raksha Kumaraswamy, Richard Sutton, Elliot Ludvig, Adam White

We present three new diagnostic prediction problems inspired by classical-conditioning experiments to facilitate research in online prediction learning.

Continual Learning • Representation Learning
