Search Results for author: Clayton Webster

Found 7 papers, 5 papers with code

Increasing Entropy to Boost Policy Gradient Performance on Personalization Tasks

1 code implementation · 9 Oct 2023 · Andrew Starnes, Anton Dereventsov, Clayton Webster

In this effort, we consider the impact of regularization on the diversity of actions taken by policies generated from reinforcement learning agents trained using a policy gradient.
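
The abstract centers on entropy-style regularization for policy-gradient training. Below is a minimal sketch of the general mechanism, an entropy bonus added to a one-step REINFORCE update for a softmax policy; the coefficient `beta`, the toy reward, and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

logits = rng.normal(size=4)  # parameters of a toy softmax policy
beta = 0.01                  # entropy coefficient (assumed value)

for step in range(1000):
    probs = softmax(logits)
    action = rng.choice(4, p=probs)
    reward = 1.0 if action == 2 else 0.0  # toy one-step reward

    # REINFORCE gradient of log pi(action) for a softmax policy.
    grad_logp = -probs
    grad_logp[action] += 1.0

    # Gradient of the entropy H(pi) = -sum_i p_i log p_i w.r.t. the logits;
    # this term pushes the policy toward more diverse actions.
    mean_logp = (probs * np.log(probs)).sum()
    entropy_grad = -probs * (np.log(probs) - mean_logp)

    logits += 0.1 * (reward * grad_logp + beta * entropy_grad)

print(softmax(logits))  # mass concentrates on action 2, softened by beta
```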

On the Unreasonable Efficiency of State Space Clustering in Personalization Tasks

2 code implementations · 24 Dec 2021 · Anton Dereventsov, Ranga Raju Vatsavai, Clayton Webster

In this effort, we consider a reinforcement learning (RL) technique for solving personalization tasks with complex reward signals.
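
As a rough illustration of the state-space-clustering idea, the sketch below discretizes a continuous state space with k-means and learns tabular action values over cluster indices; KMeans, the toy reward, and every hyperparameter are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy "user state" vectors standing in for personalization contexts.
states = rng.normal(size=(5000, 8))

# Discretize the continuous state space into k clusters.
k = 20
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(states)
clusters = kmeans.predict(states)

# Tabular values over (cluster, action) instead of raw states.
n_actions = 5
Q = np.zeros((k, n_actions))
alpha, eps = 0.1, 0.2

for c in clusters:
    a = rng.integers(n_actions) if rng.random() < eps else Q[c].argmax()
    r = float(a == c % n_actions)       # toy reward tying actions to clusters
    Q[c, a] += alpha * (r - Q[c, a])    # one-step bandit-style update
```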

Tasks: Clustering, Reinforcement Learning, +1

Offline Policy Comparison under Limited Historical Agent-Environment Interactions

1 code implementation · 7 Jun 2021 · Anton Dereventsov, Joseph D. Daws Jr., Clayton Webster

We address the challenge of policy evaluation in real-world applications of reinforcement learning systems where the available historical data is limited due to ethical, practical, or security considerations.
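
For orientation, the sketch below shows the standard importance-sampling route to comparing two candidate policies from a small logged dataset; it is not the paper's estimator, and the logging policy, reward model, and sample size are all assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3

def behavior(s):                      # logging policy: uniform (assumption)
    return np.full(n_actions, 1.0 / n_actions)

def pi_a(s):                          # candidate policy A (illustrative)
    return np.array([0.7, 0.2, 0.1])

def pi_b(s):                          # candidate policy B (illustrative)
    return np.array([0.1, 0.2, 0.7])

# A small logged dataset: "limited historical agent-environment interactions".
logged = []
for _ in range(200):
    s = rng.normal(size=4)
    a = rng.choice(n_actions, p=behavior(s))
    r = float(a == 2) + 0.1 * rng.normal()
    logged.append((s, a, r))

def is_estimate(pi):
    # Ordinary importance sampling: reweight logged rewards by pi/mu.
    return np.mean([pi(s)[a] / behavior(s)[a] * r for s, a, r in logged])

print("V(pi_a) ~", is_estimate(pi_a))
print("V(pi_b) ~", is_estimate(pi_b))
```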

Analysis of Deep Neural Networks with Quasi-optimal polynomial approximation rates

no code implementations · 4 Dec 2019 · Joseph Daws, Clayton Webster

The construction of the proposed neural network is based on a quasi-optimal polynomial approximation.
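
For context, quasi-optimal polynomial approximation refers to truncating an orthonormal expansion on a near-best index set, so that a network emulating the retained terms inherits a comparable rate. The bound below shows the typical shape of such results in this line of work; it is illustrative, not a quotation of the paper's theorem.

```latex
f_{\Lambda_s}(y) \,=\, \sum_{\nu \in \Lambda_s} c_\nu \, \Psi_\nu(y),
\qquad
\bigl\| f - f_{\Lambda_s} \bigr\| \,\le\, C \exp\!\bigl(-b\, s^{1/d}\bigr)
```

Here $\Lambda_s$ is a quasi-optimal index set of $s$ terms, $\Psi_\nu$ are tensorized orthonormal polynomials, and $d$ is the input dimension.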

Neural network integral representations with the ReLU activation function

no code implementations · 7 Oct 2019 · Armenak Petrosyan, Anton Dereventsov, Clayton Webster

In this effort, we derive a formula for the integral representation of a shallow neural network with the ReLU activation function.
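
The general shape of such a representation is below; the specific measure and kernel derived in the paper are not reproduced here.

```latex
f(x) \,=\, \int_{\mathbb{S}^{d-1} \times \mathbb{R}}
  \mathrm{ReLU}\bigl(\langle w, x \rangle - t\bigr) \, d\mu(w, t)
```

A finite-width shallow network $f(x) = \sum_i a_i \,\mathrm{ReLU}(\langle w_i, x \rangle - t_i)$ can then be read as a discretization of this integral.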

Robust learning with implicit residual networks

1 code implementation · 24 May 2019 · Viktor Reshniak, Clayton Webster

In this effort, we propose a new deep architecture utilizing residual blocks inspired by implicit discretization schemes.
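
A standard residual block is a forward-Euler step, x + f(x); an implicit scheme instead solves for the block's output. The sketch below resolves a backward-Euler-style update y = x + f(y) by fixed-point iteration. The residual map, weight scale, and iteration count are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W = 0.1 * rng.normal(size=(d, d))  # small weights keep the update contractive

def f(x):
    # Residual map of one block (illustrative single tanh layer).
    return np.tanh(x @ W.T)

def explicit_block(x):
    # Standard ResNet block = forward Euler step: y = x + f(x).
    return x + f(x)

def implicit_block(x, n_iter=20):
    # Implicit (backward-Euler-style) block: solve y = x + f(y)
    # by fixed-point iteration, which converges when f is contractive.
    y = x.copy()
    for _ in range(n_iter):
        y = x + f(y)
    return y

x = rng.normal(size=(2, d))
print(np.abs(implicit_block(x) - explicit_block(x)).max())
```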

Greedy Shallow Networks: An Approach for Constructing and Training Neural Networks

1 code implementation · 24 May 2019 · Anton Dereventsov, Armenak Petrosyan, Clayton Webster

We present a greedy-based approach to construct an efficient single hidden layer neural network with the ReLU activation that approximates a target function.
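
A hedged sketch of one greedy construction in this spirit: repeatedly pick, from a pool of candidate ReLU atoms, the one best correlated with the current residual, then refit all output weights by least squares. The random candidate pool and step count are assumptions; the paper's selection and training procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function on [-1, 1] sampled at training points.
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * x[:, 0])

def relu(z):
    return np.maximum(z, 0.0)

selected = []       # chosen (w, b) pairs defining hidden neurons
residual = y.copy()

for step in range(10):
    # Random candidate dictionary of 1-D ReLU atoms (an assumption).
    ws = rng.normal(size=100)
    bs = rng.uniform(-1, 1, size=100)
    atoms = relu(x * ws + bs)                    # (200, 100) activations

    # Greedy step: pick the atom most correlated with the residual.
    norms = np.linalg.norm(atoms, axis=0) + 1e-12
    j = np.abs(atoms.T @ residual / norms).argmax()
    selected.append((ws[j], bs[j]))

    # Refit all output weights by least squares (orthogonal-greedy flavor).
    A = np.column_stack([relu(x[:, 0] * w + b) for w, b in selected])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coef

print("RMSE after 10 neurons:", float(np.sqrt(np.mean(residual**2))))
```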
