Search Results for author: Nasimul Noman

Found 4 papers, 0 papers with code

Boosting Exploration in Actor-Critic Algorithms by Incentivizing Plausible Novel States

no code implementations • 1 Oct 2022 • Chayan Banerjee, Zhiyong Chen, Nasimul Noman

Actor-critic (AC) algorithms are a class of model-free deep reinforcement learning algorithms, which have proven their efficacy in diverse domains, especially in solving continuous control problems.
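For readers unfamiliar with the actor-critic pattern mentioned in the abstract, a minimal one-step sketch is shown below on a toy two-action task. This is an illustrative textbook-style example only, not the paper's method: the task, learning rates, and softmax parameterization are all assumptions chosen for brevity.

```python
import numpy as np

# Illustrative one-step actor-critic on a toy task (NOT the paper's algorithm).
# Single state, two actions; action 1 yields the higher reward.
rng = np.random.default_rng(0)

theta = np.zeros(2)   # actor: softmax preferences over the two actions
v = 0.0               # critic: value estimate of the single state
alpha_actor, alpha_critic = 0.1, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    reward = 1.0 if a == 1 else 0.0   # action 1 is the better action
    td_error = reward - v             # one-step TD error (episode ends immediately)
    v += alpha_critic * td_error      # critic moves toward observed return
    grad_log_pi = -probs              # grad of log softmax policy w.r.t. theta
    grad_log_pi[a] += 1.0
    theta += alpha_actor * td_error * grad_log_pi  # actor follows the critic's signal

print(softmax(theta))  # policy ends up strongly favouring action 1
```

The critic's TD error replaces the raw return as the learning signal for the actor, which is the key variance-reduction idea the abstract's "actor-critic" label refers to.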

Continuous Control

Improved Soft Actor-Critic: Mixing Prioritized Off-Policy Samples with On-Policy Experience

no code implementations • 24 Sep 2021 • Chayan Banerjee, Zhiyong Chen, Nasimul Noman

The proposed method is comparatively more stable and sample efficient when tested on a number of continuous control tasks in MuJoCo environments.

Continuous Control

Optimal Actor-Critic Policy with Optimized Training Datasets

no code implementations • 16 Aug 2021 • Chayan Banerjee, Zhiyong Chen, Nasimul Noman, Mohsen Zamani

Actor-critic (AC) algorithms are known for their efficacy and high performance in solving reinforcement learning problems, but they also suffer from low sampling efficiency.
