Search Results for author: Dogan C. Cicek

Found 6 papers, 4 papers with code

Actor Prioritized Experience Replay

1 code implementation · 1 Sep 2022 · Baturay Saglam, Furkan B. Mutlu, Dogan C. Cicek, Suleyman S. Kozat

A widely studied deep reinforcement learning (RL) technique known as Prioritized Experience Replay (PER) allows agents to learn from transitions sampled with non-uniform probability proportional to their temporal-difference (TD) error.

Continuous Control · Reinforcement Learning (RL)
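The abstract above states the core of proportional PER: transitions are sampled with probability proportional to their TD error. A minimal sketch of that sampling scheme, assuming the common priority form P(i) ∝ (|TD_i| + eps)^alpha; the class and parameter names are illustrative, not taken from the paper's code:

```python
import random

class ProportionalReplay:
    """Sketch of proportional prioritized sampling: P(i) ∝ (|TD_i| + eps)^alpha."""

    def __init__(self, eps=1e-2, alpha=0.6):
        self.eps = eps      # keeps zero-error transitions sampleable
        self.alpha = alpha  # interpolates between uniform (0) and greedy (1) sampling
        self.buffer = []
        self.priorities = []

    def add(self, transition, td_error):
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k):
        # Weighted sampling with replacement, proportional to stored priorities.
        idxs = random.choices(range(len(self.buffer)), weights=self.priorities, k=k)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with the newly computed TD errors.
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

High-TD transitions dominate the sampled batches, which is exactly the non-uniform replay the abstract refers to.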

Mitigating Off-Policy Bias in Actor-Critic Methods with One-Step Q-learning: A Novel Correction Approach

1 code implementation · 1 Aug 2022 · Baturay Saglam, Dogan C. Cicek, Furkan B. Mutlu, Suleyman S. Kozat

Compared to on-policy counterparts, off-policy model-free deep reinforcement learning can improve data efficiency by repeatedly using the previously gathered data.

Continuous Control · Q-Learning · +2

Safe and Robust Experience Sharing for Deterministic Policy Gradient Algorithms

1 code implementation · 27 Jul 2022 · Baturay Saglam, Dogan C. Cicek, Furkan B. Mutlu, Suleyman S. Kozat

Learning in high-dimensional continuous tasks is challenging, especially when the experience replay memory is very limited.

Continuous Control · OpenAI Gym · +1

AWD3: Dynamic Reduction of the Estimation Bias

no code implementations · 12 Nov 2021 · Dogan C. Cicek, Enes Duran, Baturay Saglam, Kagan Kaya, Furkan B. Mutlu, Suleyman S. Kozat

We show through continuous control environments of OpenAI Gym that our algorithm matches or outperforms the state-of-the-art off-policy policy gradient learning algorithms.

Continuous Control · OpenAI Gym · +1

Off-Policy Correction for Deep Deterministic Policy Gradient Algorithms via Batch Prioritized Experience Replay

no code implementations · 2 Nov 2021 · Dogan C. Cicek, Enes Duran, Baturay Saglam, Furkan B. Mutlu, Suleyman S. Kozat

In addition, experience replay stores transitions generated by the agent's previous policies, which may deviate significantly from its most recent policy.

Computational Efficiency · Continuous Control

Estimation Error Correction in Deep Reinforcement Learning for Deterministic Actor-Critic Methods

1 code implementation · 22 Sep 2021 · Baturay Saglam, Enes Duran, Dogan C. Cicek, Furkan B. Mutlu, Suleyman S. Kozat

We show that in deep actor-critic methods designed to overcome the overestimation bias, a significant underestimation bias arises when the reinforcement signals received by the agent have high variance.

Continuous Control · OpenAI Gym · +3
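The effect this abstract describes can be seen with a toy numeric sketch (not the paper's method; `true_q` and the noise levels below are arbitrary assumptions): taking the minimum of two independent, unbiased critic estimates — a standard remedy for overestimation — has an expected value below the true one, and the gap grows with the variance of the signal.

```python
import random

random.seed(0)
true_q = 10.0  # hypothetical true action value

def mean_min_of_two(noise_std, n=50_000):
    """Average of min(Q1, Q2), where Q1, Q2 are unbiased estimates of true_q."""
    return sum(
        min(random.gauss(true_q, noise_std), random.gauss(true_q, noise_std))
        for _ in range(n)
    ) / n

low_var = mean_min_of_two(0.5)   # mild noise: small underestimation
high_var = mean_min_of_two(5.0)  # high variance: the underestimation grows
```

For two independent Gaussian estimates, E[min(Q1, Q2)] = true_q − σ/√π, so the underestimation scales linearly with the noise standard deviation — the high-variance regime the abstract refers to.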
