Dimension-Wise Importance Sampling Weight Clipping for Sample-Efficient Reinforcement Learning

In importance sampling (IS)-based reinforcement learning algorithms such as Proximal Policy Optimization (PPO), IS weights are typically clipped to avoid large variance in learning. However, policy updates from clipped statistics induce large bias in tasks with high action dimensions, and this clipping bias makes it difficult to reuse old samples with large IS weights...
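To make the contrast concrete, here is a minimal NumPy sketch comparing PPO's standard clipped surrogate (which clips the joint IS ratio) with a dimension-wise variant that clips the per-action-dimension ratios separately, assuming a factorized policy so the joint ratio is the product of per-dimension ratios. Function names and the exact recombination are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    # Standard PPO surrogate: clip the joint IS ratio, take the
    # pessimistic minimum of the clipped and unclipped terms.
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage)

def dimension_wise_clipped_objective(log_ratios, advantage, eps=0.2):
    # log_ratios: per-action-dimension log IS ratios, shape (batch, action_dim),
    # valid when the policy factorizes across action dimensions.
    per_dim = np.exp(log_ratios)                  # per-dimension IS ratios
    clipped = np.clip(per_dim, 1 - eps, 1 + eps)  # clip each dimension separately
    joint = np.prod(per_dim, axis=-1)             # unclipped joint ratio
    joint_clipped = np.prod(clipped, axis=-1)     # recombined clipped ratio
    return np.minimum(joint * advantage, joint_clipped * advantage)

# Illustrative case: two dimensions drift in opposite directions (3.0 and 1/3),
# so the joint ratio is exactly 1 and standard PPO clipping does nothing,
# while dimension-wise clipping still constrains each dimension.
log_ratios = np.log(np.array([[3.0, 1.0 / 3.0]]))
adv = np.array([1.0])
print(ppo_clipped_objective(np.array([1.0]), adv))       # joint ratio 1.0 is unclipped
print(dimension_wise_clipped_objective(log_ratios, adv)) # per-dim clipping bites
```

The example highlights the motivation stated in the abstract: clipping only the joint ratio can hide large per-dimension deviations in high-dimensional action spaces, which is where dimension-wise clipping differs.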
