On-Policy TD Control

Sarsa

Sarsa is an on-policy TD control algorithm:

$$Q\left(S_{t}, A_{t}\right) \leftarrow Q\left(S_{t}, A_{t}\right) + \alpha\left[R_{t+1} + \gamma Q\left(S_{t+1}, A_{t+1}\right) - Q\left(S_{t}, A_{t}\right)\right]$$

This update is performed after every transition from a nonterminal state $S_{t}$. If $S_{t+1}$ is terminal, then $Q\left(S_{t+1}, A_{t+1}\right)$ is defined as zero.

To design an on-policy control algorithm with Sarsa, we continually estimate $q_{\pi}$ for the behaviour policy $\pi$ while gradually changing $\pi$ towards greediness with respect to $q_{\pi}$ (for example, by acting $\varepsilon$-greedily with respect to the current estimate).
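The update rule and the terminal-state convention above can be sketched as a tabular Sarsa loop with an $\varepsilon$-greedy behaviour policy. The 5-state chain environment here is a hypothetical example chosen only to make the sketch self-contained; it is not from the source text.

```python
import random
from collections import defaultdict

# Hypothetical 5-state chain: states 0..4, state 4 is terminal.
# Reward 1 on reaching the goal, 0 otherwise.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right


def step(state, action):
    """Return (next_state, reward, done) for the chain environment."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done


def epsilon_greedy(Q, state, epsilon):
    """Behaviour policy: mostly greedy w.r.t. Q, explores with prob. epsilon."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def sarsa(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.3, seed=0):
    random.seed(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        a = epsilon_greedy(Q, s, epsilon)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(Q, s2, epsilon)
            # Q(S_{t+1}, A_{t+1}) is defined as zero at terminal states.
            target = r + (0.0 if done else gamma * Q[(s2, a2)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q


Q = sarsa()
```

Because the policy used to select $A_{t+1}$ is the same one being improved, the algorithm is on-policy: the learned $q_\pi$ reflects the $\varepsilon$-greedy behaviour, not the purely greedy policy.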

Source: Sutton and Barto, Reinforcement Learning, 2nd Edition

Tasks

Task | Papers | Share
--- | --- | ---
Reinforcement Learning (RL) | 32 | 55.17%
Decision Making | 3 | 5.17%
Continuous Control | 3 | 5.17%
Combinatorial Optimization | 2 | 3.45%
OpenAI Gym | 2 | 3.45%
Management | 2 | 3.45%
Classification | 1 | 1.72%
Autonomous Driving | 1 | 1.72%
Board Games | 1 | 1.72%
