1 code implementation • NeurIPS 2023 • Ev Zisselman, Itai Lavie, Daniel Soudry, Aviv Tamar
Our insight is that a policy that effectively explores the domain is harder to memorize than one that maximizes reward for a specific task, so we expect such learned behavior to generalize well; we demonstrate this empirically on several domains that are difficult for invariance-based approaches.
no code implementations • 24 Sep 2021 • Aviv Tamar, Daniel Soudry, Ev Zisselman
In the Bayesian reinforcement learning (RL) setting, a prior distribution over the unknown problem parameters -- the rewards and transitions -- is assumed, and a policy that optimizes the (posterior) expected return is sought.
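The snippet above can be illustrated with a minimal, hypothetical sketch (not from the paper): in the simplest Bayesian setting, a two-armed Bernoulli bandit with a Beta prior over the unknown arm means, the "policy that optimizes the posterior expected return" is just the arm with the higher posterior mean reward, which we estimate by Monte Carlo sampling from the posterior.

```python
import random

# Hypothetical illustration of Bayesian RL in its simplest form:
# a 2-armed Bernoulli bandit whose unknown arm means are given a
# Beta prior. The posterior expected return of pulling an arm is
# the posterior mean of that arm's success probability.

def posterior_expected_return(alpha, beta, n_samples=20000, seed=0):
    """Monte Carlo estimate of E[theta] for theta ~ Beta(alpha, beta)."""
    rng = random.Random(seed)
    total = sum(rng.betavariate(alpha, beta) for _ in range(n_samples))
    return total / n_samples

# Uniform Beta(1, 1) prior; suppose we observed 8 successes / 2 failures
# on arm A and 3 successes / 7 failures on arm B. The posterior for each
# arm is Beta(1 + successes, 1 + failures).
value_a = posterior_expected_return(1 + 8, 1 + 2)  # close to 9/12 = 0.75
value_b = posterior_expected_return(1 + 3, 1 + 7)  # close to 4/12 ≈ 0.33

# The Bayes-optimal myopic policy pulls the arm with the higher
# posterior expected return.
best_arm = "A" if value_a > value_b else "B"
```

This toy case ignores the exploration value of information that full Bayes-adaptive planning would account for; it only shows what "optimizing the posterior expected return" means mechanically.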
1 code implementation • CVPR 2020 • Ev Zisselman, Aviv Tamar
Specifically, we demonstrate the effectiveness of our method on ResNet and DenseNet architectures trained on various image datasets.
1 code implementation • CVPR 2019 • Ev Zisselman, Jeremias Sulam, Michael Elad
The Convolutional Sparse Coding (CSC) model has recently gained considerable traction in the signal and image processing communities.
2 code implementations • 1 Nov 2018 • Ev Zisselman, Jeremias Sulam, Michael Elad