no code implementations • 29 Nov 2023 • Amin Rakhsha, Mete Kemertas, Mohammad Ghavamzadeh, Amir-Massoud Farahmand
We propose and theoretically analyze an approach for planning with an approximate model in reinforcement learning that can reduce the adverse impact of model error.
1 code implementation • 17 Jul 2023 • Mete Kemertas, Allan D. Jepson, Amir-Massoud Farahmand
We design a novel algorithm for optimal transport by drawing on the entropic optimal transport, mirror descent, and conjugate gradient literatures.
1 code implementation • 6 Feb 2022 • Mete Kemertas, Allan Jepson
Based on these results, we design an API($\alpha$) procedure that employs conservative policy updates and enjoys better performance bounds than the naive API approach.
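The API($\alpha$) details are in the paper; as a rough illustration of what a conservative policy update looks like in tabular form (a mixture step toward the greedy policy, in the spirit of conservative policy iteration — the function name and toy numbers are assumptions, not the paper's procedure):

```python
import numpy as np

def conservative_update(pi, Q, alpha=0.1):
    """One conservative policy-improvement step on a tabular policy.

    pi    : current policy, shape (n_states, n_actions), rows sum to 1
    Q     : action-value estimates for pi, same shape
    alpha : mixing step size; alpha=1 recovers the naive greedy API update
    """
    greedy = np.zeros_like(pi)
    greedy[np.arange(pi.shape[0]), Q.argmax(axis=1)] = 1.0
    # move a fraction alpha toward the greedy policy instead of jumping to it
    return (1 - alpha) * pi + alpha * greedy

# toy example: 2 states, 2 actions, uniform starting policy
pi = np.full((2, 2), 0.5)
Q = np.array([[1.0, 0.0],
              [0.0, 1.0]])
new_pi = conservative_update(pi, Q, alpha=0.2)
# new_pi = [[0.6, 0.4], [0.4, 0.6]]
```

Keeping $\alpha < 1$ bounds how far each update can move the policy, which is what enables the tighter performance guarantees the abstract refers to.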
1 code implementation • NeurIPS 2021 • Mete Kemertas, Tristan Aumentado-Armstrong
Learned representations in deep reinforcement learning (DRL) must extract task-relevant information from complex observations, balancing robustness to distractions against informativeness to the policy.
no code implementations • EACL 2021 • Ákos Kádár, Lan Xiao, Mete Kemertas, Federico Fancellu, Allan Jepson, Afsaneh Fazly
We do so by casting dependency parsing as a tree embedding problem where we incorporate geometric properties of dependency trees in the form of training losses within a graph-based parser.
no code implementations • CVPR 2020 • Mete Kemertas, Leila Pishdad, Konstantinos G. Derpanis, Afsaneh Fazly
We introduce an information-theoretic loss function, RankMI, and an associated training algorithm for deep representation learning for image retrieval.
no code implementations • 21 Aug 2019 • Tim Capes, Vishal Raheja, Mete Kemertas, Iqbal Mohomed
In this paper, we analyze the mathematics of ring architectures and make an informed adaptation of dynamic scheduling to them.