no code implementations • 31 Oct 2023 • Haolun Wu, Ofer Meshi, Masrour Zoghi, Fernando Diaz, Xue Liu, Craig Boutilier, Maryam Karimzadehgan
Accurate modeling of the diverse and dynamic interests of users remains a significant challenge in the design of personalized recommender systems.
1 code implementation • 25 Jan 2023 • Javad Azizi, Ofer Meshi, Masrour Zoghi, Maryam Karimzadehgan
The recent literature on online learning to rank (LTR) has established the utility of prior knowledge for Bayesian ranking bandit algorithms.
no code implementations • 29 May 2019 • Martin Mladenov, Ofer Meshi, Jayden Ooi, Dale Schuurmans, Craig Boutilier
Latent-state environments with long horizons, such as those faced by recommender systems, pose significant challenges for reinforcement learning (RL).
no code implementations • 4 Apr 2019 • Chih-Wei Hsu, Branislav Kveton, Ofer Meshi, Martin Mladenov, Csaba Szepesvari
In this work, we pioneer the idea of algorithm design by minimizing the empirical Bayes regret, the average regret over problem instances sampled from a known distribution.
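The quantity named here, the empirical Bayes regret, is the average regret over problem instances drawn from a known distribution. A minimal sketch of estimating it, assuming (for illustration only, not as the paper's setup) Bernoulli bandit instances with arm means drawn uniformly and a standard UCB1 learner:

```python
# Sketch: estimate empirical Bayes regret by averaging a bandit
# algorithm's realized regret over instances sampled from a prior.
# The uniform prior and UCB1 learner are illustrative assumptions.
import math
import random

def run_ucb1(means, horizon, rng):
    """Run UCB1 on a Bernoulli bandit; return the realized regret."""
    k = len(means)
    counts = [0] * k
    values = [0.0] * k
    reward_sum = 0.0
    for t in range(horizon):
        if t < k:
            arm = t  # play each arm once to initialize
        else:
            arm = max(range(k),
                      key=lambda a: values[a] + math.sqrt(2 * math.log(t + 1) / counts[a]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # running mean
        reward_sum += r
    return horizon * max(means) - reward_sum

def empirical_bayes_regret(n_instances=200, k=3, horizon=500, seed=0):
    """Average regret over instances with arm means ~ i.i.d. Uniform(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_instances):
        means = [rng.random() for _ in range(k)]
        total += run_ucb1(means, horizon, rng)
    return total / n_instances
```

Minimizing this Monte-Carlo estimate over an algorithm's tunable parameters is the design principle the sentence describes.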
1 code implementation • NeurIPS 2018 • Colin Graber, Ofer Meshi, Alexander Schwing
Deep structured models are widely used for tasks like semantic segmentation, where explicit correlations between variables provide important prior information that generally helps reduce the data needs of deep nets.
2 code implementations • ICLR 2019 • Irwan Bello, Sayali Kulkarni, Sagar Jain, Craig Boutilier, Ed Chi, Elad Eban, Xiyang Luo, Alan Mackey, Ofer Meshi
Ranking is a central task in machine learning and information retrieval.
no code implementations • 7 May 2018 • Craig Boutilier, Alon Cohen, Amit Daniely, Avinatan Hassidim, Yishay Mansour, Ofer Meshi, Martin Mladenov, Dale Schuurmans
From an RL perspective, we show that Q-learning with sampled action sets is sound.
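The soundness claim can be illustrated with a minimal sketch: tabular Q-learning on a toy chain MDP (an assumed environment, not the paper's) where only a sampled subset of the action set is available at each step, and the bootstrap target maximizes over the next step's sampled set:

```python
# Sketch: Q-learning with sampled action sets on a toy chain MDP.
# Environment, sampling scheme, and hyperparameters are illustrative
# assumptions, not the paper's construction.
import random

def q_learning_sampled_actions(n_states=5, n_actions=4, episodes=300, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.2, 0.9, 0.2
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        for _ in range(100):                          # step cap per episode
            avail = rng.sample(range(n_actions), 2)   # sampled action set
            if rng.random() < eps:
                a = rng.choice(avail)                 # explore within the set
            else:
                a = max(avail, key=lambda x: Q[s][x]) # greedy within the set
            s2 = s + 1 if a == 0 else s               # only action 0 moves right
            r = 1.0 if s2 == goal else 0.0
            next_avail = rng.sample(range(n_actions), 2)
            # bootstrap maximizes over the NEXT step's sampled action set
            bootstrap = 0.0 if s2 == goal else gamma * max(Q[s2][x] for x in next_avail)
            Q[s][a] += alpha * (r + bootstrap - Q[s][a])
            if s2 == goal:
                break
            s = s2
    return Q
```

The key detail is that both action selection and the max in the target range only over the currently sampled set, which is what the soundness result concerns.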
no code implementations • NeurIPS 2017 • Ofer Meshi, Alexander Schwing
Finding the maximum a posteriori (MAP) assignment is a central task in graphical models.
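For chain-structured models, the MAP assignment can be found exactly by max-product dynamic programming (Viterbi). A minimal sketch with toy log-potentials, purely for illustration of the task:

```python
# Sketch: exact MAP inference on a chain MRF via max-product
# dynamic programming with backtracking. Potentials are toy numbers.
import numpy as np

def chain_map(unary, pairwise):
    """unary: (n, k) log-potentials per node; pairwise: (k, k) shared
    log-potential between neighbors. Returns the maximizing assignment."""
    n, k = unary.shape
    msg = unary[0].copy()                 # best score ending in each state
    back = np.zeros((n, k), dtype=int)    # backpointers
    for i in range(1, n):
        scores = msg[:, None] + pairwise + unary[i][None, :]
        back[i] = scores.argmax(axis=0)
        msg = scores.max(axis=0)
    states = [int(msg.argmax())]          # best final state
    for i in range(n - 1, 0, -1):
        states.append(int(back[i, states[-1]]))
    return states[::-1]
```

For general (loopy) graphs this problem is NP-hard, which is what motivates the approximate and dual-decomposition methods studied in several of the papers listed here.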
no code implementations • NeurIPS 2016 • Dan Garber, Ofer Meshi
Moreover, when the optimal solution is sparse, the new convergence rate replaces a factor that is at least linear in the dimension in previous work with a linear dependence on the number of non-zeros in the optimal solution.
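For context, a minimal sketch of the vanilla Frank-Wolfe scheme this line of work refines, minimizing an assumed toy quadratic over the probability simplex; each iteration adds at most one vertex, so the iterate after t steps has at most t + 1 non-zeros, which is why sparsity-dependent rates are natural here:

```python
# Sketch: vanilla Frank-Wolfe over the probability simplex for
# f(x) = 0.5 * ||x - b||^2. The objective is an illustrative choice.
import numpy as np

def frank_wolfe_simplex(b, iters=200):
    d = len(b)
    x = np.zeros(d)
    x[0] = 1.0                                  # start at a vertex
    for t in range(iters):
        grad = x - b                            # gradient of 0.5*||x - b||^2
        s = np.zeros(d)
        s[int(np.argmin(grad))] = 1.0           # LMO: best simplex vertex
        gamma = 2.0 / (t + 2)                   # standard step-size schedule
        x = (1 - gamma) * x + gamma * s         # convex combination
    return x
```

The standard analysis gives an O(1/t) rate; the sharper, linearly convergent variants the paper analyzes (e.g. with away steps) are where the non-zero count of the optimum enters.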
no code implementations • NeurIPS 2015 • Ofer Meshi, Mehrdad Mahdavi, Alex Schwing
Maximum a posteriori (MAP) inference is an important task for many applications.
no code implementations • 4 Nov 2015 • Ofer Meshi, Mehrdad Mahdavi, Adrian Weller, David Sontag
Structured prediction is used in areas such as computer vision and natural language processing to predict structured outputs such as segmentations or parse trees.
no code implementations • 20 Oct 2015 • Heejin Choi, Ofer Meshi, Nathan Srebro
We present an efficient method for training slack-rescaled structural SVMs.
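For concreteness, a hedged sketch contrasting the slack-rescaled structured hinge loss with the more common margin-rescaled one, over a small explicitly enumerated output space (the enumeration is for illustration; making slack rescaling tractable for structured outputs is precisely the paper's contribution):

```python
# Sketch: margin- vs slack-rescaled structured hinge on an enumerated
# output space. `scores` maps outputs to model scores, `losses` maps
# outputs to task loss Δ(y, y_true); toy values, illustrative only.
def margin_rescaled(scores, losses, y_true):
    """max(0, max_y [Δ(y) + s(y) - s(y_true)]) — loss enters additively."""
    s_true = scores[y_true]
    return max(0.0, max(losses[y] + scores[y] - s_true for y in scores))

def slack_rescaled(scores, losses, y_true):
    """max(0, max_y [Δ(y) * (1 + s(y) - s(y_true))]) — loss enters multiplicatively."""
    s_true = scores[y_true]
    return max(0.0, max(losses[y] * (1.0 + scores[y] - s_true) for y in scores))
```

In margin rescaling the loss enters the loss-augmented maximization additively, so it often decomposes over output parts; in slack rescaling it enters multiplicatively, which is what makes efficient loss-augmented inference, and hence training, non-trivial.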
no code implementations • 26 Sep 2013 • Ofer Meshi, Elad Eban, Gal Elidan, Amir Globerson
We demonstrate the effectiveness of our approach on several domains and show that, despite the relative simplicity of the structure, prediction accuracy is competitive with a fully connected model that is computationally costly at prediction time.
no code implementations • NeurIPS 2012 • Ofer Meshi, Amir Globerson, Tommi S. Jaakkola
We also provide a simple dual to primal mapping that yields feasible primal solutions with a guaranteed rate of convergence.
no code implementations • NeurIPS 2010 • David Sontag, Ofer Meshi, Amir Globerson, Tommi S. Jaakkola
The problem of learning to predict structured labels is of key importance in many applications.