no code implementations • NeurIPS Workshop SVRHM 2021 • Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim
To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.
no code implementations • 31 May 2020 • Sheng Xu, Zhou Fan, Sahand Negahban
We study estimation of a gradient-sparse parameter vector $\boldsymbol{\theta}^* \in \mathbb{R}^p$, having strong gradient-sparsity $s^*:=\|\nabla_G \boldsymbol{\theta}^*\|_0$ on an underlying graph $G$.
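The gradient-sparsity count $s^*$ has a direct combinatorial reading: it is the number of edges of $G$ across which $\boldsymbol{\theta}^*$ changes value. A minimal sketch of that count (the function name and example graph are illustrative, not from the paper):

```python
# Hypothetical sketch: gradient-sparsity s* = ||∇_G θ||_0 of a vector θ on a
# graph G, i.e. the number of edges whose two endpoint values differ.

def gradient_sparsity(theta, edges, tol=1e-12):
    """Count edges (i, j) of G across which theta jumps."""
    return sum(1 for i, j in edges if abs(theta[i] - theta[j]) > tol)

# Path graph on 5 nodes; theta is piecewise constant with a single jump:
theta = [1.0, 1.0, 1.0, 3.0, 3.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(gradient_sparsity(theta, edges))  # 1: only the edge (2, 3) crosses a jump
```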
no code implementations • 22 Oct 2018 • Hamid Dadkhahi, Sahand Negahban
We consider the problem of collaborative filtering in the online setting, where items are recommended to users over time.
1 code implementation • ICML 2020 • Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger
Feature selection problems have been extensively studied for linear estimation, for instance, Lasso, but less emphasis has been placed on feature selection for non-linear functions.
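As a point of reference for the linear case mentioned above, here is a textbook-style Lasso solver via cyclic coordinate descent with soft-thresholding (an illustration of the linear baseline only, not the paper's non-linear method; all names are ours):

```python
# Illustrative sketch: Lasso feature selection by cyclic coordinate descent.
# Coordinates whose correlation with the residual falls below lam are set to
# exactly zero, which is what drives feature selection.

def soft_threshold(z, lam):
    return max(z - lam, 0.0) - max(-z - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual (beta_j held out)
            r_j = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            beta[j] = soft_threshold(r_j, lam) / col_sq[j] if col_sq[j] else 0.0
    return beta

# y depends only on the first feature; the second is noise and gets zeroed out.
X = [[1.0, 0.5], [2.0, -0.3], [3.0, 0.1], [4.0, -0.2]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=0.5)
# beta[0] ≈ 2 (slightly shrunk), beta[1] is exactly 0
```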
no code implementations • NeurIPS 2017 • Addison Hu, Sahand Negahban
In particular, when the distribution over variables is assumed to be multivariate normal, the sparsity pattern in the inverse covariance matrix, commonly referred to as the precision matrix, corresponds to the adjacency matrix representation of the Gauss-Markov graph, which encodes conditional independence statements between variables.
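The correspondence described here is easy to operationalize: the support of the off-diagonal entries of the precision matrix is exactly the adjacency matrix of the Gauss-Markov graph. A small sketch (the function name and example matrix are ours):

```python
# Sketch: reading the Gauss-Markov graph off a precision matrix Θ.
# A zero at Θ[i][j] (i != j) encodes conditional independence of X_i and X_j
# given all remaining variables; nonzero off-diagonal entries are graph edges.

def precision_to_adjacency(theta, tol=1e-10):
    p = len(theta)
    return [[1 if i != j and abs(theta[i][j]) > tol else 0 for j in range(p)]
            for i in range(p)]

# Tridiagonal precision matrix of a chain X1 - X2 - X3:
theta = [[ 2.0, -1.0,  0.0],
         [-1.0,  2.0, -1.0],
         [ 0.0, -1.0,  2.0]]
adj = precision_to_adjacency(theta)
# adj == [[0,1,0],[1,0,1],[0,1,0]]: X1 and X3 are conditionally independent given X2
```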
no code implementations • 24 Apr 2017 • Sahand Negahban, Sewoong Oh, Kiran K. Thekumparampil, Jiaming Xu
This also allows one to compute similarities among users and items to be used for categorization and search.
no code implementations • 8 Mar 2017 • Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban, Joydeep Ghosh
Furthermore, we show that a bounded submodularity ratio can be used to provide data-dependent bounds that are sometimes tighter even for submodular functions.
no code implementations • ICML 2017 • Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban
We provide new approximation guarantees for greedy low rank matrix estimation under standard assumptions of restricted strong convexity and smoothness.
no code implementations • 2 Dec 2016 • Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand Negahban
Our results extend the work of Das and Kempe (2011) from the setting of linear regression to arbitrary objective functions.
no code implementations • 30 Oct 2016 • Ningyuan Chen, Donald K. K. Lee, Sahand Negahban
Exploiting the fact that most arrival processes exhibit cyclic behaviour, we propose a simple procedure for estimating the intensity of a nonhomogeneous Poisson process.
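One simple way to exploit cyclic behaviour, shown here only as an illustration of the idea and not as the paper's procedure, is to fold arrival times modulo the known period and average counts across cycles:

```python
# Assumed illustration: a histogram estimate of a periodic Poisson intensity.
# Arrival times are reduced modulo the cycle length, binned, and normalized by
# (number of cycles) x (bin width) so each bin estimates a rate per unit time.

def cyclic_intensity(arrivals, period, n_cycles, n_bins):
    width = period / n_bins
    counts = [0] * n_bins
    for t in arrivals:
        counts[min(int((t % period) / width), n_bins - 1)] += 1
    return [c / (n_cycles * width) for c in counts]

# Two cycles of length 10, with all arrivals in the first half of each cycle:
arrivals = [0.5, 1.2, 2.8, 4.1, 10.3, 11.7, 13.2, 14.6]
rate = cyclic_intensity(arrivals, period=10.0, n_cycles=2, n_bins=2)
# rate ≈ [0.8, 0.0]: the estimated intensity concentrates in the first half-cycle
```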
no code implementations • 17 Nov 2015 • Uri Shaham, Yutaro Yamada, Sahand Negahban
We propose a general framework for increasing local stability of Artificial Neural Nets (ANNs) using Robust Optimization (RO).
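The core robust-optimization move, perturbing each input within a norm ball in the direction that most increases the loss, can be sketched on a linear model with squared loss (an illustration of the idea only, not the paper's network training; all names are ours):

```python
# Hypothetical sketch: the worst-case l_inf perturbation of an input x for a
# linear model with squared loss moves each coordinate by eps in the direction
# of the loss gradient with respect to x. Training on such perturbed inputs is
# the robust-optimization recipe for local stability.

def sign(v):
    return (v > 0) - (v < 0)

def sq_loss(w, x, y):
    return (sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2

def adversarial_input(w, x, y, eps):
    resid = sum(wi * xi for wi, xi in zip(w, x)) - y
    grad_x = [2 * resid * wi for wi in w]              # d loss / d x
    return [xi + eps * sign(g) for xi, g in zip(x, grad_x)]

w, x, y, eps = [1.0, -2.0], [0.5, 0.5], 0.0, 0.1
x_adv = adversarial_input(w, x, y, eps)
# the loss at x_adv exceeds the loss at x, while x_adv stays within the eps-ball
```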
no code implementations • NeurIPS 2012 • Alekh Agarwal, Sahand Negahban, Martin J. Wainwright
We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse.
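For an $\ell_1$-regularized strongly convex loss, the kind of stochastic update studied in this setting can be sketched as a proximal SGD step: a gradient step on a single sample followed by soft-thresholding (an assumed illustration, not the paper's algorithm; all names are ours):

```python
# Illustrative sketch: one proximal stochastic-gradient step for
# squared loss + lam * ||theta||_1 on a single sampled example (x, y).
# The prox of the l1 norm is coordinate-wise soft-thresholding, which keeps
# iterates (approximately) sparse.

def soft_threshold(z, lam):
    return max(z - lam, 0.0) - max(-z - lam, 0.0)

def prox_sgd_step(theta, x, y, step, lam):
    pred = sum(t * xi for t, xi in zip(theta, x))
    grad = [(pred - y) * xi for xi in x]               # gradient on this sample
    return [soft_threshold(t - step * g, step * lam)
            for t, g in zip(theta, grad)]

theta = prox_sgd_step([0.0, 0.0], [1.0, 0.0], 1.0, step=0.5, lam=0.1)
# the coordinate with zero gradient stays exactly zero; the other moves toward the fit
```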
no code implementations • NeurIPS 2012 • Sahand Negahban, Sewoong Oh, Devavrat Shah
In most settings, in addition to obtaining a ranking, finding ‘scores’ for each object (e.g., a player’s rating) is of interest for understanding the intensity of the preferences.
no code implementations • 8 Sep 2012 • Sahand Negahban, Sewoong Oh, Devavrat Shah
To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) for pair-wise comparisons) in which each object has an associated score which determines the probabilistic outcomes of pair-wise comparisons between objects.
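Under the BTL model, the probability that object $i$ beats object $j$ is determined by the two latent scores; with scores on the log scale it takes the familiar logistic form. A one-function sketch (the function name is ours):

```python
import math

# The BTL pairwise-comparison probability: with latent scores s_i on the log
# scale, P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)) — the MNL model
# restricted to pairs.

def btl_prob(score_i, score_j):
    return math.exp(score_i) / (math.exp(score_i) + math.exp(score_j))

print(round(btl_prob(1.0, 0.0), 3))  # 0.731: the higher-scored object wins more often
```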
no code implementations • NeurIPS 2010 • Alekh Agarwal, Sahand Negahban, Martin J. Wainwright
Many statistical $M$-estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer.
no code implementations • NeurIPS 2009 • Sahand Negahban, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar
The estimation of high-dimensional parametric models requires imposing some structure on the models, for instance that they be sparse, or that matrix structured parameters have low rank.
no code implementations • NeurIPS 2008 • Sahand Negahban, Martin J. Wainwright
We consider the following instance of transfer learning: given a pair of regression problems, suppose that the regression coefficients share a partially common support, parameterized by the fraction of overlap between the two supports.