Search Results for author: Sahand Negahban

Found 18 papers, 1 paper with code

Exploiting 3D Shape Bias towards Robust Vision

no code implementations • NeurIPS Workshop SVRHM 2021 • Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim

To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.

3D Reconstruction

Geon3D: Exploiting 3D Shape Bias towards Building Robust Machine Vision

no code implementations • 29 Sep 2021 • Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim

To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.

3D Reconstruction

Tree-Projected Gradient Descent for Estimating Gradient-Sparse Parameters on Graphs

no code implementations • 31 May 2020 • Sheng Xu, Zhou Fan, Sahand Negahban

We study estimation of a gradient-sparse parameter vector $\boldsymbol{\theta}^* \in \mathbb{R}^p$, having strong gradient-sparsity $s^*:=\|\nabla_G \boldsymbol{\theta}^*\|_0$ on an underlying graph $G$.
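
For readers new to the notation, here is a minimal sketch of what gradient-sparsity counts, using a made-up path graph; it illustrates the quantity only, not the paper's tree-projected estimator.

```python
import numpy as np

# Gradient-sparsity ||∇_G θ||_0 counts the edges of G whose endpoint values differ,
# i.e. how many "jumps" θ makes across the graph. Graph and θ below are made up.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]       # a path graph on 5 nodes
theta = np.array([1.0, 1.0, 1.0, 5.0, 5.0])    # piecewise constant along the path

gradient_sparsity = sum(theta[i] != theta[j] for i, j in edges)
print(gradient_sparsity)  # 1: only edge (2, 3) crosses a change point
```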

Alternating Linear Bandits for Online Matrix-Factorization Recommendation

no code implementations • 22 Oct 2018 • Hamid Dadkhahi, Sahand Negahban

We consider the problem of collaborative filtering in the online setting, where items are recommended to users over time.

Collaborative Filtering

Feature Selection using Stochastic Gates

1 code implementation • ICML 2020 • Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger

Feature selection problems have been extensively studied for linear estimation, for instance, Lasso, but less emphasis has been placed on feature selection for non-linear functions.
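
As a rough sketch of the idea (a clipped-Gaussian gate relaxation in the spirit of the paper; see the linked implementation for the exact formulation), each feature is multiplied by a stochastic gate and the expected number of open gates is penalized:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = np.array([0.6, -0.4, 0.1]), 0.5    # hypothetical gate parameters, one per feature

eps = rng.normal(0.0, sigma, size=mu.shape)
z = np.clip(mu + eps, 0.0, 1.0)                # stochastic gate in [0, 1] for each feature
x = np.array([2.0, -1.0, 0.7])                 # one input example
x_gated = z * x                                # gated features fed to a downstream model

# Probability each gate is open, P(mu + eps > 0) = Phi(mu / sigma); its sum acts as an
# L0-style sparsity regularizer added to the training loss.
expected_open_gates = norm.cdf(mu / sigma).sum()
```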

feature selection

Minimax Estimation of Bandable Precision Matrices

no code implementations • NeurIPS 2017 • Addison Hu, Sahand Negahban

In particular, when the distribution over variables is assumed to be multivariate normal, the sparsity pattern in the inverse covariance matrix, commonly referred to as the precision matrix, corresponds to the adjacency matrix representation of the Gauss-Markov graph, which encodes conditional independence statements between variables.
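
A small numerical illustration of that correspondence, with toy numbers and a bandwidth-1 ("bandable") structure: for a chain X1 - X2 - X3 the precision matrix is tridiagonal even though the covariance is dense, and the zero entry encodes X1 independent of X3 given X2.

```python
import numpy as np

precision = np.array([[ 2.0, -0.8,  0.0],
                      [-0.8,  2.0, -0.8],
                      [ 0.0, -0.8,  2.0]])      # tridiagonal: edges only between neighbours
covariance = np.linalg.inv(precision)

print(np.round(covariance, 3))   # dense: every pair is marginally correlated
print(precision[0, 2])           # 0.0: no edge (1, 3), i.e. X1 ⟂ X3 given X2
```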

Learning from Comparisons and Choices

no code implementations • 24 Apr 2017 • Sahand Negahban, Sewoong Oh, Kiran K. Thekumparampil, Jiaming Xu

This also allows one to compute similarities among users and items to be used for categorization and search.

Marketing • Recommendation Systems

Scalable Greedy Feature Selection via Weak Submodularity

no code implementations • 8 Mar 2017 • Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban, Joydeep Ghosh

Furthermore, we show that a bounded submodularity ratio can be used to provide data-dependent bounds that are sometimes tighter even for submodular functions.
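
For context, the submodularity ratio (in the form introduced by Das and Kempe, 2011, stated here generically) compares the summed individual gains of adding elements to the joint gain:

```latex
\gamma_{U,k} \;=\; \min_{\substack{L \subseteq U,\; S \cap L = \emptyset,\; |S| \le k}}
\frac{\sum_{x \in S} \big[ f(L \cup \{x\}) - f(L) \big]}{f(L \cup S) - f(L)} ,
```

so $\gamma \ge 1$ for submodular functions, while a ratio bounded away from zero is what "weak" submodularity asks for.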

feature selection

On Approximation Guarantees for Greedy Low Rank Optimization

no code implementations • ICML 2017 • Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban

We provide new approximation guarantees for greedy low rank matrix estimation under standard assumptions of restricted strong convexity and smoothness.
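
As a reminder of the two standard assumptions named above (stated generically, not verbatim from the paper), restricted strong convexity and restricted smoothness sandwich the objective between quadratics over a restricted set $\Omega$:

```latex
\frac{m}{2}\,\|y - x\|^2 \;\le\; f(y) - f(x) - \langle \nabla f(x),\, y - x \rangle \;\le\; \frac{M}{2}\,\|y - x\|^2
\qquad \text{for all } x, y \in \Omega .
```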

Combinatorial Optimization

Restricted Strong Convexity Implies Weak Submodularity

no code implementations • 2 Dec 2016 • Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand Negahban

Our results extend the work of Das and Kempe (2011) from the setting of linear regression to arbitrary objective functions.

feature selection

Super-resolution estimation of cyclic arrival rates

no code implementations • 30 Oct 2016 • Ningyuan Chen, Donald K. K. Lee, Sahand Negahban

Exploiting the fact that most arrival processes exhibit cyclic behaviour, we propose a simple procedure for estimating the intensity of a nonhomogeneous Poisson process.
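
As a point of reference only (a naive binning baseline, not the paper's super-resolution procedure), folding arrival times modulo the known cycle length already yields a crude estimate of the cyclic intensity:

```python
import numpy as np

rng = np.random.default_rng(0)
period, n_cycles, n_bins = 24.0, 50, 24                      # e.g. hourly bins over 50 days
arrivals = np.sort(rng.uniform(0, period * n_cycles, 5000))  # stand-in arrival times

folded = arrivals % period                                   # exploit the cyclic structure
counts, _ = np.histogram(folded, bins=n_bins, range=(0, period))
intensity_hat = counts / (n_cycles * (period / n_bins))      # arrivals per unit time, per bin
```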

Super-Resolution

Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization

no code implementations • 17 Nov 2015 • Uri Shaham, Yutaro Yamada, Sahand Negahban

We propose a general framework for increasing local stability of Artificial Neural Nets (ANNs) using Robust Optimization (RO).
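
A generic sketch of one robust-optimization training step (a single-step inner maximization is used here for illustration; the paper's inner solver and perturbation set may differ):

```python
import torch
import torch.nn.functional as F

def robust_step(model, optimizer, x, y, eps=0.1):
    # Inner maximization (approximate): find a worst-case perturbation of the input.
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).detach()

    # Outer minimization: take a gradient step on the loss at the perturbed input.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```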

Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions

no code implementations • NeurIPS 2012 • Alekh Agarwal, Sahand Negahban, Martin J. Wainwright

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse.
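
One building block of such methods, shown only to make the setting concrete (a plain stochastic proximal-gradient update with soft-thresholding, not the multi-stage algorithm analyzed in the paper):

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1; zeroes out small coordinates, promoting sparsity.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_prox_grad_step(theta, stochastic_grad, step_size, lam):
    # Gradient step on the (strongly convex) loss using a noisy gradient, then an l1 prox.
    return soft_threshold(theta - step_size * stochastic_grad, step_size * lam)
```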

Stochastic Optimization

Iterative ranking from pair-wise comparisons

no code implementations • NeurIPS 2012 • Sahand Negahban, Sewoong Oh, Devavrat Shah

In most settings, in addition to obtaining a ranking, finding ‘scores’ for each object (e.g., a player’s rating) is of interest for understanding the intensity of the preferences.

Rank Centrality: Ranking from Pair-wise Comparisons

no code implementations • 8 Sep 2012 • Sahand Negahban, Sewoong Oh, Devavrat Shah

To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) for pair-wise comparisons) in which each object has an associated score which determines the probabilistic outcomes of pair-wise comparisons between objects.
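
A compact sketch of the two ingredients named above, in our own paraphrase (see the paper for the precise construction): the BTL win probability, and a random walk that moves from item i to item j in proportion to how often j beat i, whose stationary distribution is used as the score vector.

```python
import numpy as np

def btl_win_prob(w_i, w_j):
    # BTL / MNL pairwise model: item i beats item j with probability w_i / (w_i + w_j).
    return w_i / (w_i + w_j)

def rank_centrality_scores(beat_frac, d_max):
    # beat_frac[i, j]: fraction of i-vs-j comparisons won by j (0 if never compared).
    # d_max: upper bound on the number of opponents per item, so rows sum to at most 1.
    P = beat_frac / d_max
    np.fill_diagonal(P, 0.0)
    P += np.diag(1.0 - P.sum(axis=1))            # lazy random walk: rows sum to 1
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return np.abs(pi) / np.abs(pi).sum()         # stationary distribution, normalized as scores
```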

Fast global convergence rates of gradient methods for high-dimensional statistical recovery

no code implementations • NeurIPS 2010 • Alekh Agarwal, Sahand Negahban, Martin J. Wainwright

Many statistical $M$-estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer.
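
Concretely, the estimators in question take the generic form below, with the Lasso as the familiar special case:

```latex
\hat{\theta} \;\in\; \arg\min_{\theta}\;\Big\{ \mathcal{L}_n(\theta) + \lambda_n\,\mathcal{R}(\theta) \Big\},
\qquad \text{e.g. } \frac{1}{2n}\,\|y - X\theta\|_2^2 + \lambda_n \|\theta\|_1 .
```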

Computational Efficiency • regression

A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

no code implementations • NeurIPS 2009 • Sahand Negahban, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar

The estimation of high-dimensional parametric models requires imposing some structure on the models, for instance that they be sparse, or that matrix structured parameters have low rank.
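
The organizing notion of the framework, as we understand it, is decomposability of the regularizer $\mathcal{R}$ with respect to a pair of subspaces $(\mathcal{M}, \bar{\mathcal{M}}^{\perp})$:

```latex
\mathcal{R}(\theta + \gamma) \;=\; \mathcal{R}(\theta) + \mathcal{R}(\gamma)
\qquad \text{for all } \theta \in \mathcal{M},\; \gamma \in \bar{\mathcal{M}}^{\perp},
```

a property satisfied, for example, by the $\ell_1$ norm for sparse vectors and by the nuclear norm for low-rank matrices.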

Phase transitions for high-dimensional joint support recovery

no code implementations • NeurIPS 2008 • Sahand Negahban, Martin J. Wainwright

We consider the following instance of transfer learning: given a pair of regression problems, suppose that the regression coefficients share a partially common support, parameterized by the overlap fraction between the two supports.

regression • Transfer Learning • +1
