no code implementations • 13 Feb 2024 • Haolin Zou, Arnab Auddy, Kamiar Rahnama Rad, Arian Maleki
Despite a large and significant body of recent work focused on estimating the out-of-sample risk of regularized models in the high-dimensional regime, a theoretical understanding of this problem for non-differentiable penalties such as the generalized LASSO and the nuclear norm is missing.
no code implementations • 26 Oct 2023 • Arnab Auddy, Haolin Zou, Kamiar Rahnama Rad, Arian Maleki
Recent theoretical work showed that approximate leave-one-out cross validation (ALO) is a computationally efficient and statistically reliable estimate of the leave-one-out risk LO (and of the out-of-sample risk OO) for generalized linear models with differentiable regularizers.
1 code implementation • 3 Mar 2020 • Kamiar Rahnama Rad, Wenda Zhou, Arian Maleki
We study the problem of out-of-sample risk estimation in the high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ can be less than one.
no code implementations • 5 Feb 2019 • Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu
This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.
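As a point of reference for the estimates named above, the LOOCV risk is the quantity computed the expensive way: refit the model $n$ times, each time holding out one observation. A minimal sketch for ridge regression (ridge stands in here for the regularized estimators studied in the paper; the function name and setup are illustrative, not from the source):

```python
import numpy as np

def loocv_risk(X, y, lam):
    """Brute-force leave-one-out CV risk for ridge regression.

    Refits the ridge estimator n times, each time excluding one
    observation, and averages the squared held-out residuals.
    This is the LOO quantity whose consistency (as an estimate of
    out-of-sample risk) the paper establishes.
    """
    n, p = X.shape
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                      # drop observation i
        G = X[mask].T @ X[mask] + lam * np.eye(p)     # regularized Gram matrix
        beta = np.linalg.solve(G, X[mask].T @ y[mask])
        errs[i] = (y[i] - X[i] @ beta) ** 2           # held-out squared error
    return errs.mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 20))
y = X @ rng.standard_normal(20) + rng.standard_normal(60)
risk = loocv_risk(X, y, lam=1.0)
```

The $n$ refits make this $O(n)$ times more expensive than a single fit, which is the cost that the ALO and AMP-based approximations are designed to avoid.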
2 code implementations • 30 Jan 2018 • Kamiar Rahnama Rad, Arian Maleki
Motivated by the low bias of the leave-one-out cross validation (LO) method, we propose a computationally efficient closed-form approximate leave-one-out formula (ALO) for a large class of regularized estimators.
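To illustrate the flavor of such a closed-form formula, here is a minimal sketch for ridge regression, the smooth special case in which the leave-one-out shortcut is exact: each LOO residual equals the in-sample residual divided by $1 - H_{ii}$, where $H$ is the hat matrix. (The function name is illustrative; the paper's ALO extends this idea to a much larger class of regularized estimators.)

```python
import numpy as np

def ridge_loo_residuals(X, y, lam):
    """Closed-form leave-one-out residuals for ridge regression.

    Uses the identity  y_i - x_i' beta_{-i} = (y_i - x_i' beta) / (1 - H_ii),
    where H = X (X'X + lam*I)^{-1} X' is the hat matrix, so no refitting
    is needed.  This exact smooth-case shortcut is the prototype that
    ALO generalizes to other regularized estimators.
    """
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)
    H = X @ np.linalg.solve(G, X.T)            # hat matrix of the ridge fit
    resid = y - H @ y                          # in-sample residuals
    return resid / (1.0 - np.diag(H))          # LOO residuals, one fit only

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) + rng.standard_normal(50)
alo_risk = np.mean(ridge_loo_residuals(X, y, lam=1.0) ** 2)
```

The whole computation costs one fit plus the diagonal of $H$, versus $n$ refits for exact LOO, which is the computational saving that motivates ALO.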
no code implementations • 24 Jun 2016 • Kamiar Rahnama Rad, Timothy A. Machado, Liam Paninski
On the other hand, sharing information between adjacent neurons can erroneously degrade estimates of tuning functions across space if there are sharp discontinuities in tuning between nearby neurons.