no code implementations • 3 Jun 2022 • Vignesh Subramanian, Rahul Arya, Anant Sahai
Via an overparameterized linear model with Gaussian features, we provide conditions under which minimum-norm interpolating solutions generalize well for multiclass classification, in an asymptotic setting where both the number of underlying features and the number of classes scale with the number of training points.
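The object studied above — the minimum-norm multiclass interpolator over Gaussian features — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's construction: the dimensions, random labels, and one-hot encoding here are assumptions chosen only to show that when the number of features exceeds the number of training points, the minimum-norm solution fits the training labels exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 500, 5               # training points, features (d >> n), classes

X = rng.standard_normal((n, d))    # Gaussian features, one row per training point
y = rng.integers(0, k, size=n)     # arbitrary class labels (illustrative, not from the paper)
Y = np.eye(k)[y]                   # one-hot label matrix, shape (n, k)

# Minimum-norm interpolator: W = X^T (X X^T)^{-1} Y.
# X has full row rank almost surely when d > n, so X X^T is invertible.
W = X.T @ np.linalg.solve(X @ X.T, Y)

# The interpolator achieves zero training error by construction:
pred = np.argmax(X @ W, axis=1)
```

Running this, `pred` matches `y` on every training point, since the scores `X @ W` reproduce the one-hot labels exactly; the paper's question is when this interpolating solution also generalizes to fresh data.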
no code implementations • 16 May 2020 • Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, Anant Sahai
We compare classification and regression tasks in an overparameterized linear model with Gaussian features.
no code implementations • 21 Oct 2019 • Anant Sahai, Joshua Sanz, Vignesh Subramanian, Caryn Tran, Kailas Vodrahalli
We investigate whether learning is possible under different levels of information sharing between distributed agents that are not necessarily co-designed.
no code implementations • 21 Mar 2019 • Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, Anant Sahai
A continuing mystery in understanding the empirical success of deep neural networks is their ability to achieve zero training error and generalize well, even when the training data is noisy and there are more parameters than data points.
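The phenomenon described above — zero training error on noisy data when parameters outnumber data points — is easy to reproduce in a linear toy model. The following is a minimal sketch under assumed dimensions and an assumed ground-truth signal, not the paper's setup: with more features than samples, the least-squares system is underdetermined, and `np.linalg.lstsq` returns the minimum-norm solution, which interpolates the noisy targets exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 300                          # more parameters than data points

X = rng.standard_normal((n, d))         # Gaussian features
w_true = np.zeros(d)
w_true[0] = 1.0                         # assumed sparse ground truth, for illustration
y = X @ w_true + 0.5 * rng.standard_normal(n)   # noisy targets

# In the underdetermined case, lstsq (via SVD) returns the minimum-norm solution.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Zero training error: the solution interpolates the noise perfectly.
train_err = np.max(np.abs(X @ w_hat - y))
```

Here `train_err` is zero up to floating-point precision even though the targets contain noise; whether such an interpolating solution can still generalize is exactly the mystery the abstract refers to.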