no code implementations • 6 Feb 2023 • Yash Chandak, Shiv Shankar, Venkata Gandikota, Philip S. Thomas, Arya Mazumdar
We propose a first-order method for convex optimization in which each step of gradient descent can use gradients from multiple parameters, rather than being restricted to the gradient at a single parameter.
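The excerpt does not say how the method combines the gradients, so the following is only a minimal illustrative sketch, not the paper's construction: gradients evaluated at the current parameter and at a few nearby probe points are averaged before the descent step. The function name multi_point_gradient_step, the probe count, and the probe scale 0.1 are all assumptions made for illustration.

```python
import numpy as np

def multi_point_gradient_step(grad, x, step_size, probes, rng):
    """One descent step that aggregates gradients from several parameters.

    Hypothetical sketch: the combination rule (a plain average over the
    current point and a few random probe points) is an assumption; the
    excerpt only says multiple parameters' gradients may be used per step.
    """
    points = [x] + [x + 0.1 * rng.standard_normal(x.shape) for _ in range(probes)]
    g = np.mean([grad(p) for p in points], axis=0)
    return x - step_size * g

# Usage on a simple convex quadratic f(x) = ||x||^2 / 2, so grad f(x) = x.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
for _ in range(100):
    x = multi_point_gradient_step(lambda p: p, x, step_size=0.1, probes=3, rng=rng)
print(np.linalg.norm(x))  # shrinks toward the noise floor near 0
```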
no code implementations • NeurIPS 2021 • Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal
In this work, we study the number of measurements sufficient for recovering the supports of all the component vectors in a mixture in both these models.
no code implementations • NeurIPS 2020 • Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal
We study the hitherto unstudied problem of upper-bounding the query complexity of recovering all the hyperplanes, especially in the case where the hyperplanes are sparse.
no code implementations • 20 Feb 2020 • Venkata Gandikota, Arya Mazumdar, Ankit Singh Rawat
In this paper, we present distributed generalized clustering algorithms that can handle large-scale data across multiple machines despite straggling or unreliable machines.
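The excerpt does not describe the paper's straggler-tolerance mechanism, so the sketch below only shows the distributed Lloyd-style loop such algorithms build on: each machine sends per-cluster sufficient statistics, and the coordinator simply skips machines that fail to respond in a round. The skipping policy and the names local_kmeans_stats and aggregate are assumptions for illustration, not the paper's method.

```python
import numpy as np

def local_kmeans_stats(data, centers):
    """Per-machine step: assign local points to the nearest center and
    return per-cluster sums and counts (the sufficient statistics)."""
    labels = np.argmin(
        np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2), axis=1)
    k, d = centers.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for c in range(k):
        mask = labels == c
        sums[c], counts[c] = data[mask].sum(axis=0), mask.sum()
    return sums, counts

def aggregate(stats, centers):
    """Coordinator step: combine whatever statistics arrived and recompute
    centers; stragglers simply contribute nothing this round (assumption)."""
    sums = sum(s for s, _ in stats)
    counts = sum(c for _, c in stats)
    new = centers.copy()
    nz = counts > 0
    new[nz] = sums[nz] / counts[nz][:, None]
    return new

# Toy run: three machines, each round roughly 80% of them respond in time.
rng = np.random.default_rng(1)
shards = [rng.standard_normal((50, 2)) + off for off in ([0, 0], [6, 6], [0, 6])]
centers = rng.standard_normal((3, 2))
for _ in range(10):
    stats = [local_kmeans_stats(s, centers) for s in shards if rng.random() < 0.8]
    if stats:
        centers = aggregate(stats, centers)
```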
no code implementations • 18 Nov 2019 • Venkata Gandikota, Daniel Kane, Raj Kumar Maity, Arya Mazumdar
In this work, we present a family of vector quantization schemes, vqSGD (Vector-Quantized Stochastic Gradient Descent), that provide an asymptotic reduction in communication cost with convergence guarantees for first-order distributed optimization.
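The excerpt names the family but not a specific construction, so the sketch below shows one natural vector quantizer of this flavor, assumed for illustration: a gradient is replaced by a single randomly chosen vertex of a scaled cross-polytope, with probabilities chosen so the estimate is unbiased, and only an index and a sign are communicated. The names vq_encode/vq_decode and the choice of scale c are assumptions, not the paper's API.

```python
import numpy as np

def vq_encode(g, c, rng):
    """Encode g as ONE vertex of the scaled cross-polytope {±c·e_i}, so a
    worker sends an (index, sign) pair: O(log d) bits instead of d floats.

    Requires c >= ||g||_1. Sampling vertex sign(g_i)·c·e_i with probability
    |g_i|/c, and splitting the leftover probability evenly between ±c·e_0
    (whose contributions cancel), makes the estimate unbiased: E[decode] = g.
    """
    d = g.size
    p = np.abs(g) / c
    leftover = max(0.0, 1.0 - p.sum())
    probs = np.concatenate([p, [leftover / 2, leftover / 2]])
    probs /= probs.sum()                    # guard against float round-off
    idx = rng.choice(d + 2, p=probs)
    if idx < d:
        return idx, float(np.sign(g[idx]))
    return 0, 1.0 if idx == d else -1.0     # leftover mass lands on ±c·e_0

def vq_decode(index, sign, c, d):
    v = np.zeros(d)
    v[index] = sign * c
    return v

# Unbiasedness check: the average of many decoded messages approaches g.
rng = np.random.default_rng(0)
g = rng.standard_normal(8)
c = np.abs(g).sum()                         # smallest valid scale
est = np.mean([vq_decode(*vq_encode(g, c, rng), c, g.size)
               for _ in range(20000)], axis=0)
print(np.max(np.abs(est - g)))              # close to 0
```

Averaging many such one-vertex messages recovers g, which is the sense in which the quantizer is unbiased; the price of the extreme compression is higher per-message variance.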