no code implementations • 2 Feb 2024 • Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell
This paper explores challenges in training Physics-Informed Neural Networks (PINNs), emphasizing the role of the loss landscape in the training process.
1 code implementation • 5 Sep 2023 • Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell
This paper introduces PROMISE ($\textbf{Pr}$econditioned Stochastic $\textbf{O}$ptimization $\textbf{M}$ethods by $\textbf{I}$ncorporating $\textbf{S}$calable Curvature $\textbf{E}$stimates), a suite of sketching-based preconditioned stochastic gradient algorithms for solving large-scale convex optimization problems arising in machine learning.
1 code implementation • 24 Apr 2023 • Mateo Díaz, Ethan N. Epperly, Zachary Frangella, Joel A. Tropp, Robert J. Webber
This paper introduces two randomized preconditioning techniques for robustly solving kernel ridge regression (KRR) problems with a medium to large number of data points ($10^4 \leq N \leq 10^7$).
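The paper's own preconditioners are more refined, but the general recipe — build a randomized Nyström approximation of the kernel matrix and use it to precondition conjugate gradients on the regularized system — can be sketched as follows. All function names, the RBF kernel choice, and the parameter values here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_precond(K, mu, rank, rng):
    """Randomized Nystrom approximation K ~ U diag(lams) U^T,
    returned as an apply-function preconditioning K + mu*I."""
    n = K.shape[0]
    Omega = rng.standard_normal((n, rank))   # Gaussian test matrix
    Y = K @ Omega
    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)
    Y = Y + nu * Omega                       # tiny shift for stability
    C = np.linalg.cholesky(Omega.T @ Y)
    B = np.linalg.solve(C, Y.T).T            # B = Y C^{-T}
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lams = np.maximum(s**2 - nu, 0.0)        # approximate eigenvalues of K
    lr_ = lams[-1]                           # smallest captured eigenvalue
    def apply(v):
        Uv = U.T @ v
        # Rescale the captured subspace; pass the complement through.
        return (lr_ + mu) * (U @ (Uv / (lams + mu))) + (v - U @ Uv)
    return apply

def pcg(A, b, precond, tol=1e-10, maxit=500):
    # Standard preconditioned conjugate gradient for A x = b.
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Usage: form `K = rbf_kernel(X, X)`, build `P = nystrom_precond(K, mu, rank, rng)`, then solve the KRR system with `pcg(K + mu*np.eye(n), y, P)`. Because kernel spectra typically decay rapidly, a modest sketch rank captures the dominant eigenspace and PCG converges in far fewer iterations than plain CG.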
1 code implementation • 16 Nov 2022 • Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell
Numerical experiments on both ridge and logistic regression problems with dense and sparse data show that SketchySGD equipped with its default hyperparameters can achieve comparable or better results than popular stochastic gradient methods, even when those methods have been tuned to yield their best performance.
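The core idea — sketch the Hessian once to build a low-rank preconditioner, then run preconditioned minibatch SGD — can be illustrated for ridge regression as below. This is a toy sketch in the spirit of SketchySGD, not the paper's algorithm: the function name, learning rate, and sketch construction are all illustrative assumptions.

```python
import numpy as np

def sketchy_sgd_ridge(X, y, lam, rank=10, lr=1.0, epochs=20, batch=32, seed=0):
    """Toy preconditioned SGD for ridge regression: a randomized
    Nystrom sketch of the Hessian H = X^T X / n + lam*I supplies a
    low-rank preconditioner for the stochastic gradients."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    # Sketch the Hessian once with a Gaussian test matrix.
    Omega = rng.standard_normal((d, rank))
    HOmega = X.T @ (X @ Omega) / n + lam * Omega
    C = np.linalg.cholesky(Omega.T @ HOmega)
    B = np.linalg.solve(C, HOmega.T).T        # B = H*Omega @ C^{-T}
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lams = s**2                               # approx. Hessian eigenvalues
    rho = lams[-1]                            # regularizing shift
    def precond(g):
        # Apply (approx. Hessian + rho*I)^{-1}: exact on the captured
        # subspace, scaled identity on its orthogonal complement.
        Ug = U.T @ g
        return U @ (Ug / (lams + rho)) + (g - U @ Ug) / rho
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b_idx = idx[start:start + batch]
            Xb, yb = X[b_idx], y[b_idx]
            # Minibatch gradient of the ridge objective.
            g = Xb.T @ (Xb @ w - yb) / len(b_idx) + lam * w
            w -= lr * precond(g)
    return w
```

Because the preconditioner equalizes curvature across directions, a single default step size works across problems with very different conditioning, which is the practical point the experiments make.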
no code implementations • NeurIPS 2021 • William T. Stephenson, Zachary Frangella, Madeleine Udell, Tamara Broderick
In the present paper, we show that, in the case of ridge regression, the cross-validation (CV) loss may fail to be quasiconvex and thus may have multiple local optima.