Search Results for author: Zachary Frangella

Found 5 papers, 3 papers with code

Challenges in Training PINNs: A Loss Landscape Perspective

no code implementations • 2 Feb 2024 • Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell

This paper explores challenges in training Physics-Informed Neural Networks (PINNs), emphasizing the role of the loss landscape in the training process.

PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates

1 code implementation • 5 Sep 2023 • Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell

This paper introduces PROMISE ($\textbf{Pr}$econditioned Stochastic $\textbf{O}$ptimization $\textbf{M}$ethods by $\textbf{I}$ncorporating $\textbf{S}$calable Curvature $\textbf{E}$stimates), a suite of sketching-based preconditioned stochastic gradient algorithms for solving large-scale convex optimization problems arising in machine learning.

Stochastic Optimization

Robust, randomized preconditioning for kernel ridge regression

1 code implementation • 24 Apr 2023 • Mateo Díaz, Ethan N. Epperly, Zachary Frangella, Joel A. Tropp, Robert J. Webber

This paper introduces two randomized preconditioning techniques for robustly solving kernel ridge regression (KRR) problems with a medium to large number of data points ($10^4 \leq N \leq 10^7$).

regression
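The general recipe behind this style of solver can be sketched in a few lines: build a randomized Nyström approximation of the kernel matrix and use it to precondition conjugate gradient on the regularized system. The function and parameter names below (`nystrom_pcg_krr`, the RBF kernel choice, `rank`, the defaults) are illustrative assumptions, not the paper's released implementation:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_pcg_krr(X, y, lam=1e-3, rank=30, tol=1e-8, maxiter=500, seed=0):
    """Solve the KRR system (K + n*lam*I) alpha = y by preconditioned CG,
    with a randomized Nystrom approximation of K as the preconditioner."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X, X)
    reg = n * lam
    # --- randomized Nystrom sketch: K ~ U diag(eigs) U^T ---
    Omega, _ = np.linalg.qr(rng.standard_normal((n, rank)))
    Y = K @ Omega
    nu = 1e-10 * np.linalg.norm(Y)      # small shift for numerical stability
    Y = Y + nu * Omega
    C = np.linalg.cholesky(Omega.T @ Y)
    B = np.linalg.solve(C, Y.T).T       # B = Y C^{-T}, so B B^T ~ K + nu*I
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    eigs = np.maximum(s ** 2 - nu, 0.0)
    lmin = eigs[-1]

    def Pinv(v):
        # inverse of the Nystrom preconditioner: damp the top eigenspace,
        # act as (a scaled) identity on its orthogonal complement
        Utv = U.T @ v
        return U @ ((lmin + reg) / (eigs + reg) * Utv) + (v - U @ Utv)

    # --- preconditioned conjugate gradient on (K + reg*I) alpha = y ---
    alpha, r = np.zeros(n), y.copy()
    z = Pinv(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxiter):
        Ap = K @ p + reg * p
        step = rz / (p @ Ap)
        alpha += step * p
        r -= step * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
        z = Pinv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return alpha
```

The preconditioner flattens the top of the kernel spectrum, which is what makes CG iteration counts robust to the fast eigenvalue decay typical of kernel matrices.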

SketchySGD: Reliable Stochastic Optimization via Randomized Curvature Estimates

1 code implementation • 16 Nov 2022 • Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell

Numerical experiments on both ridge and logistic regression problems with dense and sparse data show that SketchySGD, equipped with its default hyperparameters, can achieve comparable or better results than popular stochastic gradient methods, even when those methods have been tuned to yield their best performance.

regression • Stochastic Optimization
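A rough illustration of the underlying idea — SGD preconditioned by a low-rank randomized curvature estimate — on the ridge objective. This is a simplified sketch under stated assumptions (a single up-front Hessian sketch from a row subsample, illustrative helper names and defaults), not the paper's released code or its actual default hyperparameters:

```python
import numpy as np

def nystrom_psd(matvec, d, rank, rng):
    """Randomized Nystrom approximation U diag(eigs) U^T of a PSD matrix
    accessed only through matrix-vector products."""
    Omega, _ = np.linalg.qr(rng.standard_normal((d, rank)))
    Y = matvec(Omega)
    nu = 1e-8 * np.linalg.norm(Y)       # stability shift
    Y = Y + nu * Omega
    C = np.linalg.cholesky(Omega.T @ Y)
    B = np.linalg.solve(C, Y.T).T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U, np.maximum(s ** 2 - nu, 0.0)

def preconditioned_sgd_ridge(X, y, mu=1e-2, rank=10, lr=1.0,
                             epochs=30, batch=32, seed=0):
    """Minimize ||Xw - y||^2/(2n) + (mu/2)||w||^2 with SGD whose steps are
    preconditioned by a low-rank sketch of the (subsampled) Hessian."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # curvature estimate from a row subsample; the actual algorithm
    # refreshes and regularizes this estimate more carefully
    idx = rng.choice(n, size=min(n, 256), replace=False)
    S = X[idx]
    U, eigs = nystrom_psd(lambda V: S.T @ (S @ V) / len(idx) + mu * V,
                          d, rank, rng)
    rho = eigs[-1] + mu                  # damping for the preconditioner

    def Pinv(v):
        Utv = U.T @ v
        return U @ (Utv / (eigs + rho)) + (v - U @ Utv) / rho

    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for s0 in range(0, n, batch):
            b = perm[s0:s0 + batch]
            g = X[b].T @ (X[b] @ w - y[b]) / len(b) + mu * w
            w -= lr * Pinv(g)            # preconditioned stochastic step
    return w
```

Because the preconditioner absorbs the problem's curvature, a fixed step size close to 1 works across problems — which is the intuition behind running with default hyperparameters.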

Can we globally optimize cross-validation loss? Quasiconvexity in ridge regression

no code implementations • NeurIPS 2021 • William T. Stephenson, Zachary Frangella, Madeleine Udell, Tamara Broderick

In the present paper, we show that, in the case of ridge regression, the CV loss may fail to be quasiconvex and thus may have multiple local optima.

regression
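As context for studying the CV loss as a function of the regularization parameter: for ridge regression, the leave-one-out CV error can be evaluated exactly at each $\lambda$ via the standard hat-matrix shortcut, with no refitting, which makes the whole loss curve cheap to inspect. This is a generic illustration (the function name `loocv_ridge` is ours), not the paper's construction of a non-quasiconvex example:

```python
import numpy as np

def loocv_ridge(X, y, lams):
    """Exact leave-one-out CV mean squared error of ridge regression at
    each lambda, via the hat-matrix shortcut (no n refits needed)."""
    # one economy SVD gives the hat matrix H = U diag(s^2/(s^2+lam)) U^T
    # for every lambda
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uy = U.T @ y
    out = []
    for lam in lams:
        w = s ** 2 / (s ** 2 + lam)      # eigenvalues of the hat matrix
        yhat = U @ (w * Uy)              # in-sample predictions H y
        h = (U ** 2) @ w                 # leverage scores diag(H)
        e = (y - yhat) / (1.0 - h)       # exact leave-one-out residuals
        out.append(np.mean(e ** 2))
    return np.array(out)
```

Sweeping `lams` over a grid and plotting `loocv_ridge(X, y, lams)` is one way to see the shape of the CV loss whose quasiconvexity the paper analyzes.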
