no code implementations • 6 Mar 2023 • Tolga Ergen, Halil Ibrahim Gulluk, Jonathan Lacotte, Mert Pilanci
We first show that regularized deep threshold network training problems can be equivalently formulated as a standard convex optimization problem, which parallels the LASSO method, provided that the last hidden layer width exceeds a certain threshold.
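One way to picture such a LASSO-parallel reformulation (a hedged sketch with assumed notation, not taken from the listing): a threshold unit's output 1{Xw ≥ 0} depends only on the sign pattern it induces on the data, so the network output is a linear combination of finitely many binary pattern vectors, and the regularized training problem collapses into a convex sparse-regression form.

```latex
% Hedged sketch; notation is assumed. D = [d_1, ..., d_p] stacks the {0,1}^n
% activation patterns realizable by threshold units on the data matrix X,
% y denotes the targets, and beta > 0 is the regularization strength.
% The resulting convex program is LASSO-like:
\min_{z \in \mathbb{R}^{p}} \; \frac{1}{2}\, \| D z - y \|_2^2 \;+\; \beta\, \| z \|_1
```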
1 code implementation • 13 Feb 2023 • Ryumei Nakada, Halil Ibrahim Gulluk, Zhun Deng, Wenlong Ji, James Zou, Linjun Zhang
We show that the proposed algorithm can detect ground-truth pairs and improve performance by fully exploiting unpaired datasets.
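A minimal sketch of how unpaired cross-modal data can be matched into pseudo-pairs (an illustration, not the paper's algorithm; encoder and variable names are assumptions): score every cross-modal pair by cosine similarity of its embeddings and recover a one-to-one matching with the Hungarian algorithm.

```python
# Given embeddings of two *unpaired* batches, find a candidate one-to-one
# matching by maximizing total cosine similarity. Matched pairs could then be
# treated as pseudo-pairs for contrastive training.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_unpaired(z_a: np.ndarray, z_b: np.ndarray):
    """z_a: (n, d) embeddings of modality A; z_b: (m, d) embeddings of modality B."""
    # Normalize so inner products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T                          # (n, m) cross-modal similarity matrix
    rows, cols = linear_sum_assignment(-sim)   # Hungarian matching, maximizing similarity
    return list(zip(rows.tolist(), cols.tolist())), sim[rows, cols]

# Usage (embed_images / embed_text are hypothetical encoders):
# pairs, scores = match_unpaired(embed_images(X_a), embed_text(X_b))
# Low-score matches could be filtered out before using them as pseudo-pairs.
```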
1 code implementation • NeurIPS 2021 • Yue Sun, Adhyyan Narang, Halil Ibrahim Gulluk, Samet Oymak, Maryam Fazel
Specifically, for (1), we first show that learning the optimal representation coincides with the problem of designing a task-aware regularization to promote inductive bias.
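A hedged illustration of the "representation as task-aware regularization" idea (assumptions of mine, not the paper's construction): in an overparameterized linear problem, re-weighting features and taking the minimum-norm interpolant is equivalent to imposing a weighted norm penalty; if the weighting favors the directions where the true parameter lives, the induced bias improves estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 200, 10                                   # samples, ambient dim, informative dims
theta = np.zeros(d); theta[:k] = rng.normal(size=k)     # true parameter in a small subspace
X = rng.normal(size=(n, d)); y = X @ theta

def min_norm_interpolant(X, y, weights):
    # Minimize sum_j (w_j / weights_j)^2 subject to Xw = y, i.e.
    # w = W X^T (X W X^T)^{-1} y with W = diag(weights^2).
    W = np.diag(weights ** 2)
    return W @ X.T @ np.linalg.solve(X @ W @ X.T, y)

plain = min_norm_interpolant(X, y, np.ones(d))                                # isotropic bias
weighted = min_norm_interpolant(X, y, np.r_[3 * np.ones(k), 0.3 * np.ones(d - k)])  # task-aware bias
print("plain    ||w_hat - theta||:", np.linalg.norm(plain - theta))
print("weighted ||w_hat - theta||:", np.linalg.norm(weighted - theta))
```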
no code implementations • 14 Feb 2021 • Halil Ibrahim Gulluk, Yue Sun, Samet Oymak, Maryam Fazel
We prove that subspace-based representations can be learned in a sample-efficient manner and provably benefit future tasks in terms of sample complexity.
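One way to make the subspace claim concrete (an illustrative sketch under simple linear assumptions, not the paper's exact procedure or guarantees; all names below are mine): tasks share an r-dimensional subspace, the subspace is estimated from pooled source-task data via a method-of-moments step, and a new task is then fit with only a few samples inside the recovered subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, tasks, n_per_task, n_new = 100, 5, 40, 50, 15
U = np.linalg.qr(rng.normal(size=(d, r)))[0]            # shared r-dimensional subspace

# Pool data from many source tasks and form the moment matrix sum_i y_i^2 x_i x_i^T.
M = np.zeros((d, d))
for _ in range(tasks):
    beta = U @ rng.normal(size=r)                       # task parameter inside the subspace
    X = rng.normal(size=(n_per_task, d))
    y = X @ beta + 0.1 * rng.normal(size=n_per_task)
    M += (X * (y ** 2)[:, None]).T @ X / n_per_task
eigvals, eigvecs = np.linalg.eigh(M / tasks)
U_hat = eigvecs[:, -r:]                                 # top-r eigenvectors estimate the subspace

# New task: few samples, least squares restricted to the learned subspace.
beta_new = U @ rng.normal(size=r)
X_new = rng.normal(size=(n_new, d))
y_new = X_new @ beta_new + 0.1 * rng.normal(size=n_new)
w = np.linalg.lstsq(X_new @ U_hat, y_new, rcond=None)[0]
print("subspace fit error :", np.linalg.norm(U_hat @ w - beta_new))
print("naive min-norm error:", np.linalg.norm(np.linalg.lstsq(X_new, y_new, rcond=None)[0] - beta_new))
```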