no code implementations • 6 Jun 2024 • Can Yaras, Peng Wang, Laura Balzano, Qing Qu
In practice, we demonstrate the effectiveness of this approach for deep low-rank matrix completion as well as fine-tuning language models.
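To make the matrix-completion setting concrete, here is a minimal sketch of deep low-rank matrix completion: an overparameterized deep matrix factorization fit by gradient-based training to only the observed entries of a low-rank matrix. The dimensions, depth, optimizer (Adam rather than plain gradient descent), and all hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import torch

torch.manual_seed(0)
n, r, depth = 50, 3, 3                       # matrix size, true rank, factor depth (assumed)
X = torch.randn(n, r) @ torch.randn(r, n)    # ground-truth low-rank matrix
mask = torch.rand(n, n) < 0.4                # ~40% of entries observed

# Overparameterized deep factorization X_hat = W_L @ ... @ W_1, small init.
Ws = [torch.nn.Parameter(1e-2 * torch.randn(n, n)) for _ in range(depth)]
opt = torch.optim.Adam(Ws, lr=1e-2)          # Adam chosen for robustness (an assumption)

def product(factors):
    P = factors[0]
    for W in factors[1:]:
        P = W @ P
    return P

for step in range(3000):
    loss = ((product(Ws) - X)[mask] ** 2).mean()   # loss sees observed entries only
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((product(Ws) - X)[~mask] ** 2).mean()   # generalization to unobserved entries
print(f"observed MSE: {loss.item():.2e}, unobserved MSE: {err.item():.2e}")
```

The unobserved entries never enter the loss; they are recovered approximately because gradient-based training of the deep factorization is implicitly biased toward low-rank solutions. With Adam that bias is weaker than with plain gradient descent, so the sketch should be read qualitatively.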
1 code implementation • 6 Nov 2023 • Peng Wang, Xiao Li, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu
To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.
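A toy way to see such feature evolution, as a hedged sketch: the synthetic data, MSE objective, and the within-/between-class scatter ratio below are illustrative choices, not necessarily the paper's exact setting or metric.

```python
import torch

torch.manual_seed(0)
d, K, per_class, depth = 20, 4, 50, 6        # all toy choices
class_means = 3.0 * torch.randn(K, d)
X = torch.cat([m + torch.randn(per_class, d) for m in class_means])
y = torch.arange(K).repeat_interleave(per_class)
Y = torch.nn.functional.one_hot(y).float()

# Deep *linear* network: a stack of bias-free linear layers plus a linear head.
layers = [torch.nn.Linear(d, d, bias=False) for _ in range(depth)]
model = torch.nn.Sequential(*layers, torch.nn.Linear(d, K, bias=False))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    loss = ((model(X) - Y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def within_between(H):
    """Within-class variability relative to the spread of class means."""
    mu = torch.stack([H[y == k].mean(0) for k in range(K)])
    within = torch.stack([H[y == k].var(0).sum() for k in range(K)]).mean()
    between = mu.var(0).sum()
    return (within / between).item()

with torch.no_grad():
    H = X
    for l, layer in enumerate(layers, 1):
        H = layer(H)
        print(f"layer {l}: within/between scatter = {within_between(H):.3f}")
```

If training succeeds, the ratio should shrink from shallow to deep layers, i.e., features of the same class are progressively compressed while different classes stay spread apart.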
1 code implementation • 1 Jun 2023 • Can Yaras, Peng Wang, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu
Second, it allows us to better understand deep representation learning by elucidating the progressive linear separation and concentration of representations from shallow to deep layers.
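The separation effect can be eyeballed with the same kind of toy deep linear network as in the previous sketch (again an illustrative assumption, not the paper's experiment): track how the class-mean representations spread apart, layer by layer.

```python
import torch

torch.manual_seed(0)
d, K, per_class, depth = 20, 4, 50, 6
class_means = 3.0 * torch.randn(K, d)
X = torch.cat([m + torch.randn(per_class, d) for m in class_means])
y = torch.arange(K).repeat_interleave(per_class)
Y = torch.nn.functional.one_hot(y).float()

layers = [torch.nn.Linear(d, d, bias=False) for _ in range(depth)]
model = torch.nn.Sequential(*layers, torch.nn.Linear(d, K, bias=False))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    loss = ((model(X) - Y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Mean pairwise cosine similarity of centered class means at each depth;
# values drifting toward 0 or below indicate progressive separation.
with torch.no_grad():
    H = X
    for l, layer in enumerate(layers, 1):
        H = layer(H)
        mu = torch.stack([H[y == k].mean(0) for k in range(K)])
        mu = torch.nn.functional.normalize(mu - mu.mean(0), dim=1)
        off_diag = (mu @ mu.T)[~torch.eye(K, dtype=torch.bool)]
        print(f"layer {l}: mean class-mean cosine = {off_diag.mean().item():.3f}")
```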
1 code implementation • 19 Sep 2022 • Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu
When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon.
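Concretely, neural collapse is usually diagnosed on last-layer features after training past zero error on balanced classes: within-class variability vanishes (NC1) and the centered class means approach a simplex equiangular tight frame with pairwise cosines near -1/(K-1) (NC2). Below is a hedged sketch of both diagnostics, with toy data, architecture, and training budget all chosen for illustration.

```python
import torch

torch.manual_seed(0)
d, K, per_class = 10, 4, 100
class_means = 4.0 * torch.randn(K, d)
X = torch.cat([m + torch.randn(per_class, d) for m in class_means])
y = torch.arange(K).repeat_interleave(per_class)

# Small overparameterized classifier: feature extractor + linear head.
feat = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
)
head = torch.nn.Linear(64, K)
opt = torch.optim.Adam(list(feat.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(3000):
    loss = torch.nn.functional.cross_entropy(head(feat(X)), y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    H = feat(X)                                        # last-layer features
    mu = torch.stack([H[y == k].mean(0) for k in range(K)])
    mu_c = mu - mu.mean(0)                             # centered class means
    within = torch.stack([H[y == k].var(0).sum() for k in range(K)]).mean()
    between = mu_c.pow(2).sum(1).mean()
    print(f"NC1 within/between variability: {(within / between).item():.3f}")
    C = torch.nn.functional.normalize(mu_c, dim=1)
    off_diag = (C @ C.T)[~torch.eye(K, dtype=torch.bool)]
    print(f"NC2 mean pairwise cosine: {off_diag.mean().item():.3f} "
          f"(simplex ETF target: {-1.0 / (K - 1):.3f})")
```

On real networks these metrics are computed on the training set in the terminal phase of training, well after the classification error has reached zero.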
no code implementations • 28 Apr 2021 • Can Yaras, Kaleb Kassaw, Bohao Huang, Kyle Bradbury, Jordan M. Malof
Modern deep neural networks (DNNs) are highly accurate on many recognition tasks for overhead (e.g., satellite) imagery.