Search Results for author: David L. Donoho

Found 7 papers, 6 papers with code

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data

no code implementations • 1 Apr 2024 • Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, Henry Sleight, John Hughes, Tomasz Korbak, Rajashree Agrawal, Dhruv Pai, Andrey Gromov, Daniel A. Roberts, Diyi Yang, David L. Donoho, Sanmi Koyejo

The proliferation of generative models, combined with pretraining on web-scale data, raises a timely question: what happens when these models are trained on their own generated outputs?

Image Generation
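As a toy illustration of the accumulate-versus-replace distinction in the title (not the paper's actual experiments), the sketch below repeatedly fits a Gaussian to a data pool and resamples from the fit; the `run` helper, sample sizes, and number of generations are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=1_000)   # "real" data from N(0, 1)

def run(generations=20, accumulate=False):
    """Fit a Gaussian, sample synthetic data from the fit, repeat."""
    pool = real.copy()
    variances = []
    for _ in range(generations):
        mu, sigma = pool.mean(), pool.std()
        variances.append(sigma ** 2)
        synthetic = rng.normal(mu, sigma, size=1_000)
        # replace: the next generation sees only the newest synthetic data;
        # accumulate: synthetic data is appended to the growing pool.
        pool = np.concatenate([pool, synthetic]) if accumulate else synthetic
    return variances

print("replace   :", np.round(run(accumulate=False), 3)[-5:])
print("accumulate:", np.round(run(accumulate=True), 3)[-5:])
```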

Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path

1 code implementation • ICLR 2022 • X. Y. Han, Vardan Papyan, David L. Donoho

The analytically-tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss, inspiring us to leverage MSE loss towards the theoretical investigation of NC.
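For context, training under MSE loss here means regressing the network outputs onto one-hot class targets with squared error instead of cross-entropy; a minimal NumPy sketch of the two losses (the function names and random logits are placeholders) is:

```python
import numpy as np

def mse_loss(logits, labels, num_classes):
    # Squared error against one-hot targets, averaged over the batch.
    onehot = np.eye(num_classes)[labels]
    return np.mean((logits - onehot) ** 2)

def ce_loss(logits, labels):
    # Softmax cross-entropy, for comparison with the MSE variant.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(logp[np.arange(len(labels)), labels])

logits = np.random.default_rng(0).normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])
print(mse_loss(logits, labels, 3), ce_loss(logits, labels))
```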

Prevalence of Neural Collapse during the terminal phase of deep learning training

1 code implementation • 18 Aug 2020 • Vardan Papyan, X. Y. Han, David L. Donoho

Modern practice for training classification deepnets involves a Terminal Phase of Training (TPT), which begins at the epoch where training error first vanishes. During TPT, the training error stays effectively zero while the training loss is pushed towards zero.

Inductive Bias
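One common way to quantify the within-class variability collapse described in this line of work is the statistic Tr(Σ_W Σ_B^†)/C computed on penultimate-layer features; the sketch below uses synthetic, nearly collapsed features as a stand-in for real network activations:

```python
import numpy as np

def within_class_variability(features, labels):
    """Tr(Sigma_W @ pinv(Sigma_B)) / C -- small values indicate that
    last-layer features have collapsed onto their class means."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    d = features.shape[1]
    sigma_w = np.zeros((d, d))
    sigma_b = np.zeros((d, d))
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        sigma_w += (fc - mu_c).T @ (fc - mu_c) / len(features)
        sigma_b += np.outer(mu_c - global_mean, mu_c - global_mean) / len(classes)
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / len(classes)

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1_000)
means = rng.normal(size=(10, 64))
features = means[labels] + 0.01 * rng.normal(size=(1_000, 64))  # nearly collapsed
print(within_class_variability(features, labels))
```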

Optimal Shrinkage of Singular Values

1 code implementation • 29 May 2014 • Matan Gavish, David L. Donoho

For a variety of loss functions, including Mean Square Error (MSE, i.e. the squared Frobenius norm), the nuclear norm loss, and the operator norm loss, we show that in this framework there is a well-defined asymptotic loss that we evaluate precisely in each case.

Statistics Theory
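For reference, the optimal shrinker under Frobenius-norm loss has a closed form; the sketch below assumes the observed singular values have already been rescaled so that the noise level is 1, and that the aspect ratio satisfies beta = m/n <= 1:

```python
import numpy as np

def frobenius_shrinker(y, beta):
    """Optimal singular-value shrinker for Frobenius-norm loss.
    `y` holds observed singular values in noise-normalized units and
    `beta = m/n <= 1` is the aspect ratio; values at or below the
    bulk edge 1 + sqrt(beta) are shrunk to zero."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    above = y > 1 + np.sqrt(beta)
    out[above] = np.sqrt((y[above] ** 2 - beta - 1) ** 2 - 4 * beta) / y[above]
    return out

print(frobenius_shrinker([1.5, 2.0, 3.0], beta=0.5))
```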

The Optimal Hard Threshold for Singular Values is 4/sqrt(3)

3 code implementations • 24 May 2013 • Matan Gavish, David L. Donoho

In our asymptotic framework, this thresholding rule adapts to unknown rank and unknown noise level in an optimal manner: it is always better than hard thresholding at any other value, no matter which matrix we are trying to recover, and is always better than the ideal Truncated SVD (TSVD), which truncates at the true rank of the low-rank matrix being recovered.

Methodology
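A minimal sketch of the square-matrix, known-noise version of this rule, which hard-thresholds singular values at (4/sqrt(3)) * sqrt(n) * sigma (the rectangular and unknown-noise cases use a different constant); the matrix sizes below are arbitrary:

```python
import numpy as np

def hard_threshold_denoise(Y, sigma):
    """Rank-reduce a noisy square matrix Y = X + sigma * Z (Z iid N(0,1)) by
    hard-thresholding singular values at (4 / sqrt(3)) * sqrt(n) * sigma."""
    n = Y.shape[0]
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    tau = (4 / np.sqrt(3)) * np.sqrt(n) * sigma
    s_kept = np.where(s > tau, s, 0.0)
    return U @ np.diag(s_kept) @ Vt

rng = np.random.default_rng(0)
n, rank, sigma = 200, 5, 1.0
X = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, n))  # low-rank signal
Y = X + sigma * rng.normal(size=(n, n))
X_hat = hard_threshold_denoise(Y, sigma)
print(np.linalg.matrix_rank(X_hat), np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```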

The Noise-Sensitivity Phase Transition in Compressed Sensing

1 code implementation • 8 Apr 2010 • David L. Donoho, Arian Maleki, Andrea Montanari

We develop formal expressions for the MSE of the $\ell_1$-penalized least-squares reconstruction $\hat{x}_\lambda$, and evaluate its worst-case formal noise sensitivity over all types of $k$-sparse signals.

Statistics Theory Information Theory
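Here $\hat{x}_\lambda$ solves the $\ell_1$-penalized least-squares problem $\min_x \tfrac{1}{2}\|y - Ax\|^2 + \lambda \|x\|_1$. As a sketch, it can be computed with a basic iterative soft-thresholding (ISTA) loop, although the paper's analysis proceeds via approximate message passing; the dimensions and the value of lambda below are arbitrary:

```python
import numpy as np

def ista(y, A, lam, iters=500):
    """Solve min_x 0.5 * ||y - A x||^2 + lam * ||x||_1 by iterative
    soft thresholding (a basic solver for the l1-penalized problem)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, N, k, sigma = 100, 250, 10, 0.1
A = rng.normal(size=(n, N)) / np.sqrt(n)   # iid Gaussian measurement matrix
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)   # k-sparse signal
y = A @ x0 + sigma * rng.normal(size=n)
x_hat = ista(y, A, lam=0.05)
print("per-coordinate MSE:", np.mean((x_hat - x0) ** 2))
```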

Does median filtering truly preserve edges better than linear filtering?

1 code implementation • 14 Dec 2006 • Ery Arias-Castro, David L. Donoho

We show that median filtering and linear filtering have similar asymptotic worst-case mean-squared error (MSE) when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of constant per-pixel noise level in a digital signal.

Statistics Theory 62G08, 62G20 (Primary), 60G35 (Secondary)
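A quick empirical probe of this claim on a one-dimensional step edge, using standard SciPy filters (the window size and noise level are arbitrary choices, and this toy comparison is not the paper's asymptotic analysis):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

rng = np.random.default_rng(0)
n, sigma = 1_000, 1.0                            # SNR of order 1: noise ~ step size
signal = (np.arange(n) >= n // 2).astype(float)  # a single step edge
noisy = signal + sigma * rng.normal(size=n)

med = median_filter(noisy, size=21, mode="nearest")     # median filter
lin = uniform_filter1d(noisy, size=21, mode="nearest")  # moving average (linear)

print("median MSE:", np.mean((med - signal) ** 2))
print("linear MSE:", np.mean((lin - signal) ** 2))
```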
