no code implementations • 20 Oct 2023 • Jonathan Patsenker, Henry Li, Yuval Kluger
The exponential moving average (EMA) is a commonly used statistic for providing stable estimates of stochastic quantities in deep learning optimization.
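The EMA update the snippet refers to is standard; a minimal sketch (the decay value, toy parameters, and the stand-in "optimizer step" below are all illustrative, not from the paper):

```python
# Minimal sketch of an exponential moving average (EMA) over model
# parameters, a common stabilization technique in deep learning.
# The decay of 0.9 and the toy parameter values are illustrative.

def ema_update(ema_params, params, decay=0.9):
    """Blend the current parameters into the running EMA estimate."""
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, params)]

params = [1.0, 2.0]
ema = list(params)  # initialize the EMA at the first observation
for step in range(10):
    params = [p + 0.1 for p in params]  # stand-in for an optimizer step
    ema = ema_update(ema, params)
```

Because the EMA mixes in only a small fraction of each new value, it lags behind the raw parameters and smooths out step-to-step noise.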
no code implementations • 1 Jun 2023 • Amit Rozner, Barak Battash, Henry Li, Lior Wolf, Ofir Lindenbaum
Then, we design a variance stabilized density estimation problem for maximizing the likelihood of the observed samples while minimizing the variance of the density around normal samples.
no code implementations • 19 Oct 2022 • Henry Li, Yuval Kluger
We introduce a simple modification to the standard maximum likelihood estimation (MLE) framework.
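For context on the standard MLE framework the snippet mentions (the paper's modification itself is not reproduced here), a minimal example: fitting a Gaussian by maximizing the log-likelihood, which has the closed-form solution of sample mean and (biased) sample variance. The data values are illustrative.

```python
import math

# Standard MLE for a Gaussian: the log-likelihood is maximized by the
# sample mean and the 1/n (not 1/(n-1)) sample variance.
data = [2.1, 1.9, 2.3, 2.0, 1.7]  # illustrative sample
n = len(data)
mu_hat = sum(data) / n
var_hat = sum((x - mu_hat) ** 2 for x in data) / n

def log_likelihood(mu, var):
    """Gaussian log-likelihood of the sample under N(mu, var)."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in data)
```

Any perturbation of `mu_hat` or `var_hat` can only lower the log-likelihood, which is what "maximum likelihood" means operationally.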
1 code implementation • 22 Jun 2022 • Henry Li, Yuval Kluger
Any explicit functional representation $f$ of a density is hampered by two main obstacles when we wish to use it as a generative model: designing $f$ so that sampling is fast, and estimating $Z = \int f$ so that $Z^{-1}f$ integrates to 1.
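The difficulty of estimating $Z = \int f$ can be made concrete with a one-dimensional example, where brute-force quadrature still works; the Gaussian-shaped $f$, integration range, and grid size below are illustrative (in high dimensions this approach is exactly what becomes intractable):

```python
import math

def f(x):
    """Unnormalized density; here Z = sqrt(2*pi) is known in closed form."""
    return math.exp(-x * x / 2.0)

def estimate_Z(lo=-8.0, hi=8.0, n=10_000):
    """Trapezoidal-rule estimate of Z = integral of f over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    total += sum(f(lo + i * h) for i in range(1, n))
    return total * h

Z = estimate_Z()
```

With $Z$ in hand, $Z^{-1} f$ integrates to 1; the cost of this grid-based estimate grows exponentially with dimension, which is why normalizing explicit density models is hard.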
1 code implementation • 29 Oct 2021 • Soham Jana, Henry Li, Yutaro Yamada, Ofir Lindenbaum
Consider the problem of simultaneous estimation and support recovery of the coefficient vector in a linear data model with additive Gaussian noise.
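In the orthogonal-design special case ($X = I$), a classical baseline for this simultaneous estimation/support-recovery problem is soft thresholding of the noisy observations, which shrinks estimates and zeroes out small coordinates in one step. A hedged sketch (the true coefficients, noise level, and threshold are illustrative; this is the textbook baseline, not the paper's estimator):

```python
import random

random.seed(0)
beta = [3.0, 0.0, -2.5, 0.0, 0.0, 4.0]          # sparse ground-truth coefficients
y = [b + random.gauss(0.0, 0.1) for b in beta]  # additive Gaussian noise, X = I

def soft_threshold(v, t):
    """Shrink toward zero; coordinates below t in magnitude become exactly 0."""
    return [(abs(x) - t) * (1.0 if x > 0 else -1.0) if abs(x) > t else 0.0
            for x in v]

beta_hat = soft_threshold(y, t=0.5)
support = [i for i, b in enumerate(beta_hat) if b != 0.0]
```

The exact zeros in `beta_hat` are what make support recovery explicit, at the cost of a bias (shrinkage by `t`) on the surviving coordinates.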
no code implementations • 1 Jan 2021 • Hannah Lawrence, David Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié
To the best of our knowledge, this is the first work to consider a dataset-free machine learning approach for holographic phase retrieval.
1 code implementation • 14 Dec 2020 • Hannah Lawrence, David A. Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié
Phase retrieval is the inverse problem of recovering a signal from magnitude-only Fourier measurements, and underlies numerous imaging modalities, such as Coherent Diffraction Imaging (CDI).
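The alternating-projection idea behind classical phase retrieval can be sketched in one dimension with the error-reduction (Gerchberg–Saxton-style) algorithm: alternate between imposing the measured Fourier magnitudes and a nonnegativity constraint in the object domain. The signal, initialization, and iteration count are illustrative; real CDI works with 2-D oversampled data.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT matching dft() above."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

signal = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # true signal (unknown to us)
mags = [abs(v) for v in dft(signal)]               # magnitude-only measurements

def mag_err(v):
    """Squared error between v's Fourier magnitudes and the measurements."""
    return sum((abs(V) - m) ** 2 for V, m in zip(dft(v), mags))

x = [1.0] * len(signal)  # arbitrary initialization
err_init = mag_err(x)
for _ in range(300):
    X = dft(x)
    # Fourier-domain step: keep current phases, impose measured magnitudes.
    X = [m * (v / abs(v)) if abs(v) > 1e-12 else complex(m) for m, v in zip(mags, X)]
    # Object-domain step: enforce a real, nonnegative signal.
    x = [max(v.real, 0.0) for v in idft(X)]
err_final = mag_err(x)
```

Error reduction is known to be monotonically nonincreasing in this magnitude error, though it can stagnate; modern approaches (including learned priors) aim to escape exactly that stagnation.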
1 code implementation • ECCV 2020 • Henry Li, Ofir Lindenbaum, Xiuyuan Cheng, Alexander Cloninger
Variational autoencoders (VAEs) and generative adversarial networks (GANs) enjoy an intuitive connection to manifold learning: during training, the decoder/generator is optimized to approximate a homeomorphism between the data distribution and the sampling space.
3 code implementations • ICLR 2018 • Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, Yuval Kluger
Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points.
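The underlying idea of extending a spectral embedding to unseen points can be illustrated with a Nyström-style weighted average of training embeddings (SpectralNet itself learns this map with a neural network; the Gaussian kernel, bandwidth, and toy two-cluster data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated toy clusters in the plane.
X = np.concatenate([rng.normal(0.0, 0.1, (10, 2)),
                    rng.normal(3.0, 0.1, (10, 2))])

def affinity(a, b, sigma=1.0):
    """Gaussian kernel affinity between two points."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

W = np.array([[affinity(a, b) for b in X] for a in X])
L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
embedding = vecs[:, 1]                      # Fiedler vector separates the clusters

def embed_new(x):
    """Nystrom-style extension of the spectral embedding to an unseen point."""
    w = np.array([affinity(x, b) for b in X])
    return float(w @ embedding / w.sum())
```

A new point near one cluster receives nearly all its kernel weight from that cluster, so its extended embedding takes that cluster's sign in the Fiedler vector; this is the out-of-sample behavior that a learned parametric map provides without storing the training set.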