no code implementations • 12 Jan 2024 • Gantavya Bhatt, Yifang Chen, Arnav M. Das, Jifan Zhang, Sang T. Truong, Stephen Mussmann, Yinglun Zhu, Jeffrey Bilmes, Simon S. Du, Kevin Jamieson, Jordan T. Ash, Robert D. Nowak
To mitigate the annotation cost of SFT and circumvent the computational bottlenecks of active learning, we propose using experimental design.
no code implementations • 28 Jul 2023 • Ronald DeVore, Robert D. Nowak, Rahul Parhi, Jonathan W. Siegel
A new and more proper definition of model classes on domains is given by introducing the concept of weighted variation spaces.
1 code implementation • 25 May 2023 • Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak
This representer theorem establishes that shallow vector-valued neural networks are the solutions to data-fitting problems over these infinite-dimensional spaces, where the network widths are bounded by the square of the number of training data.
no code implementations • 15 Feb 2023 • Danica Fliss, Willem Marais, Robert D. Nowak
These are often referred to as "plug-and-play" (PnP) methods because, in principle, an off-the-shelf denoiser can be used for a variety of different inverse problems.
no code implementations • 23 Jan 2023 • Rahul Parhi, Robert D. Nowak
Deep learning has been wildly successful in practice and most state-of-the-art machine learning methods are based on neural networks.
no code implementations • 6 Oct 2022 • Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, Robert D. Nowak
Weight decay is one of the most widely used forms of regularization in deep learning, and has been shown to improve generalization and robustness.
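As a reminder of the mechanism the paper studies, here is a minimal sketch (not from the paper) of how weight decay enters a plain SGD update; the function name and hyperparameter values are illustrative only:

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, weight_decay=1e-2):
    # Standard SGD with weight decay: the update shrinks the weights
    # toward zero in addition to following the loss gradient,
    # w <- w - lr * (grad + lambda * w).
    return w - lr * (grad + weight_decay * w)

w = np.array([1.0, -2.0, 3.0])
# Even with zero loss gradient, weight decay shrinks the weight norm.
w_next = sgd_step(w, grad=np.zeros_like(w))
print(np.linalg.norm(w_next) < np.linalg.norm(w))  # True
```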
no code implementations • 18 Sep 2021 • Rahul Parhi, Robert D. Nowak
We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks.
no code implementations • 7 May 2021 • Rahul Parhi, Robert D. Nowak
The function space consists of compositions of functions from the Banach spaces of second-order bounded variation in the Radon domain.
no code implementations • 10 Jun 2020 • Rahul Parhi, Robert D. Nowak
We derive a representer theorem showing that finite-width, single-hidden layer neural networks are solutions to these inverse problems.
no code implementations • 3 Feb 2020 • Matthew L. Malloy, Ardhendu Tripathy, Robert D. Nowak
More precisely, consider an empirical distribution $\widehat{\boldsymbol{p}}$ generated from $n$ iid realizations of a random variable that takes one of $k$ possible values according to an unknown distribution $\boldsymbol{p}$.
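The setup described above is easy to reproduce numerically. The following sketch (assumed, not from the paper; the true distribution is fixed here purely for illustration) builds the empirical distribution $\widehat{\boldsymbol{p}}$ from $n$ iid draws:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 10_000
p = np.array([0.5, 0.25, 0.15, 0.1])  # "unknown" distribution over k values

samples = rng.choice(k, size=n, p=p)            # n iid realizations
p_hat = np.bincount(samples, minlength=k) / n   # empirical distribution

print(p_hat.sum())  # 1.0: p_hat is itself a probability vector
```

By the law of large numbers, `p_hat` converges to `p` entrywise as `n` grows.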
no code implementations • 5 Oct 2019 • Rahul Parhi, Robert D. Nowak
A wide variety of activation functions have been proposed for neural networks.
no code implementations • 29 May 2019 • Mina Karzand, Robert D. Nowak
Generating labeled training datasets has become a major bottleneck in Machine Learning (ML) pipelines.
no code implementations • 26 Apr 2018 • Greg Ongie, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, Robert D. Nowak
This approach will succeed in many cases where traditional LRMC is guaranteed to fail because the data are low-rank in the tensorized representation but not in the original representation.
1 code implementation • ICML 2017 • Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano
We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations.
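A toy illustration of the tensorized-representation idea (my construction, not the paper's code): points on the unit circle form a full-rank $2 \times n$ data matrix, but lifting each point through the degree-2 monomial map reveals a rank-deficient structure induced by the polynomial constraint $x^2 + y^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, size=20)
X = np.vstack([np.cos(theta), np.sin(theta)])  # 2 x 20 points on the unit circle

# Degree-2 monomial lift of each column: [1, x, y, x^2, xy, y^2]
x, y = X
Phi = np.vstack([np.ones_like(x), x, y, x**2, x * y, y**2])  # 6 x 20

print(np.linalg.matrix_rank(X))    # 2: no low-rank structure in the original data
print(np.linalg.matrix_rank(Phi))  # 5: one linear relation, from x^2 + y^2 - 1 = 0
```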
no code implementations • 9 Mar 2015 • Daniel L. Pimentel-Alarcón, Nigel Boston, Robert D. Nowak
Finite completability is the tipping point in LRMC, as a few additional samples of a finitely completable matrix guarantee its unique completability.
no code implementations • 2 Oct 2014 • Daniel L. Pimentel-Alarcón, Robert D. Nowak, Nigel Boston
Consider a generic $r$-dimensional subspace of $\mathbb{R}^d$, $r<d$, and suppose that we are only given projections of this subspace onto small subsets of the canonical coordinates.
no code implementations • 14 Sep 2014 • Mario A. T. Figueiredo, Robert D. Nowak
This paper studies ordered weighted L1 (OWL) norm regularization for sparse estimation problems with strongly correlated variables.
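For reference, the OWL norm of a vector $x$ with a nonincreasing, nonnegative weight vector $w$ is $\sum_i w_i\,|x|_{(i)}$, where $|x|_{(1)} \ge |x|_{(2)} \ge \cdots$ are the sorted magnitudes. A minimal sketch of this definition (the function name is mine):

```python
import numpy as np

def owl_norm(x, w):
    # Ordered weighted L1 norm: pair the largest weight with the
    # largest magnitude, the second largest with the second, etc.
    mags = np.sort(np.abs(x))[::-1]
    return float(np.dot(w, mags))

x = np.array([3.0, -1.0, 2.0])
print(owl_norm(x, np.array([1.0, 1.0, 1.0])))  # 6.0: equal weights recover L1
print(owl_norm(x, np.array([1.0, 0.0, 0.0])))  # 3.0: top weight only recovers L-infinity
```

These two special cases show OWL interpolating between the L1 and L-infinity norms, which is what lets it cluster the coefficients of strongly correlated variables.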
no code implementations • 13 Apr 2014 • Divyanshu Vats, Robert D. Nowak, Richard G. Baraniuk
This paper studies graphical model selection, i.e., the problem of estimating a graph of statistical relationships among a collection of random variables.
no code implementations • 26 Jun 2013 • Matthew L. Malloy, Robert D. Nowak
The algorithm, termed Compressive Adaptive Sense and Search (CASS), is shown to be near-optimal in that it succeeds at the lowest possible signal-to-noise ratio (SNR) levels, improving on previous work in adaptive compressed sensing.
no code implementations • 6 Sep 2012 • Matthew L. Malloy, Gongguo Tang, Robert D. Nowak
We consider a large number of populations, each corresponding to either distribution P0 or P1.
no code implementations • NeurIPS 2011 • Kevin G. Jamieson, Robert D. Nowak
We show that under this assumption the number of possible rankings grows like $n^{2d}$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than $d \log n$ adaptively selected pairwise comparisons, on average.
no code implementations • 22 Oct 2009 • Robert D. Nowak
GBS is a well-known greedy algorithm for determining a binary-valued function through a sequence of strategically selected queries.