Search Results for author: Daniel LeJeune

Found 17 papers, 13 papers with code

Asymptotically free sketched ridge ensembles: Risks, cross-validation, and tuning

1 code implementation · 6 Oct 2023 · Pratik Patil, Daniel LeJeune

We also propose an "ensemble trick" whereby the risk for unsketched ridge regression can be efficiently estimated via GCV using small sketched ridge ensembles.

Tasks: Prediction Intervals, regression
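
The general flavor of the setup can be sketched in a few lines: fit ridge on several independently sketched (here, Gaussian-projected) copies of the features and average the predictions, with GCV computed for the unsketched ridge fit as a reference. This is an illustrative toy with arbitrary sizes and penalty, not the paper's risk estimator or its "ensemble trick".

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, K, lam = 200, 50, 25, 20, 1.0  # samples, features, sketch size, ensemble size, penalty

X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

def sketched_ridge_predict(X, y, lam, q, rng):
    """One ridge fit on Gaussian-sketched features X @ S, with S in R^{p x q}."""
    S = rng.standard_normal((X.shape[1], q)) / np.sqrt(q)
    Xs = X @ S
    w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(q), Xs.T @ y)
    return Xs @ w

# Ensemble average over K independent sketches.
ens_pred = np.mean([sketched_ridge_predict(X, y, lam, q, rng) for _ in range(K)], axis=0)
ens_mse = np.mean((y - ens_pred) ** 2)

# Standard GCV score of the *unsketched* ridge fit, for comparison.
L = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # ridge hat matrix
gcv = np.mean((y - L @ y) ** 2) / (1 - np.trace(L) / n) ** 2
```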

An Adaptive Tangent Feature Perspective of Neural Networks

1 code implementation · 29 Aug 2023 · Daniel LeJeune, Sina Alemohammad

In order to better understand feature learning in neural networks, we propose a framework for understanding linear models in tangent feature space where the features are allowed to be transformed during training.
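
Tangent features are the gradients of the network output with respect to its parameters. A minimal toy illustration, assuming a one-hidden-layer tanh network and a ridge fit in the resulting feature space (the network, data, and penalty are arbitrary choices, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny net f(x; W, v) = v @ tanh(W @ x); tangent features are df/dtheta, flattened.
d, h = 3, 4
W = rng.standard_normal((h, d)) * 0.5
v = rng.standard_normal(h) * 0.5

def tangent_features(x, W, v):
    a = np.tanh(W @ x)
    grad_v = a                              # df/dv
    grad_W = np.outer(v * (1 - a ** 2), x)  # df/dW via the chain rule
    return np.concatenate([grad_v, grad_W.ravel()])

# Linear model in tangent feature space: ridge on phi(x) instead of x.
n = 50
Xdata = rng.standard_normal((n, d))
y = np.sin(Xdata[:, 0])  # toy target
Phi = np.stack([tangent_features(x, W, v) for x in Xdata])
lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
pred = Phi @ theta
```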

Self-Consuming Generative Models Go MAD

no code implementations · 4 Jul 2023 · Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk

Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models.

Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization

1 code implementation · 20 Oct 2022 · Daniel LeJeune, Jiayu Liu, Reinhard Heckel

Machine learning systems are often applied to data that is drawn from a different distribution than the training distribution.

Tasks: Relation

The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization

1 code implementation · NeurIPS 2021 · Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training.
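
A toy sketch of the broad idea of adaptive masking during training: gradient steps interleaved with periodically re-selecting the largest-magnitude weights. The data, schedule, and magnitude rule here are illustrative assumptions, not the paper's adaptive dropout scheme or its regularization dual.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 200, 30, 5
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:k] = 2.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Gradient descent on squared loss, periodically zeroing (masking) all but
# the k largest-magnitude weights.
w = np.zeros(p)
lr = 1e-3
for step in range(500):
    grad = X.T @ (X @ w - y) / n
    w -= lr * grad
    if step % 50 == 49:
        keep = np.argsort(np.abs(w))[-k:]        # indices of the k largest weights
        mask = np.zeros(p, bool); mask[keep] = True
        w = np.where(mask, w, 0.0)

support = set(np.flatnonzero(w))
```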

Extreme Compressed Sensing of Poisson Rates from Multiple Measurements

1 code implementation · 15 Mar 2021 · Pavan K. Kota, Daniel LeJeune, Rebekah A. Drezek, Richard G. Baraniuk

Here, we present the first exploration of the MMV problem where signals are independently drawn from a sparse, multivariate Poisson distribution.
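
The multiple measurement vector (MMV) setup can be sketched as follows: many signals drawn with Poisson entries on a shared sparse support, each observed through the same sensing matrix. The binary sensing matrix, projected-gradient nonnegative least squares on the averaged measurement, and all dimensions are illustrative assumptions; the paper develops a proper MMV recovery algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, D, k = 50, 30, 10, 3  # signal dim, num measurement vectors, sensors, sparsity

# Shared k-sparse support; each of the M signals has independent Poisson entries.
support = rng.choice(N, size=k, replace=False)
rates = np.zeros(N); rates[support] = rng.uniform(2.0, 5.0, size=k)
Xsig = rng.poisson(np.tile(rates, (M, 1))).T            # N x M signals
Phi = rng.binomial(1, 0.5, size=(D, N)).astype(float)   # random binary sensing matrix
Y = Phi @ Xsig                                          # D x M measurements

# Naive rate estimate: projected-gradient nonnegative least squares on the
# averaged measurement (a crude stand-in for a real solver).
ybar = Y.mean(axis=1)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
rate_hat = np.zeros(N)
for _ in range(2000):
    rate_hat = np.maximum(0.0, rate_hat - step * Phi.T @ (Phi @ rate_hat - ybar))
```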

The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression

no code implementations · 9 Mar 2021 · Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk

We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.

Tasks: Philosophy, regression, +1
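
The optimization described above, min_w ||Xw - y||^2 + lam ||w - w_src||^2, has a closed form: w = (X^T X + lam I)^{-1} (X^T y + lam w_src), i.e. ridge regression shrunk toward the source parameters rather than toward zero. A minimal sketch with toy data (the dimensions and penalty are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, lam = 40, 10, 5.0
X = rng.standard_normal((n, p))
w_src = rng.standard_normal(p)                  # already-learned source parameters
w_tgt = w_src + 0.2 * rng.standard_normal(p)    # target task close to the source
y = X @ w_tgt + 0.1 * rng.standard_normal(n)

# Closed-form minimizer of ||Xw - y||^2 + lam * ||w - w_src||^2.
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * w_src)
```

As lam goes to zero this recovers plain OLS on the target data; as lam grows it returns w_src unchanged.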

The Implicit Regularization of Ordinary Least Squares Ensembles

1 code implementation · 10 Oct 2019 · Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Ensemble methods that average over a collection of independent predictors, each limited to a subsample of both the examples and the features of the training data, command a significant presence in machine learning; the ever-popular random forest is one example. Yet the nature of the subsampling effect, particularly on the features, is not well understood.
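
The object of study can be sketched directly: each ensemble member is an OLS fit on a random subset of rows and columns, zero-padded back to full dimension, and the ensemble averages the coefficient vectors. Subsample sizes and data here are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, K = 300, 40, 50
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

def subsampled_ols(X, y, n_sub, p_sub, rng):
    """OLS on a random subsample of examples AND features,
    returned as a full-length coefficient vector (zeros elsewhere)."""
    rows = rng.choice(len(y), size=n_sub, replace=False)
    cols = rng.choice(X.shape[1], size=p_sub, replace=False)
    w_sub, *_ = np.linalg.lstsq(X[np.ix_(rows, cols)], y[rows], rcond=None)
    w = np.zeros(X.shape[1]); w[cols] = w_sub
    return w

# Averaging K such predictors behaves like an implicitly regularized (ridge-type) fit.
w_ens = np.mean([subsampled_ols(X, y, 150, 20, rng) for _ in range(K)], axis=0)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # full OLS, for comparison
```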

Implicit Rugosity Regularization via Data Augmentation

no code implementations · 28 May 2019 · Daniel LeJeune, Randall Balestriero, Hamid Javadi, Richard G. Baraniuk

Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks.

Tasks: Data Augmentation

Thresholding Graph Bandits with GrAPL

1 code implementation · 22 May 2019 · Daniel LeJeune, Gautam Dasarathy, Richard G. Baraniuk

The main goal is to efficiently identify a subset of arms in a multi-armed bandit problem whose means are above a specified threshold.

Tasks: Decision Making
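
The thresholding bandit goal (flag every arm whose mean exceeds a threshold) can be illustrated with a naive uniform-sampling baseline; the arm means, budget, and round-robin policy are toy assumptions, and GrAPL's key ingredient, exploiting a similarity graph over the arms, is not shown here.

```python
import numpy as np

rng = np.random.default_rng(6)
means = np.array([0.1, 0.2, 0.35, 0.65, 0.8, 0.9])  # hypothetical Bernoulli arm means
tau = 0.5                                           # threshold to beat
T = 3000                                            # total sampling budget

# Naive baseline: pull arms round-robin, then threshold the empirical means.
K = len(means)
pulls = np.zeros(K); totals = np.zeros(K)
for t in range(T):
    a = t % K                               # round-robin arm choice
    totals[a] += rng.binomial(1, means[a])  # Bernoulli reward
    pulls[a] += 1
est = totals / pulls
above = set(np.flatnonzero(est > tau))      # arms declared above threshold
```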

Adaptive Estimation for Approximate k-Nearest-Neighbor Computations

1 code implementation · 25 Feb 2019 · Daniel LeJeune, Richard G. Baraniuk, Reinhard Heckel

Algorithms often carry out equally many computations for "easy" and "hard" problem instances.
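
One way to spend less on "easy" instances: estimate each distance cheaply from a small random subset of coordinates, then compute exact distances only for a shortlist of promising candidates. This is a simplified illustration of the adaptive-sampling idea, not the paper's algorithm; the planted near-duplicate point and all sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, k = 500, 1000, 5
Xdb = rng.standard_normal((n, d))
query = rng.standard_normal(d)
Xdb[0] = query + 0.01 * rng.standard_normal(d)  # plant an obvious nearest neighbor

# Cheap pass: estimate squared distances from m sampled coordinates per point.
m = 50
idx = rng.choice(d, size=m, replace=False)
est = ((Xdb[:, idx] - query[idx]) ** 2).sum(axis=1) * (d / m)

# Expensive pass: exact distances only on a small shortlist.
cand = np.argsort(est)[: 5 * k]                 # shortlist by cheap estimates
exact = ((Xdb[cand] - query) ** 2).sum(axis=1)  # full d-dim computation
knn = cand[np.argsort(exact)[:k]]               # approximate k nearest neighbors
```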

Ultra Large-Scale Feature Selection using Count-Sketches

1 code implementation · ICML 2018 · Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.

Tasks: BIG-bench Machine Learning, feature selection

MISSION: Ultra Large-Scale Feature Selection using Count-Sketches

1 code implementation · 12 Jun 2018 · Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.

Tasks: BIG-bench Machine Learning, feature selection
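
The data structure at the heart of this line of work is the count-sketch, which stores a huge weight vector in sublinear memory and recovers the heavy (important) features by a median-of-signed-buckets query. A minimal sketch of that core idea, with toy dimensions, hash tables drawn as random arrays, and simulated gradient updates; this is not MISSION's full training pipeline.

```python
import numpy as np

rng = np.random.default_rng(8)
p, width, depth = 2000, 1024, 5  # feature dim, sketch width, number of hash rows

# Count-sketch: each feature hashes to one bucket per row, with a random sign.
h = rng.integers(0, width, size=(depth, p))    # bucket index per (row, feature)
s = rng.choice([-1.0, 1.0], size=(depth, p))   # sign per (row, feature)
table = np.zeros((depth, width))
rows = np.arange(depth)

def update(j, delta):
    """Add delta to feature j's sketched weight."""
    table[rows, h[:, j]] += s[:, j] * delta

def query(j):
    """Median-of-rows estimate of feature j's weight (robust to collisions)."""
    return np.median(s[:, j] * table[rows, h[:, j]])

# Simulate sparse updates: a few heavy features, plus many tiny ones.
heavy = [7, 123, 1501]
for _ in range(200):
    for j in heavy:
        update(j, 1.0)
    update(int(rng.integers(0, p)), 0.01)

# Feature selection = reading off the heaviest features from the sketch.
top3 = sorted(range(p), key=query)[-3:]
```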
