Search Results for author: Richard Baraniuk

Found 51 papers, 18 papers with code

Deep Networks Always Grok and Here is Why

no code implementations 23 Feb 2024 Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk

Our local complexity measures the density of the so-called 'linear regions' (aka, spline partition regions) that tile the DNN input space, and serves as a useful progress measure for training.
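As a rough illustration of the idea (not the paper's exact estimator), one can approximate the local complexity of a toy ReLU network by counting how many distinct activation patterns, and hence linear regions, random points in a small neighborhood fall into; the network weights and the Gaussian neighborhood below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with random weights.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

def activation_pattern(x):
    """On/off state of every ReLU unit; one pattern <-> one linear region."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2
    return tuple((np.concatenate([h1, h2]) > 0).astype(int))

def local_complexity(x, radius=0.5, n_samples=2000):
    """Count distinct linear regions hit by random points near x."""
    pts = x + radius * rng.normal(size=(n_samples, x.size))
    return len({activation_pattern(p) for p in pts})

print(local_complexity(np.zeros(2)))
```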

Perspectives on the State and Future of Deep Learning -- 2023

no code implementations 7 Dec 2023 Micah Goldblum, Anima Anandkumar, Richard Baraniuk, Tom Goldstein, Kyunghyun Cho, Zachary C Lipton, Melanie Mitchell, Preetum Nakkiran, Max Welling, Andrew Gordon Wilson

The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time.

Benchmarking

Training Dynamics of Deep Network Linear Regions

no code implementations 19 Oct 2023 Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk

First, we present a novel statistic that encompasses the local complexity (LC) of the DN based on the concentration of linear regions inside arbitrary dimensional neighborhoods around data points.

Memorization

MultiQG-TI: Towards Question Generation from Multi-modal Sources

1 code implementation 7 Jul 2023 Zichao Wang, Richard Baraniuk

We study the new problem of automatic question generation (QG) from multi-modal sources containing images and texts, significantly expanding the scope of existing work, which focuses exclusively on QG from textual sources.

Optical Character Recognition Question Generation +1

SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries

1 code implementation CVPR 2023 Ahmed Imtiaz Humayun, Randall Balestriero, Guha Balakrishnan, Richard Baraniuk

In this paper, we go one step further by developing the first provably exact method for computing the geometry of a DN's mapping - including its decision boundary - over a specified region of the data space.

Unsupervised Learning of Sampling Distributions for Particle Filters

no code implementations 2 Feb 2023 Fernando Gama, Nicolas Zilberstein, Martin Sevilla, Richard Baraniuk, Santiago Segarra

Thus, the crux of particle filters lies in designing sampling distributions that are both easy to sample from and lead to accurate estimators.

Design Synthesis
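For context on the sampling-distribution question raised in the abstract above, here is a minimal bootstrap particle filter in which the proposal is simply the transition prior; the scalar random-walk state model and Gaussian likelihood are illustrative assumptions, not the paper's learned proposal.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                 # number of particles

def transition(x):                       # assumed state model: random walk
    return x + rng.normal(0.0, 0.5, size=x.shape)

def likelihood(y, x):                    # assumed observation model: y = x + noise
    return np.exp(-0.5 * ((y - x) / 0.3) ** 2)

particles = rng.normal(0.0, 1.0, size=N)
for y in [0.2, 0.5, 0.9]:                # toy measurement sequence
    particles = transition(particles)    # sample from the proposal
    w = likelihood(y, particles)         # reweight by the observation
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]   # bootstrap resampling
    print("posterior mean estimate:", particles.mean())
```

The closer the proposal is to the true posterior, the fewer particles are wasted in low-likelihood regions, which is what motivates learning the sampling distribution.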

Retrieval-based Controllable Molecule Generation

1 code implementation 23 Aug 2022 Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard Baraniuk, Anima Anandkumar

On various tasks ranging from simple design criteria to a challenging real-world scenario for designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate that our approach extrapolates well beyond the retrieval database and achieves better performance and wider applicability than previous methods.

Drug Discovery Retrieval

Automated Scoring for Reading Comprehension via In-context BERT Tuning

1 code implementation 19 May 2022 Nigel Fernandez, Aritra Ghosh, Naiming Liu, Zichao Wang, Benoît Choffin, Richard Baraniuk, Andrew Lan

Our approach, in-context BERT fine-tuning, produces a single shared scoring model for all items with a carefully-designed input structure to provide contextual information on each item.

Reading Comprehension
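A minimal sketch of the input structure described above, assuming the Hugging Face transformers library; the model name, label count, and example texts are placeholders, not the authors' released artifacts.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# One shared scoring model; item context travels in-context with the answer.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)       # e.g., score levels 0-3

item_context = "Question: Why does the author mention the storm? Rubric: ..."
student_answer = "The storm shows how the character feels inside."

# Sentence-pair encoding: [CLS] item context [SEP] student answer [SEP]
inputs = tok(item_context, student_answer, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted score:", logits.argmax(-1).item())
```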

No More Than 6ft Apart: Robust K-Means via Radius Upper Bounds

1 code implementation 4 Mar 2022 Ahmed Imtiaz Humayun, Randall Balestriero, Anastasios Kyrillidis, Richard Baraniuk

We propose to remedy such a scenario by introducing a maximal radius constraint $r$ on the clusters formed by the centroids, i.e., samples from the same cluster should not be more than $2r$ apart in terms of $\ell_2$ distance.

Clustering
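A simplified sketch of the constraint (not the paper's exact optimization): Lloyd-style iterations in which points farther than $r$ from every centroid stay unassigned, so each cluster has radius at most $r$ and therefore diameter at most $2r$.

```python
import numpy as np

def radius_kmeans(X, k, r, n_iter=50, seed=0):
    """Lloyd-style k-means where no cluster may extend past radius r."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)  # (n, k)
        labels = d.argmin(axis=1)
        labels[d.min(axis=1) > r] = -1      # too far from every centroid
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

X = np.random.default_rng(1).normal(size=(200, 2))
centroids, labels = radius_kmeans(X, k=3, r=1.0)
```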

Polarity Sampling: Quality and Diversity Control of Pre-Trained Generative Networks via Singular Values

1 code implementation CVPR 2022 Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk

We present Polarity Sampling, a theoretically justified plug-and-play method for controlling the generation quality and diversity of pre-trained deep generative networks (DGNs).

Image Generation Unconditional Image Generation
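A toy numerical version of the mechanism, under strong simplifying assumptions (2-D latent space, smooth hand-built generator, finite-difference Jacobians): latent samples are reweighted by the product of the generator's Jacobian singular values raised to a polarity $\rho$, with $\rho < 0$ favoring high-density (quality) and $\rho > 0$ favoring low-density (diversity) regions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))

def generator(z):                        # toy smooth generator R^2 -> R^3
    return np.tanh(W @ z)

def singular_values(z, eps=1e-5):
    """Finite-difference Jacobian of the generator, then its singular values."""
    J = np.stack([(generator(z + eps * e) - generator(z - eps * e)) / (2 * eps)
                  for e in np.eye(2)], axis=1)
    return np.linalg.svd(J, compute_uv=False)

rho = -1.0                               # polarity: <0 quality, >0 diversity
Z = rng.normal(size=(5000, 2))
w = np.array([np.prod(singular_values(z)) ** rho for z in Z])
w /= w.sum()
picked = Z[rng.choice(len(Z), size=1000, p=w)]   # polarity-reweighted latents
samples = np.array([generator(z) for z in picked])
```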

Spatial Transformer K-Means

no code implementations 16 Feb 2022 Romain Cosentino, Randall Balestriero, Yanis Bahroun, Anirvan Sengupta, Richard Baraniuk, Behnaam Aazhang

This enables (i) the reduction of intrinsic nuisances associated with the data, which lowers the complexity of the clustering task, increases performance, and produces state-of-the-art results, (ii) clustering in the input space of the data, leading to a fully interpretable clustering algorithm, and (iii) the benefit of convergence guarantees.

Clustering

Covariate Balancing Methods for Randomized Controlled Trials Are Not Adversarially Robust

no code implementations 25 Oct 2021 Hossein Babaei, Sina Alemohammad, Richard Baraniuk

Covariate balancing methods increase the similarity between the distributions of the two groups' covariates.

Adversarial Attack
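One widely used balancing method, rerandomization, can be sketched as follows (an illustrative toy, not the paper's adversarial analysis): redraw the random treatment split until the two groups' covariate means are close.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # subjects' covariates

def imbalance(assign):
    """Distance between the two groups' covariate means."""
    return np.linalg.norm(X[assign == 1].mean(0) - X[assign == 0].mean(0))

# Rerandomization: reject splits until balance passes a threshold.
while True:
    assign = rng.permutation(np.repeat([0, 1], 50))
    if imbalance(assign) < 0.3:
        break
print("covariate imbalance:", round(imbalance(assign), 3))
```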

MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining

1 code implementation ICLR 2022 Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk

Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold and distribution.

Data Augmentation Domain Adaptation +2

Embedding models through the lens of Stable Coloring

no code implementations 29 Sep 2021 Aditya Desai, Shashank Sonkar, Anshumali Shrivastava, Richard Baraniuk

Grounded on this framework, we show that many algorithms ranging across different domains are, in fact, searching for continuous stable coloring solutions of an underlying graph corresponding to the domain.

Denoising
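For reference, the discrete stable coloring that the paper relaxes to a continuous form is classic 1-Weisfeiler-Lehman color refinement; a minimal sketch on an adjacency-list graph:

```python
def stable_coloring(adj):
    """Refine vertex colors until stable (1-WL color refinement)."""
    colors = {v: 0 for v in adj}
    while True:
        # New color = old color plus the multiset of neighbor colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: relabel[sigs[v]] for v in adj}
        if new_colors == colors:
            return colors
        colors = new_colors

graph = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # small example graph
print(stable_coloring(graph))                     # {0: 1, 1: 0, 2: 1, 3: 0}
```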

A Step-Wise Weighting Approach for Controllable Text Generation

no code implementations 29 Sep 2021 Zichao Wang, Weili Nie, Zhenwei Dai, Richard Baraniuk

Many existing approaches either require extensive training/fine-tuning of the LM for each single attribute under control or are slow to generate text.

Attribute Language Modelling +1
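A generic weighted-decoding step in this spirit (an illustrative sketch, not the paper's exact weighting scheme): at each step, bias the frozen LM's next-token distribution with scores from an attribute model instead of fine-tuning the LM per attribute.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def guided_step(lm_logits, attribute_scores, weight=2.0):
    """Bias the next-token distribution toward on-attribute tokens."""
    return softmax(lm_logits + weight * attribute_scores)

vocab = ["good", "bad", "movie", "plot"]
lm_logits = np.array([0.5, 1.2, 2.0, 1.0])       # frozen LM preferences
positivity = np.array([2.0, -2.0, 0.0, 0.0])     # attribute model scores
print(dict(zip(vocab, guided_step(lm_logits, positivity).round(3))))
```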

Thermal Image Processing via Physics-Inspired Deep Networks

1 code implementation 18 Aug 2021 Vishwanath Saragadam, Akshat Dave, Ashok Veeraraghavan, Richard Baraniuk

We introduce DeepIR, a new thermal image processing framework that combines physically accurate sensor modeling with deep network-based image representation.

Denoising Sensor Modeling +1

Double Descent and Other Interpolation Phenomena in GANs

no code implementations 7 Jun 2021 Lorenzo Luzi, Yehuda Dar, Richard Baraniuk

We show that overparameterization can improve generalization performance and accelerate the training process.

Math Operation Embeddings for Open-ended Solution Analysis and Feedback

no code implementations 25 Apr 2021 Mengxue Zhang, Zichao Wang, Richard Baraniuk, Andrew Lan

Feedback on student answers, and even on the intermediate steps in their solutions to open-ended questions, is an important element in math education.

Math

Fast Jacobian-Vector Product for Deep Networks

no code implementations 1 Apr 2021 Randall Balestriero, Richard Baraniuk

Jacobian-vector products (JVPs) form the backbone of many recent developments in Deep Networks (DNs), with applications including faster constrained optimization, regularization with generalization guarantees, and adversarial example sensitivity assessments.
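For readers unfamiliar with the primitive, here is a standard JVP computed with PyTorch's functional API (which internally uses the double-backward trick whose cost the paper aims to reduce; this is not the paper's fast method):

```python
import torch
from torch.autograd.functional import jvp

net = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))

x = torch.randn(10)          # input point
v = torch.randn(10)          # tangent direction

# J(x) @ v without ever materializing the full 4x10 Jacobian.
out, jv = jvp(lambda t: net(t), (x,), (v,))
print(jv.shape)              # torch.Size([4])
```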

Max-Affine Spline Insights Into Deep Network Pruning

no code implementations 7 Jan 2021 Haoran You, Randall Balestriero, Zhihan Lu, Yutong Kou, Huihong Shi, Shunyao Zhang, Shang Wu, Yingyan Lin, Richard Baraniuk

In this paper, we study the importance of pruning in Deep Networks (DNs) and the yin & yang relationship between (1) pruning highly overparametrized DNs that have been trained from random initialization and (2) training small DNs that have been "cleverly" initialized.

Network Pruning

SASSI -- Super-Pixelated Adaptive Spatio-Spectral Imaging

1 code implementation 28 Dec 2020 Vishwanath Saragadam, Michael DeZeeuw, Richard Baraniuk, Ashok Veeraraghavan, Aswin Sankaranarayanan

Hence, a scene-adaptive spatial sampling of a hyperspectral scene, guided by its super-pixel segmented image, is capable of obtaining high-quality reconstructions.

Enhanced Recurrent Neural Tangent Kernels for Non-Time-Series Data

2 code implementations 9 Dec 2020 Sina Alemohammad, Randall Balestriero, Zichao Wang, Richard Baraniuk

Kernels derived from deep neural networks (DNNs) in the infinite-width regime provide not only high performance in a range of machine learning tasks but also new theoretical insights into DNN training dynamics and generalization.

Time Series Time Series Analysis
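As a point of reference, the finite-width empirical NTK is simply the inner product of parameter gradients at two inputs; a minimal PyTorch sketch (this is not the paper's analytic infinite-width kernel):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))

def grad_vector(x):
    """Flattened gradient of the scalar output w.r.t. all parameters."""
    net.zero_grad()
    net(x).sum().backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

x1, x2 = torch.randn(8), torch.randn(8)
k12 = grad_vector(x1) @ grad_vector(x2)   # empirical NTK entry K(x1, x2)
print(float(k12))
```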

Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks

no code implementations NeurIPS 2020 Randall Balestriero, Sebastien Paris, Richard Baraniuk

Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs).

Anomaly Detection Imputation +1

Deep Autoencoders: From Understanding to Generalization Guarantees

no code implementations 20 Sep 2020 Romain Cosentino, Randall Balestriero, Richard Baraniuk, Behnaam Aazhang

Our regularizations leverage recent advances in the group of transformation learning to enable AEs to better approximate the data manifold without explicitly defining the group underlying the manifold.

Denoising

The Recurrent Neural Tangent Kernel

no code implementations ICLR 2021 Sina Alemohammad, Zichao Wang, Randall Balestriero, Richard Baraniuk

The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization.

VarFA: A Variational Factor Analysis Framework For Efficient Bayesian Learning Analytics

no code implementations 27 May 2020 Zichao Wang, Yi Gu, Andrew Lan, Richard Baraniuk

We propose VarFA, a variational inference factor analysis framework that extends existing factor analysis models for educational data mining to efficiently output uncertainty estimation in the model's estimated factors.

Bayesian Inference Variational Inference

Max-Affine Spline Insights into Deep Generative Networks

1 code implementation 26 Feb 2020 Randall Balestriero, Sebastien Paris, Richard Baraniuk

We also derive the output probability density mapped onto the generated manifold in terms of the latent space density, which enables the computation of key statistics such as its Shannon entropy.

Disentanglement
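The statistic referred to above is the standard change-of-variables density for a (locally injective) smooth generator $G$ with Jacobian $J_z = \partial G(z)/\partial z$; a compact statement consistent with the abstract, assuming $J_z$ has full column rank:

```latex
% Density induced on the generated manifold by the latent density p_z:
p_G\bigl(G(z)\bigr) = p_z(z)\,\det\!\bigl(J_z^\top J_z\bigr)^{-1/2},
\qquad
H(p_G) = -\,\mathbb{E}_{z \sim p_z}\bigl[\log p_G\bigl(G(z)\bigr)\bigr].
```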

A Goodness of Fit Measure for Generative Networks

no code implementations 25 Sep 2019 Lorenzo Luzi, Randall Balestriero, Richard Baraniuk

We define a goodness of fit measure for generative networks that captures how well the network can generate the training data, a necessary condition for learning the true data distribution.

Dual Dynamic Inference: Enabling More Efficient, Adaptive and Controllable Deep Inference

no code implementations 10 Jul 2019 Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, Yingyan Lin

State-of-the-art convolutional neural networks (CNNs) yield record-breaking predictive performance, yet at the cost of high-energy-consumption inference, which prohibits their wide deployment in resource-constrained Internet of Things (IoT) applications.

The Geometry of Deep Networks: Power Diagram Subdivision

1 code implementation NeurIPS 2019 Randall Balestriero, Romain Cosentino, Behnaam Aazhang, Richard Baraniuk

The subdivision process constrains the affine maps on the (exponentially many) power diagram regions to greatly reduce their complexity.

A Max-Affine Spline Perspective of Recurrent Neural Networks

no code implementations ICLR 2019 Zichao Wang, Randall Balestriero, Richard Baraniuk

Second, we show that the affine parameter of an RNN corresponds to an input-specific template, from which we can interpret an RNN as performing a simple template matching (matched filtering) given the input.

L2 Regularization Template Matching
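The matched-filtering interpretation mentioned above reduces to an inner product with a template; a toy illustration (the template and signals are made up for the example):

```python
import numpy as np

def matched_filter_score(x, template):
    """Matched filter response: inner product of input and template."""
    return float(np.dot(x, template))

t = np.linspace(0, 1, 100)
template = np.sin(2 * np.pi * 5 * t)             # pattern being looked for
rng = np.random.default_rng(0)
signal = template + 0.3 * rng.normal(size=100)   # contains the pattern
noise = rng.normal(size=100)                     # does not

print(matched_filter_score(signal, template))    # large positive response
print(matched_filter_score(noise, template))     # response near zero
```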

EnergyNet: Energy-Efficient Dynamic Inference

no code implementations NIPS Workshop CDNNRIA 2018 Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, Richard Baraniuk

The prohibitive energy cost of running high-performance Convolutional Neural Networks (CNNs) has been limiting their deployment on resource-constrained platforms including mobile and wearable devices.

Spline Filters For End-to-End Deep Learning

no code implementations ICML 2018 Randall Balestriero, Romain Cosentino, Herve Glotin, Richard Baraniuk

We propose to tackle the problem of end-to-end learning for raw waveform signals by introducing learnable continuous time-frequency atoms.

Mad Max: Affine Spline Insights into Deep Learning

no code implementations 17 May 2018 Randall Balestriero, Richard Baraniuk

For instance, conditioned on the input signal, the output of a MASO DN can be written as a simple affine transformation of the input.

Clustering General Classification +2
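This input-conditioned affine form is easy to verify numerically on a toy ReLU network: within an input's linear region, the Jacobian and offset recovered at that input reproduce the network exactly. A minimal sketch (the architecture is an arbitrary assumption):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 3))
x = torch.randn(5)

# Input-conditioned affine parameters: A = Jacobian at x, b = f(x) - A x.
A = torch.autograd.functional.jacobian(lambda t: net(t), x)   # shape (3, 5)
b = net(x) - A @ x

# Nearby inputs in the same spline region obey f(x') = A x' + b exactly.
x_nearby = x + 1e-4 * torch.randn(5)
print(torch.allclose(net(x_nearby), A @ x_nearby + b, atol=1e-5))  # True
```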

Semi-Supervised Learning Enabled by Multiscale Deep Neural Network Inversion

no code implementations 27 Feb 2018 Randall Balestriero, Herve Glotin, Richard Baraniuk

Deep Neural Networks (DNNs) provide state-of-the-art solutions in several difficult machine perceptual tasks.

Deep Neural Networks

no code implementations 25 Oct 2017 Randall Balestriero, Richard Baraniuk

Deep Neural Networks (DNNs) are universal function approximators providing state-of-the-art solutions on a wide range of applications.

Image Classification Object Tracking +2

FlatCam: Thin, Bare-Sensor Cameras using Coded Aperture and Computation

2 code implementations 1 Sep 2015 M. Salman Asif, Ali Ayremlou, Aswin Sankaranarayanan, Ashok Veeraraghavan, Richard Baraniuk

FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array.

Image Reconstruction
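A toy version of the separable coded-mask model behind FlatCam, with a Tikhonov-regularized least-squares reconstruction (the mask codes, sizes, and regularizer are illustrative assumptions, not the paper's calibrated system):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 48                                # scene size, sensor size
PhiL = rng.choice([0.0, 1.0], size=(m, n))   # row mask code
PhiR = rng.choice([0.0, 1.0], size=(m, n))   # column mask code

X = rng.random((n, n))                                    # unknown scene
Y = PhiL @ X @ PhiR.T + 0.01 * rng.normal(size=(m, m))    # sensor image

# Separable Tikhonov-regularized inversion, one side at a time.
lam = 1e-2
AL = np.linalg.solve(PhiL.T @ PhiL + lam * np.eye(n), PhiL.T)
AR = np.linalg.solve(PhiR.T @ PhiR + lam * np.eye(n), PhiR.T)
X_hat = AL @ Y @ AR.T

print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```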

FASTA: A Generalized Implementation of Forward-Backward Splitting

2 code implementations 16 Jan 2015 Tom Goldstein, Christoph Studer, Richard Baraniuk

This is a user manual for the software package FASTA.

Mathematical Software Numerical Analysis
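For orientation, the scheme FASTA generalizes is forward-backward splitting; a minimal sketch for the lasso (this illustrates the algorithm only and is not the package's interface):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
lam = 0.1

tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size <= 1/Lipschitz
x = np.zeros(100)
for _ in range(500):
    z = x - tau * A.T @ (A @ x - b)          # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0)  # backward (prox) step

print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```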

Fast Sublinear Sparse Representation using Shallow Tree Matching Pursuit

no code implementations 1 Dec 2014 Ali Ayremlou, Thomas Goldstein, Ashok Veeraraghavan, Richard Baraniuk

Sparse approximation using highly over-complete dictionaries is a state-of-the-art tool for many imaging applications including denoising, super-resolution, compressive sensing, light-field analysis, and object recognition.

Compressive Sensing Image Denoising +2
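The baseline that such tree-based methods accelerate is plain matching pursuit; a compact sketch with a random unit-norm dictionary (illustrative sizes):

```python
import numpy as np

def matching_pursuit(y, D, n_atoms=5):
    """Greedily approximate y with a few atoms of dictionary D."""
    residual, coeffs = y.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))          # best-matching atom
        coeffs[k] += corr[k]
        residual = residual - corr[k] * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 100]          # sparse ground truth
coeffs, r = matching_pursuit(y, D)
print("residual norm:", np.linalg.norm(r))
```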

A Field Guide to Forward-Backward Splitting with a FASTA Implementation

4 code implementations 13 Nov 2014 Tom Goldstein, Christoph Studer, Richard Baraniuk

Non-differentiable and constrained optimization play a key role in machine learning, signal and image processing, communications, and beyond.

Numerical Analysis G.1.6

When in Doubt, SWAP: High-Dimensional Sparse Recovery from Correlated Measurements

no code implementations NeurIPS 2013 Divyanshu Vats, Richard Baraniuk

We consider the problem of accurately estimating a high-dimensional sparse vector using a small number of linear measurements that are contaminated by noise.

Vocal Bursts Intensity Prediction

Adaptive Primal-Dual Hybrid Gradient Methods for Saddle-Point Problems

1 code implementation 2 May 2013 Tom Goldstein, Min Li, Xiaoming Yuan, Ernie Esser, Richard Baraniuk

The Primal-Dual hybrid gradient (PDHG) method is a powerful optimization scheme that breaks complex problems into simple sub-steps.

Numerical Analysis 65K15 G.1.6
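A minimal fixed-step PDHG (Chambolle-Pock) iteration for the lasso saddle point, showing the simple sub-steps the abstract mentions; the adaptive step-size rule that is the paper's contribution is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80))
x_true = np.zeros(80); x_true[:4] = 1.0
b = A @ x_true
lam = 0.1

Lnorm = np.linalg.norm(A, 2)
tau = sigma = 0.9 / Lnorm                    # satisfies tau*sigma*||A||^2 < 1

x = np.zeros(80); y = np.zeros(40); x_bar = x.copy()
for _ in range(1000):
    # Dual step: prox of g*(y) for g(z) = 0.5 * ||z - b||^2.
    y = (y + sigma * (A @ x_bar - b)) / (1 + sigma)
    # Primal step: prox of lam * ||x||_1 is soft-thresholding.
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0)
    x_bar = 2 * x_new - x                    # extrapolation
    x = x_new

print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```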

SpaRCS: Recovering low-rank and sparse matrices from compressive measurements

no code implementations NeurIPS 2011 Andrew E. Waters, Aswin C. Sankaranarayanan, Richard Baraniuk

We consider the problem of recovering a matrix $\mathbf{M}$ that is the sum of a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from a small set of linear measurements of the form $\mathbf{y} = \mathcal{A}(\mathbf{M}) = \mathcal{A}({\bf L}+{\bf S})$.

Compressive Sensing Matrix Completion +1

Sparse Signal Recovery Using Markov Random Fields

no code implementations NeurIPS 2008 Volkan Cevher, Marco F. Duarte, Chinmay Hegde, Richard Baraniuk

Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals.

Compressive Sensing

Random Projections for Manifold Learning

no code implementations NeurIPS 2007 Chinmay Hegde, Michael Wakin, Richard Baraniuk

First, we show that with a small number $M$ of {\em random projections} of sample points in $\mathbb{R}^N$ belonging to an unknown $K$-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.

Dimensionality Reduction
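The enabling fact behind this result is that a small number of random projections nearly preserves pairwise distances between manifold samples; a quick numerical check (the synthetic curve and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 50                          # ambient dimension, num projections

# Points on a 1-D curve embedded in R^N.
t = rng.random(300)
B = rng.normal(size=(3, N))
data = (np.sin(4 * t)[:, None] * B[0] + np.cos(4 * t)[:, None] * B[1]
        + t[:, None] * B[2])

Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random projection matrix
proj = data @ Phi.T

d_hi = np.linalg.norm(data[0] - data[1:], axis=1)
d_lo = np.linalg.norm(proj[0] - proj[1:], axis=1)
print("worst distance distortion:", np.abs(d_lo / d_hi - 1).max())
```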
