Search Results for author: Adam Klivans

Found 20 papers, 4 papers with code

Network Pruning by Greedy Subnetwork Selection

no code implementations ICML 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

Theoretically, we show that the small networks pruned using our method achieve provably lower loss than small networks of the same size trained from scratch.

Network Pruning

One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from Electromagnetic Solvers

1 code implementation 6 Jun 2023 Sriram Ravula, Varun Gorti, Bo Deng, Swagato Chakraborty, James Pingenot, Bhyrav Mutnury, Doug Wallace, Doug Winterberg, Adam Klivans, Alexandros G. Dimakis

DIP is a technique that optimizes the weights of a randomly-initialized convolutional neural network to fit a signal from noisy or under-determined measurements.
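For readers unfamiliar with the technique, here is a minimal sketch of the deep-image-prior idea in the 1-D setting described above, written in PyTorch; the tiny architecture, synthetic signal, and hyperparameters are illustrative assumptions, not the configuration used in the paper.

# Minimal deep-image-prior-style sketch (illustrative, not the paper's model):
# optimize a randomly initialized 1-D conv net to fit noisy measurements,
# relying on early stopping and architectural bias as the only regularizers.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 256
t = torch.linspace(0, 1, n)
clean = torch.sin(2 * math.pi * 4 * t)            # hypothetical ground-truth curve
noisy = clean + 0.3 * torch.randn(n)              # corrupted observations

net = nn.Sequential(                              # small 1-D CNN, weights random at init
    nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 1, 5, padding=2),
)
z = torch.randn(1, 1, n)                          # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                           # stop early so the noise is not memorized
    opt.zero_grad()
    loss = ((net(z).squeeze() - noisy) ** 2).mean()
    loss.backward()
    opt.step()

recon = net(z).squeeze().detach()                 # denoised estimate of the curve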

Ambient Diffusion: Learning Clean Distributions from Corrupted Data

1 code implementation NeurIPS 2023 Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans

We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.

Efficiently Learning One Hidden Layer ReLU Networks From Queries

no code implementations NeurIPS 2021 Sitan Chen, Adam Klivans, Raghu Meka

While the problem of PAC learning neural networks from samples has received considerable attention in recent years, in certain settings like model extraction attacks, it is reasonable to imagine having more than just the ability to observe random labeled examples.

Model extraction · PAC learning
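To illustrate the extra power of query access mentioned above, here is a toy sketch (a hypothetical two-neuron target and simple finite-difference probing, not the algorithm from the paper): querying a black-box one-hidden-layer ReLU network along a chosen line exposes the kinks of its piecewise-linear restriction, something random labeled examples do not directly provide.

# Toy illustration of query access (not the paper's algorithm): probe a black-box
# one-hidden-layer ReLU network along a line and detect the piecewise-linear kinks.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))                       # hidden weights of a hypothetical target
a = np.array([1.0, -2.0])                         # output weights

def oracle(x):
    """Black-box query access: returns f(x) for any chosen input x."""
    return a @ np.maximum(W @ x, 0.0)

# Query the network along a chosen line x(t) = x0 + t * d.
x0, d = rng.normal(size=3), rng.normal(size=3)
ts = np.linspace(-5, 5, 2001)
vals = np.array([oracle(x0 + t * d) for t in ts])

# Kinks of the restriction show up as spikes in the second difference.
curv = np.abs(np.diff(vals, 2))
kinks = ts[1:-1][curv > 10 * np.median(curv) + 1e-12]
print("approximate kink locations along the line:", kinks)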

Tight Hardness Results for Training Depth-2 ReLU Networks

no code implementations 27 Nov 2020 Surbhi Goel, Adam Klivans, Pasin Manurangsi, Daniel Reichman

We are also able to obtain lower bounds on the running time in terms of the desired additive error $\epsilon$.

The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning

no code implementations 22 Oct 2020 Aravind Gollakota, Sushrut Karmalkar, Adam Klivans

Generalizing a beautiful work of Malach and Shalev-Shwartz (2022) that gave tight correlational SQ (CSQ) lower bounds for learning DNF formulas, we give new proofs that lower bounds on the threshold or approximate degree of any function class directly imply CSQ lower bounds for PAC or agnostic learning respectively.
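For context, a correlational statistical query in the standard sense (stated here from memory, not quoted from the paper) is one in which the learner submits a bounded query function $h : X \to [-1, 1]$ and, for target concept $c$ and distribution $D$, the oracle may return any value $v$ with $|v - \mathbb{E}_{x \sim D}[c(x)\, h(x)]| \le \tau$, where $\tau > 0$ is the tolerance; the lower bounds discussed above are proved against this type of oracle.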

From Boltzmann Machines to Neural Networks and Back Again

no code implementations NeurIPS 2020 Surbhi Goel, Adam Klivans, Frederic Koehler

Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well-known to be difficult.

Statistical-Query Lower Bounds via Functional Gradients

no code implementations NeurIPS 2020 Surbhi Goel, Aravind Gollakota, Adam Klivans

We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign).

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection

1 code implementation 3 Mar 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.

Network Pruning
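As a rough illustration of the forward-selection idea above (a toy one-hidden-layer regression with synthetic data and a simple least-squares refit; an assumption-laden sketch, not the authors' algorithm or code), one can start from an empty subnetwork and repeatedly add the hidden neuron whose inclusion most reduces the loss:

# Toy greedy forward selection (illustrative, not the paper's code): grow a
# subnetwork by repeatedly adding the hidden neuron that most reduces MSE.
import numpy as np

rng = np.random.default_rng(0)
n, d, H, budget = 200, 10, 64, 8                  # samples, input dim, hidden width, target size

X = rng.normal(size=(n, d))                       # synthetic data (assumption)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

W = rng.normal(size=(H, d))                       # stands in for the large pretrained network
A = np.maximum(X @ W.T, 0.0)                      # hidden activations, shape (n, H)

def loss(idx):
    """Refit output weights on the selected neurons and return the MSE."""
    if not idx:
        return np.mean(y ** 2)
    Phi = A[:, idx]
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ coef - y) ** 2)

selected = []
while len(selected) < budget:
    candidates = [j for j in range(H) if j not in selected]
    best = min(candidates, key=lambda j: loss(selected + [j]))
    selected.append(best)

print("selected neurons:", selected, "final loss:", loss(selected))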

Efficient Algorithms for Outlier-Robust Regression

no code implementations 8 Mar 2018 Adam Klivans, Pravesh K. Kothari, Raghu Meka

We give the first polynomial-time algorithm for performing linear or polynomial regression resilient to adversarial corruptions in both examples and labels.

regression

Learning One Convolutional Layer with Overlapping Patches

no code implementations ICML 2018 Surbhi Goel, Adam Klivans, Raghu Meka

We give the first provably efficient algorithm for learning a one hidden layer convolutional network with respect to a general class of (potentially overlapping) patches.

Learning Neural Networks with Two Nonlinear Layers in Polynomial Time

no code implementations 18 Sep 2017 Surbhi Goel, Adam Klivans

We give a polynomial-time algorithm for learning neural networks with one layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU).

Learning Theory · PAC learning +1
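The function class referred to above can be written down directly; the following sketch only defines such a network with arbitrary made-up weights (one layer of sigmoids feeding a monotone, Lipschitz output activation) and is not the learning algorithm from the paper.

# Definition of the function class studied above (not the learning algorithm):
# a layer of sigmoids whose weighted sum feeds one Lipschitz, monotone activation.
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
W = rng.normal(size=(k, d))          # first-layer weights (arbitrary, for illustration)
a = rng.normal(size=k)               # second-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f(x, outer=np.tanh):             # outer activation: any monotone, 1-Lipschitz function
    return outer(a @ sigmoid(W @ x))

print(f(rng.normal(size=d)))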

Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks

no code implementations NeurIPS 2017 Surbhi Goel, Adam Klivans

In this work we show that a natural distributional assumption corresponding to eigenvalue decay of the Gram matrix yields polynomial-time algorithms in the non-realizable setting for expressive classes of networks (e.g., feed-forward networks of ReLUs).
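To make the assumption concrete, the sketch below builds a kernel Gram matrix on synthetic Gaussian data (the RBF kernel, bandwidth, and data are arbitrary illustrative choices) and inspects how quickly its eigenvalues decay; it is only a visualization aid, not the paper's learning algorithm.

# Illustration of "eigenvalue decay of the Gram matrix" (not the paper's algorithm):
# build a kernel Gram matrix on synthetic data and inspect its spectrum.
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 10
X = rng.normal(size=(n, d))                       # synthetic inputs (assumption)

sq = np.sum(X ** 2, axis=1)
dists = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
K = np.exp(-dists / (2.0 * d))                    # RBF Gram matrix, bandwidth ~ sqrt(d)

eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]
print("top 10 eigenvalues:", np.round(eigvals[:10], 3))
print("fraction of spectrum in top 10:", eigvals[:10].sum() / eigvals.sum())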

Learning Graphical Models Using Multiplicative Weights

no code implementations 20 Jun 2017 Adam Klivans, Raghu Meka

Our main application is an algorithm for learning the structure of t-wise MRFs with nearly-optimal sample complexity (up to polynomial losses in necessary terms that depend on the weights) and running time that is $n^{O(t)}$.

Hyperparameter Optimization: A Spectral Approach

1 code implementation ICLR 2018 Elad Hazan, Adam Klivans, Yang Yuan

In particular, we obtain the first quasi-polynomial time algorithm for learning noisy decision trees with polynomial sample complexity.

Bayesian Optimization · Hyperparameter Optimization
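As a rough illustration of the spectral approach named above (a toy sparse-recovery example with a made-up objective; the degree bound, data, and use of scikit-learn's Lasso are assumptions, not the paper's exact procedure), one can expand Boolean hyperparameter settings into low-degree parity features and fit a sparse linear model to find the influential variables and interactions:

# Toy illustration of the spectral idea (not the paper's exact procedure):
# expand Boolean settings into degree-<=2 parity features and fit a sparse model.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 400, 12
X = rng.choice([-1.0, 1.0], size=(n, d))                     # random hyperparameter settings

def target(x):                                               # hypothetical noisy objective that
    return 2 * x[0] * x[3] - x[5] + 0.1 * rng.normal()       # depends on a few interacting flags

y = np.array([target(x) for x in X])

subsets = [()] + [(i,) for i in range(d)] + list(combinations(range(d), 2))
Phi = np.array([[np.prod(x[list(S)]) if S else 1.0 for S in subsets] for x in X])

model = Lasso(alpha=0.05, fit_intercept=False).fit(Phi, y)   # sparse "Fourier" coefficients
important = [subsets[j] for j in np.flatnonzero(np.abs(model.coef_) > 0.2)]
print("recovered influential subsets:", important)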

Exact MAP Inference by Avoiding Fractional Vertices

no code implementations ICML 2017 Erik M. Lindgren, Alexandros G. Dimakis, Adam Klivans

We require that the number of fractional vertices of the LP relaxation whose objective value exceeds that of the optimal integral solution is bounded by a polynomial in the problem size.

Open-Ended Question Answering
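For context, the LP relaxation referred to above is, in the standard pairwise formulation (stated generically here, not quoted from the paper), the problem of maximizing $\sum_i \sum_{x_i} \theta_i(x_i)\, \mu_i(x_i) + \sum_{(i,j)} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\, \mu_{ij}(x_i, x_j)$ over the local polytope $\{\mu \ge 0,\ \sum_{x_i} \mu_i(x_i) = 1,\ \sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i)\}$; its integral vertices correspond to MAP assignments, and the fractional vertices are the objects whose number the paper's condition bounds.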

Reliably Learning the ReLU in Polynomial Time

no code implementations 30 Nov 2016 Surbhi Goel, Varun Kanade, Adam Klivans, Justin Thaler

These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong assumptions are required on the marginal distribution.

Sparse Polynomial Learning and Graph Sketching

no code implementations NeurIPS 2014 Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis, Adam Klivans

We give an algorithm for exactly reconstructing an $s$-sparse polynomial $f$ given random examples from the uniform distribution on $\{-1, 1\}^n$ that runs in time polynomial in $n$ and $2^s$ and succeeds if the function satisfies the unique sign property: there is one output value which corresponds to a unique set of values of the participating parities.
