Search Results for author: Lucas Liebenwein

Found 13 papers, 7 papers with code

Pruning by Active Attention Manipulation

no code implementations20 Oct 2022 Zahra Babaiee, Lucas Liebenwein, Ramin Hasani, Daniela Rus, Radu Grosu

On the CIFAR-10 dataset, without requiring a pre-trained baseline network, we obtain accuracy gains of 1.02% and 1.19% and parameter reductions of 52.3% and 54% on ResNet56 and ResNet110, respectively.

End-to-End Sensitivity-Based Filter Pruning

no code implementations15 Apr 2022 Zahra Babaiee, Lucas Liebenwein, Ramin Hasani, Daniela Rus, Radu Grosu

Moreover, by training the pruning scores of all layers simultaneously, our method can account for layer interdependencies, which is essential to finding a performant sparse sub-network.

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition

2 code implementations NeurIPS 2021 Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, Daniela Rus

We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression.

Low-rank compression
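
For orientation, here is a minimal sketch of low-rank compression of a single linear layer via truncated SVD. The fixed rank below is a placeholder; the paper's framework is about selecting the per-layer compression ratios automatically and globally, which this sketch does not do.

```python
# Illustrative only: truncated-SVD low-rank factorization of one linear layer.
# The rank is hard-coded here; the paper's contribution is choosing per-layer
# ranks globally to hit a desired overall compression.
import torch
import torch.nn as nn

def low_rank_factorize(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace W (out x in) with two factors of shapes (rank x in) and (out x rank)."""
    W = layer.weight.data                 # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]          # (out, rank), singular values absorbed
    V_r = Vh[:rank, :]                    # (rank, in)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

layer = nn.Linear(512, 256)
compressed = low_rank_factorize(layer, rank=32)
x = torch.randn(8, 512)
print((layer(x) - compressed(x)).abs().max())  # approximation error
```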

Closed-form Continuous-time Neural Models

1 code implementation25 Jun 2021 Ramin Hasani, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron Ray, Max Tschaikowski, Gerald Teschl, Daniela Rus

To this end, we compute a tightly bounded approximation of the solution of an integral appearing in LTCs' dynamics, which previously had no known closed-form solution.

Sentiment Analysis · Time Series Prediction

Sparse Flows: Pruning Continuous-depth Models

1 code implementation NeurIPS 2021 Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus

Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.

Low-Regret Active Learning

no code implementations6 Apr 2021 Cenk Baykal, Lucas Liebenwein, Dan Feldman, Daniela Rus

We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning).

Active Learning · Informativeness
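
To illustrate the setting only (not the paper's low-regret online algorithm), here is a generic pool-based uncertainty-sampling loop; the dataset, seed-set size, and query budget are arbitrary.

```python
# Generic pool-based uncertainty sampling, shown only to illustrate the
# active-learning setup; it is NOT the low-regret algorithm from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(20))                      # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(10):                            # 10 acquisition rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)      # least-confident predictions
    picked = np.argsort(uncertainty)[-25:]     # query 25 labels per round
    labeled += [pool[i] for i in picked]
    pool = [i for j, i in enumerate(pool) if j not in set(picked)]

print("labeled points:", len(labeled), "accuracy:", model.score(X, y))
```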

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy

1 code implementation4 Mar 2021 Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus

Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks.

Network Pruning
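
As a stand-in for the pruned models studied in this kind of analysis, the sketch below produces a globally magnitude-pruned network with PyTorch's built-in pruning utilities; it is not the pruning pipeline used in the paper.

```python
# Minimal way to obtain a magnitude-pruned network, using PyTorch's built-in
# pruning utilities; only a stand-in for the paper's own pruning pipelines.
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

model = resnet18(num_classes=10)
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]

# remove the 80% smallest-magnitude weights globally across all layers
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.8)

# make the pruning permanent (bake the masks into the weights)
for module, name in to_prune:
    prune.remove(module, name)

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.2%}")
```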

Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space

1 code implementation19 Feb 2021 Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Lucas Liebenwein, Ryan Sander, Sertac Karaman, Daniela Rus

We demonstrate the effectiveness of our algorithm in learning competitive behaviors on a novel multi-agent racing benchmark that requires planning from image observations.

Reinforcement Learning (RL)

Machine Learning-based Estimation of Forest Carbon Stocks to increase Transparency of Forest Preservation Efforts

no code implementations17 Dec 2019 Björn Lütjens, Lucas Liebenwein, Katharina Kramer

LiDAR-based solutions, used in US forests, are accurate but cost-prohibitive and hardly accessible in the Amazon rainforest.

BIG-bench Machine Learning

Provable Filter Pruning for Efficient Neural Networks

2 code implementations ICLR 2020 Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus

We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network.
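
The sketch below illustrates only the structured-pruning setting: whole filters of a convolution are scored and removed, here with l1 norms as a stand-in importance measure rather than the paper's data-informed, provable sampling scheme.

```python
# Generic structured filter pruning on one conv layer, keeping the filters
# with the largest l1 norms. The score is a stand-in; the paper instead
# samples filters via data-informed sensitivities with error guarantees.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.data.abs().sum(dim=(1, 2, 3))   # l1 norm per filter
    keep = torch.argsort(scores, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
small = prune_conv_filters(conv, keep_ratio=0.5)
print(small)  # downstream layers must be adjusted to the new channel count
```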

SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks

2 code implementations11 Oct 2019 Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus

We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy.

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds

no code implementations ICLR 2019 Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus

We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output.

Generalization Bounds · Neural Network Compression

Training Support Vector Machines using Coresets

no code implementations13 Aug 2017 Cenk Baykal, Lucas Liebenwein, Wilko Schwarting

We present a novel coreset construction algorithm for solving classification tasks using Support Vector Machines (SVMs) in a computationally efficient manner.
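
The sketch below shows only the basic coreset idea, training the SVM on a small weighted subsample. Uniform sampling is a placeholder for the paper's sensitivity-based importance sampling and carries none of its approximation guarantees.

```python
# Basic coreset idea: fit the SVM on a small weighted subsample instead of
# the full dataset. Uniform sampling is a placeholder; the paper samples
# points by importance ("sensitivity") to obtain provable bounds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=50_000, n_features=30, random_state=0)

m = 2_000                                   # coreset size
idx = np.random.default_rng(0).choice(len(X), size=m, replace=False)
weights = np.full(m, len(X) / m)            # reweight so the subsample stands in for the full set

svm_full = LinearSVC(dual=False).fit(X, y)
svm_core = LinearSVC(dual=False).fit(X[idx], y[idx], sample_weight=weights)

print("full-data accuracy:", svm_full.score(X, y))
print("coreset accuracy:  ", svm_core.score(X, y))
```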
