no code implementations • 20 Oct 2022 • Zahra Babaiee, Lucas Liebenwein, Ramin Hasani, Daniela Rus, Radu Grosu
On the CIFAR-10 dataset, without requiring a pre-trained baseline network, we obtain accuracy gains of 1.02% and 1.19% and parameter reductions of 52.3% and 54% on ResNet56 and ResNet110, respectively.
no code implementations • 15 Apr 2022 • Zahra Babaiee, Lucas Liebenwein, Ramin Hasani, Daniela Rus, Radu Grosu
Moreover, by training the pruning scores of all layers simultaneously, our method can account for layer interdependencies, which is essential for finding a performant sparse sub-network.
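The jointly trained scores can be illustrated with a minimal sketch (an illustrative assumption, not the authors' released code): each convolutional layer carries one learnable score per filter, a sigmoid gate scales each filter's output, and a shared sparsity penalty lets the scores of all layers receive gradients at once so unimportant filters compete across layers.

```python
# Minimal sketch: per-filter pruning scores trained jointly across layers.
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.score = nn.Parameter(torch.zeros(out_ch))  # one score per filter

    def forward(self, x):
        gate = torch.sigmoid(self.score)                # soft keep-probability
        return self.conv(x) * gate.view(1, -1, 1, 1)

net = nn.Sequential(GatedConv(3, 16), nn.ReLU(), GatedConv(16, 32))
out = net(torch.randn(2, 3, 8, 8))

# Task loss plus an L1 penalty on the gates of *all* layers at once; the
# penalty weight (1e-3) is a hypothetical hyperparameter.
sparsity = sum(torch.sigmoid(m.score).sum() for m in net if isinstance(m, GatedConv))
loss = out.pow(2).mean() + 1e-3 * sparsity
loss.backward()  # scores of every layer get gradients simultaneously
```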
2 code implementations • NeurIPS 2021 • Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, Daniela Rus
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression.
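One simple way to realize the per-layer allocation idea, sketched below under the assumption of a plain magnitude-style importance score (the paper's actual analysis is considerably more refined), is to binary-search a single global importance threshold until the per-layer keep ratios it induces meet the overall compression target.

```python
# Minimal sketch: derive per-layer keep ratios from one overall budget.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-layer importance scores (e.g., weight magnitudes).
layers = [np.abs(rng.normal(size=n)) for n in (1000, 4000, 2000)]
target_keep = 0.5  # keep 50% of all parameters overall (assumption)
total = sum(s.size for s in layers)

lo, hi = 0.0, max(s.max() for s in layers)
for _ in range(50):                 # binary search on the global threshold
    thr = (lo + hi) / 2
    kept = sum((s > thr).sum() for s in layers)
    if kept > target_keep * total:  # too many kept -> raise the threshold
        lo = thr
    else:
        hi = thr

ratios = [float((s > thr).mean()) for s in layers]
print("per-layer keep ratios:", [round(r, 3) for r in ratios])
```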
1 code implementation • 25 Jun 2021 • Ramin Hasani, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron Ray, Max Tschaikowski, Gerald Teschl, Daniela Rus
To this end, we compute a tightly bounded approximation of the solution of an integral appearing in LTCs' dynamics that previously had no known closed-form solution.
Ranked #36 on Sentiment Analysis on IMDb
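The closed-form update in this line of work blends two network heads with a sigmoid gate of a learned decay times the elapsed time. The sketch below is a NumPy rendering of that update under assumed layer sizes and head definitions, not the released model.

```python
# Minimal sketch of a closed-form continuous-time (CfC-style) update:
# x(t) = sigma(-f * t) * g + (1 - sigma(-f * t)) * h
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8  # illustrative sizes (assumptions)
Wf, Wg, Wh = (rng.normal(scale=0.1, size=(d_h, d_in + d_h)) for _ in range(3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_step(x, inp, t):
    """One update: time-dependent blend of two heads, gated by sigmoid(-f*t)."""
    z = np.concatenate([x, inp])
    f, g, h = np.tanh(Wf @ z), np.tanh(Wg @ z), np.tanh(Wh @ z)
    gate = sigmoid(-f * t)
    return gate * g + (1.0 - gate) * h

x = np.zeros(d_h)
for t in np.linspace(0.1, 1.0, 10):  # irregular time gaps are handled directly
    x = cfc_step(x, rng.normal(size=d_in), t)
print(x.round(3))
```

Because the state is evaluated in closed form at any elapsed time t, no numerical ODE solver is needed at inference time.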
1 code implementation • NeurIPS 2021 • Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.
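A minimal sketch of what pruning a neural ODE can look like (illustrative assumptions throughout, not the paper's setup): the vector field is a small MLP whose weights are magnitude-pruned to a target sparsity, and the ODE is integrated with plain Euler steps so no solver library is required.

```python
# Minimal sketch: magnitude-pruned vector field inside a toy neural ODE.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(scale=0.5, size=(16, 2)), rng.normal(scale=0.5, size=(2, 16))

def prune(W, sparsity=0.8):  # 80% sparsity is an assumed target
    thr = np.quantile(np.abs(W), sparsity)
    return W * (np.abs(W) > thr)

W1p, W2p = prune(W1), prune(W2)

def vector_field(z):
    return W2p @ np.tanh(W1p @ z)    # pruned MLP defines dz/dt

z = np.array([1.0, 0.0])
for _ in range(100):                 # explicit Euler integration of dz/dt
    z = z + 0.01 * vector_field(z)
print(z.round(3))
```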
no code implementations • 6 Apr 2021 • Cenk Baykal, Lucas Liebenwein, Dan Feldman, Daniela Rus
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning).
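As a point of reference, the sketch below shows generic uncertainty sampling (not the paper's specific online algorithm): unlabeled points are scored by the predictive entropy of the current model, and the highest-entropy points are queried for labels.

```python
# Minimal sketch: query the unlabeled points with highest predictive entropy.
import numpy as np

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 5))      # unlabeled pool (toy data)
w = rng.normal(size=5)                  # current model weights (hypothetical)

p = 1.0 / (1.0 + np.exp(-X_pool @ w))   # predicted P(y=1 | x)
eps = 1e-12
entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

query_idx = np.argsort(-entropy)[:10]   # request labels for the top 10
print("query indices:", query_idx)
```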
1 code implementation • 4 Mar 2021 • Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus
Neural network pruning is a popular technique for reducing the inference costs of modern, potentially overparameterized networks.
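For concreteness, here is a minimal sketch of standard global magnitude pruning using PyTorch's built-in utilities; the toy model and the 90% sparsity level are assumptions for illustration, not the paper's benchmark configuration.

```python
# Minimal sketch: global L1 magnitude pruning with torch.nn.utils.prune.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Prune the 90% smallest-magnitude weights across both layers jointly.
params = [(model[0], "weight"), (model[2], "weight")]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.9)

total = sum(m.weight.numel() for m, _ in params)
zeros = sum((m.weight == 0).sum().item() for m, _ in params)
print(f"sparsity: {zeros / total:.2%}")  # ~90% of weights are now zero
```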
1 code implementation • 19 Feb 2021 • Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Lucas Liebenwein, Ryan Sander, Sertac Karaman, Daniela Rus
We demonstrate the effectiveness of our algorithm in learning competitive behaviors on a novel multi-agent racing benchmark that requires planning from image observations.
no code implementations • 17 Dec 2019 • Björn Lütjens, Lucas Liebenwein, Katharina Kramer
LiDAR-based solutions, used in US forests, are accurate but cost-prohibitive and hardly accessible in the Amazon rainforest.
2 code implementations • ICLR 2020 • Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus
We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network.
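The core sampling idea can be sketched as follows (an illustration, not the paper's provable procedure): estimate each filter's importance, sample filters with probability proportional to importance, and discard the rest. A data-dependent sensitivity estimate could be used for the importance score; the sketch falls back to filter norms to stay self-contained.

```python
# Minimal sketch: sampling-based filter pruning by importance.
import numpy as np

rng = np.random.default_rng(0)
n_filters, keep = 64, 24                            # budget is an assumption
filters = rng.normal(size=(n_filters, 3, 3, 3))     # toy conv layer

importance = np.linalg.norm(filters.reshape(n_filters, -1), axis=1)
probs = importance / importance.sum()

kept = rng.choice(n_filters, size=keep, replace=False, p=probs)
pruned = filters[np.sort(kept)]
print("kept", pruned.shape[0], "of", n_filters, "filters")
```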
2 code implementations • 11 Oct 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy.
no code implementations • ICLR 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output.
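In the spirit of such coreset-based sparsification (though not the paper's exact sampler), the sketch below sparsifies one neuron's incoming weights: weights are sampled with probability proportional to magnitude and the kept entries are reweighted by inverse inclusion probability, so the pre-activation remains approximately unbiased.

```python
# Minimal sketch: importance-sampled, unbiased sparsification of one neuron.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256)                 # one neuron's incoming weights
m = 64                                   # sample budget (assumption)

q = np.abs(w) / np.abs(w).sum()          # sampling distribution
counts = rng.multinomial(m, q)           # draw m samples with replacement
w_sparse = np.where(counts > 0, counts * w / (m * q), 0.0)  # E[w_sparse] = w

x = rng.normal(size=256)
print("dense:", float(w @ x), " sparse approx:", float(w_sparse @ x))
print("nonzeros kept:", int((w_sparse != 0).sum()), "of", w.size)
```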
no code implementations • 13 Aug 2017 • Cenk Baykal, Lucas Liebenwein, Wilko Schwarting
We present a novel coreset construction algorithm for solving classification tasks using Support Vector Machines (SVMs) in a computationally efficient manner.
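A generic importance-sampling coreset conveys the flavor of this approach (the paper's construction and sensitivity bounds are specific to SVMs and not reproduced here): sample points with probability proportional to a crude sensitivity proxy, reweight them by inverse probability, and train a weighted SVM on the small subset.

```python
# Minimal sketch: train an SVM on a small, reweighted coreset of the data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Sensitivity proxy (an assumption): points near the midpoint of the class
# means, i.e. near the likely decision boundary, are sampled more often.
mid = (X[y == 0].mean(0) + X[y == 1].mean(0)) / 2
sens = 1.0 / (np.linalg.norm(X - mid, axis=1) + 1e-6)
p = sens / sens.sum()

m = 200                                   # coreset size (assumption)
idx = rng.choice(len(X), size=m, replace=False, p=p)
weights = 1.0 / (p[idx] * len(X))         # inverse-probability weights

svm = SVC(kernel="linear").fit(X[idx], y[idx], sample_weight=weights)
print("accuracy on full data:", svm.score(X, y))
```

Training on 200 weighted points instead of 5,000 illustrates the computational benefit; the reweighting keeps the subset objective an approximation of the full one.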