1 code implementation • 7 Jan 2024 • Jiatai Tong, Junyang Cai, Thiago Serra
Besides training, mathematical optimization is also used in deep learning to model and solve formulations over trained neural networks for purposes such as verification, compression, and optimization with learned constraints.
no code implementations • 27 Dec 2023 • Fabian Badilla, Marcos Goycoolea, Gonzalo Muñoz, Thiago Serra
The use of Mixed-Integer Linear Programming (MILP) models to represent neural networks with Rectified Linear Unit (ReLU) activations has become increasingly widespread in the last decade.
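The core of such MILP models is a big-M encoding of each ReLU neuron. The sketch below (a minimal illustration in pure Python, with no solver; the constant `M` and the tolerance are illustrative assumptions) enumerates the binary variable by hand and checks that the constraints pin the neuron's output to its ReLU value:

```python
def relu_bigm_feasible_outputs(a, M=100.0, tol=1e-9):
    """Enumerate the outputs y feasible under the standard big-M MILP
    encoding of y = max(0, a), where a = w.x + b is the pre-activation:
        y >= a,  y >= 0,  y <= a + M*(1 - z),  y <= M*z,  z in {0, 1}.
    For each binary choice of z, the constraints collapse the feasible
    interval for y to a single point (when that choice is feasible)."""
    outputs = set()
    for z in (0, 1):
        lo = max(a, 0.0)                  # lower bounds: y >= a, y >= 0
        hi = min(a + M * (1 - z), M * z)  # upper bounds from the two big-M rows
        if lo <= hi + tol:
            outputs.add(round(lo, 9))     # interval collapses to one value
    return outputs

# Every feasible output matches the ReLU value, active or inactive.
for a in (-3.0, -0.5, 0.0, 0.7, 5.0):
    assert relu_bigm_feasible_outputs(a) == {round(max(a, 0.0), 9)}
```

In a real model, `M` would be replaced by valid pre-activation bounds, which is exactly where the strength of the formulation comes from.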
no code implementations • 29 Apr 2023 • Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay
In the past decade, deep learning has become the prevalent methodology for predictive modeling, thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing.
no code implementations • 19 Jan 2023 • Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra
One surprising trait of neural networks is the extent to which their connections can be pruned with little to no effect on accuracy.
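As a point of reference for how far connections can be pruned, the following sketch implements global magnitude pruning, a common baseline in this literature (this is an illustrative baseline, not necessarily the criterion studied in the paper):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of entries in a weight
    array (global magnitude pruning, a standard pruning baseline)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep strictly larger entries
    return weights * mask

# Prune half the weights of a random 4x4 layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
```

After pruning, accuracy is typically measured again, often after a short fine-tuning pass.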
no code implementations • 7 Jun 2022 • Aidan Good, Jiaqi Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, Jerzy Wieczorek, Thiago Serra
In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model.
no code implementations • 28 May 2022 • Alexandre M. Florio, Pedro Martins, Maximilian Schiffer, Thiago Serra, Thibaut Vidal
Decision diagrams for classification have some notable advantages over decision trees, as their internal connections can be determined at training time and their width is not bound to grow exponentially with their depth.
1 code implementation • 9 Mar 2022 • Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
We propose a tractable heuristic for solving the combinatorial extension of OBS, in which we select weights for simultaneous removal, as well as a systematic update of the remaining weights.
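The combinatorial extension generalizes the classic single-weight Optimal Brain Surgeon step, which the sketch below illustrates (a minimal sketch under a local quadratic loss model; the paper's heuristic selects several weights jointly rather than one at a time):

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One step of Optimal Brain Surgeon (Hassibi & Stork): under a
    quadratic model of the loss with inverse Hessian H_inv, remove the
    weight with the smallest saliency w_q^2 / (2 [H^-1]_qq) and update
    the surviving weights to compensate for its removal."""
    saliency = w ** 2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(saliency))
    w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]  # compensating update
    w_new[q] = 0.0  # force an exact zero despite floating-point error
    return w_new, q

# With an identity Hessian, the update reduces to zeroing the smallest weight.
w_new, q = obs_prune_one(np.array([1.0, -0.1, 2.0]), np.eye(3))
```

Selecting a whole set of weights for simultaneous removal makes the problem combinatorial, which is what motivates a tractable heuristic.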
1 code implementation • 30 Jan 2022 • Carles Riera, Camilo Rey, Thiago Serra, Eloi Puertas, Oriol Pujol
Neural networks are more expressive when they have multiple layers.
1 code implementation • NeurIPS 2021 • Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam
We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable.
no code implementations • 1 Jan 2020 • Thiago Serra, Abhinav Kumar, Srikumar Ramalingam
Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy.
no code implementations • 27 May 2019 • Abhinav Kumar, Thiago Serra, Srikumar Ramalingam
On the practical side, we show that certain rectified linear units (ReLUs) can be safely removed from a network if they are always active or inactive for any valid input.
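One cheap way to certify that a unit is always active or always inactive over a box-shaped input domain is interval arithmetic on the pre-activations, sketched below (the papers use tighter MILP- or LP-based bounds; this is only the simplest bounding scheme):

```python
import numpy as np

def relu_stability(W, b, x_lo, x_hi):
    """Interval-arithmetic bounds on pre-activations a = W x + b for
    x in [x_lo, x_hi]. A unit is stably inactive (safe to remove) if its
    upper bound is <= 0, stably active (foldable into an affine map) if
    its lower bound is >= 0, and unstable otherwise."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = W_pos @ x_lo + W_neg @ x_hi + b  # worst case for each unit
    hi = W_pos @ x_hi + W_neg @ x_lo + b  # best case for each unit
    return np.where(hi <= 0, "inactive", np.where(lo >= 0, "active", "unstable"))

# Three units over the input box [0, 1]^2: active, inactive, and unstable.
W = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
b = np.array([0.0, -0.5, 0.0])
labels = relu_stability(W, b, np.zeros(2), np.ones(2))
```

Stably inactive units can be deleted outright; stably active ones can be merged into the surrounding affine layers without changing the network's function on the domain.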
no code implementations • ICLR 2019 • Thiago Serra, Srikumar Ramalingam
Our first contribution is a method to sample the activation patterns defined by ReLUs using universal hash functions.
no code implementations • 17 Jun 2018 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
The holy grail of deep learning is to come up with an automatic method to design optimal architectures for different applications.
no code implementations • 6 Nov 2017 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions.
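A common measure of this complexity is the number of linear pieces, which can be lower-bounded by counting distinct ReLU activation patterns over sampled inputs. The toy sketch below assumes a random two-layer network and uniform sampling, purely for illustration:

```python
import numpy as np

def sampled_activation_patterns(weights, biases, X):
    """Count distinct ReLU activation patterns a small network realizes
    on a set of sampled inputs X. Each distinct pattern corresponds to
    one linear piece of the network's piecewise linear function, so the
    count is a lower bound on the true number of linear regions."""
    patterns = set()
    for x in X:
        h, pattern = x, []
        for W, b in zip(weights, biases):
            pre = W @ h + b
            pattern.extend(pre > 0)       # record which units fire
            h = np.maximum(pre, 0.0)      # ReLU
        patterns.add(tuple(pattern))
    return len(patterns)

# A random 2-8-8 network over inputs sampled from [-1, 1]^2.
rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 2)), rng.normal(size=(8, 8))]
biases = [rng.normal(size=8), rng.normal(size=8)]
X = rng.uniform(-1, 1, size=(2000, 2))
n = sampled_activation_patterns(weights, biases, X)
```

Exhaustive counting requires enumerating patterns with an MILP rather than sampling, since sampling can only miss regions, never overcount them.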