Search Results for author: Matteo Fischetti

Found 4 papers, 2 papers with code

Revisiting local branching with a machine learning lens

1 code implementation • 3 Dec 2021 • Defeng Liu, Matteo Fischetti, Andrea Lodi

In this work, we study the relationship between the size of the search neighborhood and the behavior of the underlying LB algorithm, and we devise a learning-based framework for predicting the best size for the specific instance to be solved.

BIG-bench Machine Learning
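Local branching restricts the search to solutions within a Hamming-style distance k of a reference solution; the abstract concerns predicting the best k per instance. A minimal sketch of the standard LB distance (the function name and toy vectors are illustrative, not from the paper):

```python
def local_branching_distance(x, x_ref):
    """Distance used in the local branching (LB) constraint for binary vars:
    sum over j with x_ref[j] == 0 of x[j]  +  sum over j with x_ref[j] == 1 of (1 - x[j]).
    The LB neighborhood of size k contains all binary x with distance <= k."""
    return sum(xj if xr == 0 else 1 - xj for xj, xr in zip(x, x_ref))

# Flipping two coordinates of the reference solution gives distance 2,
# so this candidate lies inside any LB neighborhood with k >= 2.
x_ref = [1, 0, 0, 1]
x     = [0, 0, 1, 1]   # flips coordinates 0 and 2
d = local_branching_distance(x, x_ref)
```

Adding this distance as a linear constraint (<= k) to the MIP is what defines the neighborhood whose size the paper's framework learns to predict.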

Embedded hyper-parameter tuning by Simulated Annealing

2 code implementations • 4 Jun 2019 • Matteo Fischetti, Matteo Stringher

We propose a new metaheuristic training scheme that combines Stochastic Gradient Descent (SGD) and Discrete Optimization in an unconventional way.

Image Classification
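The snippet does not detail how SGD and Simulated Annealing are combined; as background, a generic SA loop (accept a worsening move with probability exp(-delta/T) under a geometric cooling schedule) can be sketched as follows, here minimizing a toy quadratic rather than a training loss:

```python
import math
import random

def simulated_annealing(loss, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Generic Simulated Annealing: propose a small random perturbation and
    accept it with probability exp(-delta / T); T decays geometrically."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.1, 0.1)
        delta = loss(cand) - loss(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if loss(x) < loss(best):
                best = x
        t *= cooling
    return best

# Toy objective with minimum at x = 3; best should end up close to 3.0.
best = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0)
```

How the paper embeds this acceptance rule into SGD training is its contribution; the loop above only illustrates the SA ingredient.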

Faster SGD training by minibatch persistency

no code implementations • 19 Jun 2018 • Matteo Fischetti, Iacopo Mandatelli, Domenico Salvagnin

It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization.
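The title suggests re-using samples across consecutive minibatches ("persistency"); the snippet itself does not spell out the scheme. A hedged sketch of one plausible reading, where a fixed fraction of each minibatch is carried over to the next (all names and the keep fraction are assumptions, not from the paper):

```python
import random

def persistent_minibatches(dataset, batch_size, keep_frac=0.25, n_batches=3, seed=0):
    """Minibatch persistency (sketch): at each SGD iteration, re-use a fixed
    fraction of the previous minibatch and draw the remainder fresh, so
    consecutive minibatches overlap."""
    rng = random.Random(seed)
    keep = int(batch_size * keep_frac)
    batch = rng.sample(dataset, batch_size)
    batches = [batch]
    for _ in range(n_batches - 1):
        carried = rng.sample(batch, keep)          # persistent samples
        fresh = rng.sample([d for d in dataset if d not in carried],
                           batch_size - keep)      # newly drawn samples
        batch = carried + fresh
        batches.append(batch)
    return batches

bs = persistent_minibatches(list(range(100)), batch_size=8)
overlap = len(set(bs[0]) & set(bs[1]))  # at least the carried samples
```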

Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study

no code implementations • 17 Dec 2017 • Matteo Fischetti, Jason Jo

A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero.
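ReLU(x) = max(x, 0) is the nonlinearity that makes a trained network expressible as a 0-1 MILP: each ReLU can be modeled with a binary indicator and big-M constraints, assuming a known bound M on the input. A minimal sketch of that standard encoding (the helper names are illustrative):

```python
def relu(x):
    """Rectified linear unit: the maximum between the input value and zero."""
    return max(x, 0.0)

def check_bigM_encoding(x, y, z, M=100.0, eps=1e-9):
    """Check the standard big-M MILP encoding of y = relu(x), assuming
    |x| <= M and a binary indicator z (z = 1 when the unit is active):
        y >= x,  y >= 0,  y <= x + M*(1 - z),  y <= M*z.
    """
    return (y >= x - eps and y >= -eps
            and y <= x + M * (1 - z) + eps
            and y <= M * z + eps)

# (x, relu(x), indicator) triples satisfy all four linear constraints.
active = check_bigM_encoding(2.5, relu(2.5), 1)
inactive = check_bigM_encoding(-4.0, relu(-4.0), 0)
```

With z binary, the four linear constraints force y = max(x, 0), which is what lets a MILP solver reason exactly over the network's outputs.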
