1 code implementation • 3 Dec 2021 • Defeng Liu, Matteo Fischetti, Andrea Lodi
In this work, we study the relation between the size of the search neighborhood and the behavior of the underlying local branching (LB) algorithm, and we devise a learning-based framework for predicting the best neighborhood size for the specific instance to be solved.
2 code implementations • 4 Jun 2019 • Matteo Fischetti, Matteo Stringher
We propose a new metaheuristic training scheme that combines Stochastic Gradient Descent (SGD) and Discrete Optimization in an unconventional way.
no code implementations • 19 Jun 2018 • Matteo Fischetti, Iacopo Mandatelli, Domenico Salvagnin
It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization.
no code implementations • 17 Dec 2017 • Matteo Fischetti, Jason Jo
A commonly used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is simply the maximum of its input value and zero.
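As a minimal illustration (not taken from the paper), the ReLU operator can be sketched in a few lines of NumPy:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: elementwise max(x, 0)."""
    return np.maximum(x, 0.0)

# Negative inputs are clipped to zero; nonnegative inputs pass through unchanged.
print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))
```

Because ReLU is piecewise linear, networks built from affine layers and ReLUs are themselves piecewise-linear functions, which is what makes them amenable to mixed-integer programming formulations.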