no code implementations • 5 May 2023 • Philippe Carvalho, Alexandre Durupt, Yves Grandvalet
Industrial defect detection using machine learning and deep learning is an active field of research.
no code implementations • 23 Oct 2020 • Gabriel Frisch, Jean-Benoist Léger, Yves Grandvalet
Missing data can be informative.
no code implementations • 10 Aug 2020 • Abdelhak Loukkal, Yves Grandvalet, Tom Drummond, You Li
Camera-based end-to-end driving neural networks bring the promise of a low-cost system that maps camera images to driving control commands.
no code implementations • 13 Jul 2020 • Xuhong Li, Yves Grandvalet, Rémi Flamary, Nicolas Courty, Dejing Dou
We use optimal transport to quantify the match between two representations, yielding a distance that embeds some invariances inherent to the representation of deep networks.
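A minimal numpy sketch of the idea of comparing two representations with optimal transport. This is an illustrative Sinkhorn computation on toy feature matrices, not the paper's actual distance; the regularization value and data are assumptions.

```python
import numpy as np

def sinkhorn_ot(C, reg=0.1, n_iter=200):
    """Entropy-regularized OT cost between uniform marginals via Sinkhorn
    iterations; C is the pairwise cost matrix between the two point clouds."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # scale columns to match marginal b
        u = a / (K @ v)     # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]   # transport plan
    return float((P * C).sum())       # transport cost

# toy "representations": rows are feature vectors of two layers (hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Y = rng.normal(size=(5, 3))
cost_self = sinkhorn_ot(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
cost_cross = sinkhorn_ot(np.linalg.norm(X[:, None] - Y[None, :], axis=-1))
```

A representation matched against itself has near-zero transport cost, while a different representation incurs a strictly positive cost.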
3 code implementations • ICML 2018 • Xuhong Li, Yves Grandvalet, Franck Davoine
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
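This ICML 2018 work studies regularizers for fine-tuning. A minimal numpy sketch of one such idea, an L2 penalty that shrinks fine-tuned weights toward the pre-trained starting point rather than toward zero (the "-SP" style penalty from this line of work); the toy quadratic loss and all values below are illustrative assumptions.

```python
import numpy as np

def l2_sp_grad(w, w0, grad_loss, alpha=0.5):
    """Gradient of loss(w) + alpha * ||w - w0||^2, a penalty anchoring
    the weights w to the pre-trained weights w0 during fine-tuning."""
    return grad_loss + 2.0 * alpha * (w - w0)

w0 = np.array([1.0, -1.0])   # hypothetical pre-trained weights
t = np.array([2.0, 0.0])     # toy loss L(w) = ||w - t||^2
w = w0.copy()
for _ in range(500):
    w -= 0.1 * l2_sp_grad(w, w0, 2.0 * (w - t))
```

The minimizer of `||w - t||^2 + alpha * ||w - w0||^2` is `(t + alpha * w0) / (1 + alpha)`, a compromise between the task optimum and the pre-trained point, which is exactly the inductive bias the penalty encodes.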
no code implementations • ICLR 2018 • Xuhong Li, Yves Grandvalet, Franck Davoine
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
2 code implementations • 2 Jun 2015 • Alberto Garcia-Duran, Antoine Bordes, Nicolas Usunier, Yves Grandvalet
This paper tackles the problem of endogenous link prediction for Knowledge Base completion.
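To illustrate embedding-based link prediction for knowledge base completion, here is a tiny TransE-style scoring sketch: a triple (head, relation, tail) is plausible when the translated head embedding lands near the tail. The embeddings and the scoring function are generic illustrations, not this paper's specific model.

```python
import numpy as np

# TransE-style score: s(h, r, t) = -||e_h + e_r - e_t||; higher = more plausible.
# Hand-crafted toy embeddings (hypothetical), chosen so the true fact scores well.
emb = {
    "paris": np.array([0.0, 1.0]),
    "france": np.array([1.0, 1.0]),
    "capital_of": np.array([1.0, 0.0]),
}

def score(h, r, t):
    return -float(np.linalg.norm(emb[h] + emb[r] - emb[t]))

true_s = score("paris", "capital_of", "france")    # plausible triple
false_s = score("france", "capital_of", "paris")   # implausible triple
```

Link prediction then amounts to ranking candidate tails (or heads) by this score.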
no code implementations • 1 May 2015 • Shameem A Puthiya Parambath, Nicolas Usunier, Yves Grandvalet
We study the theoretical properties of a subset of non-linear performance measures, called pseudo-linear performance measures, which includes the $F$-measure and the Jaccard index, among many others.
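Pseudo-linearity means the measure is a ratio of two functions that are each linear in the error profile (TP, FP, FN). A short sketch showing the $F_\beta$-measure and the Jaccard index in that fractional-linear form (toy counts, for illustration):

```python
def f_beta(tp, fp, fn, beta=1.0):
    """F_beta written as a ratio of two linear functions of (tp, fp, fn) --
    this fractional-linear form is exactly what 'pseudo-linear' refers to."""
    num = (1.0 + beta**2) * tp
    den = (1.0 + beta**2) * tp + beta**2 * fn + fp
    return num / den

def jaccard(tp, fp, fn):
    """Jaccard index: also a ratio of linear functions of the error profile."""
    return tp / (tp + fp + fn)

f1 = f_beta(8, 2, 2)        # 16 / 20 = 0.8
jac = jaccard(8, 2, 2)      # 8 / 12
```

One immediate consequence of the form: scaling all counts by the same factor leaves the measure unchanged, e.g. `f_beta(16, 4, 4) == f_beta(8, 2, 2)`.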
no code implementations • NeurIPS 2014 • Shameem Puthiya Parambath, Nicolas Usunier, Yves Grandvalet
We present a theoretical analysis of F-measures for binary, multiclass and multilabel classification.
no code implementations • 7 Oct 2012 • Yves Grandvalet, Julien Chiquet, Christophe Ambroise
We illustrate on real and artificial datasets that this accuracy is required for the correctness of the support of the solution, an important element for the interpretability of sparsity-inducing penalties.
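To see why the support of the solution is the interpretable object, here is a generic lasso sketch solved by proximal gradient (ISTA): the soft-threshold step sets small coefficients exactly to zero, so the recovered support can be read directly off the iterate. The orthonormal toy design below is an illustrative assumption, not the paper's experimental setup.

```python
import numpy as np

def ista(X, y, lam, step=1.0, n_iter=100):
    """Proximal gradient for the lasso: min_w 0.5||Xw - y||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - y))                        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold prox
    return w

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(50, 5)))   # orthonormal design: clean closed form
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = Q @ w_true
w_hat = ista(Q, y, lam=1.0)
support = np.flatnonzero(np.abs(w_hat) > 1e-8)
```

With an orthonormal design the lasso solution is the soft-thresholded `w_true`, so the support {0, 3} is recovered exactly; an inaccurate solver would instead leave spurious small nonzeros that corrupt the support.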
no code implementations • NeurIPS 2008 • Yves Grandvalet, Alain Rakotomamonjy, Joseph Keshet, Stéphane Canu
We consider the problem of binary classification where the classifier may abstain instead of classifying each observation.
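A minimal sketch of classification with an abstention (reject) option, in the spirit of Chow's rule: predict the sign of the score when confident, abstain when the score falls inside a rejection band. The threshold value and encoding (0 for "abstain") are illustrative assumptions, not the paper's learned rejection rule.

```python
import numpy as np

def classify_with_reject(scores, delta=0.5):
    """Predict sign(score); abstain (encoded as 0) when |score| < delta,
    i.e. when the classifier is not confident enough to commit."""
    scores = np.asarray(scores, dtype=float)
    out = np.sign(scores)
    out[np.abs(scores) < delta] = 0.0
    return out

preds = classify_with_reject([1.2, -0.1, -2.0, 0.3], delta=0.5)
```

Abstaining trades coverage for accuracy: the two low-margin observations are rejected instead of being classified at high risk of error.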