no code implementations • 11 Mar 2022 • Thomas Verelst, Paul K. Rubenstein, Marcin Eichner, Tinne Tuytelaars, Maxim Berman
We show that adding a consistency loss, ensuring that the predictions of the network are consistent over consecutive training epochs, is a simple yet effective method to train multi-label classifiers in a weakly supervised setting.
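A minimal sketch of such a consistency term, assuming PyTorch, sigmoid scores stored from the previous epoch, and a standard multi-label BCE objective on the observed labels; the function and variable names are illustrative and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(current_logits, previous_probs):
    """Penalize divergence between this epoch's predictions and last epoch's stored scores."""
    return F.mse_loss(torch.sigmoid(current_logits), previous_probs)

# Hypothetical use inside a training loop (names are placeholders):
# loss = F.binary_cross_entropy_with_logits(logits, partial_labels, weight=label_mask)
# loss = loss + lambda_cons * consistency_loss(logits, stored_probs[batch_indices])
```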
no code implementations • 26 Oct 2020 • Tom Eelbode, Jeroen Bertels, Maxim Berman, Dirk Vandermeulen, Frederik Maes, Raf Bisschops, Matthew B. Blaschko
We verify these results empirically in an extensive validation on six medical segmentation tasks and confirm that metric-sensitive losses are superior to cross-entropy-based loss functions when evaluation is performed with the Dice score or the Jaccard index.
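For illustration, a minimal soft Dice loss in PyTorch, as one example of a metric-sensitive surrogate for binary segmentation; the tensor shapes (N, H, W) and the smoothing constant are assumptions, and this is not necessarily the exact formulation evaluated in the paper.

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    probs = torch.sigmoid(logits)
    dims = (1, 2)                                  # sum over spatial dimensions
    intersection = (probs * targets).sum(dims)
    cardinality = probs.sum(dims) + targets.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()                       # lower is better
```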
1 code implementation • CVPR 2020 • Maxim Berman, Leonid Pishchulin, Ning Xu, Matthew B. Blaschko, Gerard Medioni
We introduce a novel and efficient one-shot NAS approach to optimally search for channel numbers, given latency constraints on a specific hardware target.
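As a rough illustration only, a toy latency-constrained search over per-layer channel choices; the layer names, lookup-table latencies, and exhaustive enumeration are hypothetical, and this is far simpler than the one-shot NAS procedure the paper proposes.

```python
import itertools

width_choices = {"layer1": [16, 32, 64], "layer2": [32, 64, 128]}          # hypothetical search space
latency_table = {("layer1", 16): 0.8, ("layer1", 32): 1.3, ("layer1", 64): 2.1,
                 ("layer2", 32): 1.0, ("layer2", 64): 1.7, ("layer2", 128): 2.9}  # made-up latencies (ms)

def search(budget_ms, score_fn):
    """Return the best-scoring channel configuration whose estimated latency fits the budget."""
    best, best_score = None, float("-inf")
    for combo in itertools.product(*width_choices.values()):
        config = dict(zip(width_choices, combo))
        latency = sum(latency_table[(name, w)] for name, w in config.items())
        if latency <= budget_ms:
            score = score_fn(config)               # e.g. a validation proxy for this configuration
            if score > best_score:
                best, best_score = config, score
    return best
```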
no code implementations • 25 Nov 2019 • Maxim Berman, Matthew B. Blaschko
To keep such a model tractable, previous approaches have constrained the weight vector to be positive for pairwise potentials in which the labels differ, and have set the pairwise potentials to zero when the labels are the same.
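A minimal sketch of the constrained pairwise term described above: zero potential when neighbouring labels agree, and a non-negative (learned) cost when they differ. The feature and weight vectors are placeholders, not the paper's parameterization.

```python
import numpy as np

def pairwise_potential(label_i, label_j, weights, features_ij):
    """Energy contributed by one pair of neighbouring pixels."""
    if label_i == label_j:
        return 0.0                                 # same label: zero pairwise potential
    w_pos = np.maximum(weights, 0.0)               # positivity keeps the energy tractable
    return float(w_pos @ features_ij)
```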
1 code implementation • 5 Nov 2019 • Jeroen Bertels, Tom Eelbode, Maxim Berman, Dirk Vandermeulen, Frederik Maes, Raf Bisschops, Matthew Blaschko
First, we investigate the theoretical differences in a risk minimization framework and question whether a weighted cross-entropy loss exists whose weights are theoretically optimized to act as a surrogate for Dice or Jaccard.
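For reference, the weighted cross-entropy family in question can be written in a few lines of PyTorch; the tensor shapes (logits of size N×C×H×W, integer targets of size N×H×W) and the example class weights are assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, targets, class_weights):
    """Per-pixel cross-entropy with per-class weights."""
    return F.cross_entropy(logits, targets, weight=class_weights)

# Hypothetical example: weight a rare foreground class more heavily.
# loss = weighted_cross_entropy(logits, targets, torch.tensor([0.2, 0.8]))
```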
no code implementations • 23 Jul 2019 • Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, Devis Tuia
The latter approach is known as lifelong learning, where the model is updated so that it performs well on both old and new tasks, without access to the old tasks' training samples anymore.
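One common lifelong-learning strategy, shown here purely as an illustration of training without the old tasks' data, is to distill against a frozen copy of the previous model (Learning-without-Forgetting style); this is not necessarily the method proposed in the paper, and the head names in the usage comment are hypothetical.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits_old_head, old_logits, T=2.0):
    """Keep old-task outputs close to those of the frozen previous model (distillation)."""
    return F.kl_div(F.log_softmax(new_logits_old_head / T, dim=1),
                    F.softmax(old_logits / T, dim=1),
                    reduction="batchmean") * (T * T)

# old_model = copy.deepcopy(model).eval()          # frozen snapshot taken before the new task
# loss = new_task_loss + lambda_old * lwf_loss(model(x)["old_head"], old_model(x)["old_head"])
```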
no code implementations • 11 Mar 2019 • Thomas Verelst, Matthew Blaschko, Maxim Berman
Superpixel algorithms are a common pre-processing step for computer vision tasks such as segmentation, object tracking, and localization.
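A minimal example of such superpixel pre-processing with SLIC from scikit-image; the algorithm choice, sample image, and parameters are illustrative, not those used in this work.

```python
from skimage import data, segmentation

image = data.astronaut()                            # sample RGB image
segments = segmentation.slic(image, n_segments=200, compactness=10, start_label=0)
# `segments` assigns each pixel a superpixel id, usable as a pre-processing
# step before segmentation, tracking, or localization.
```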
3 code implementations • 14 Feb 2019 • Maxim Berman, Hervé Jégou, Andrea Vedaldi, Iasonas Kokkinos, Matthijs Douze
When fed to a linear classifier, the learned embeddings provide state-of-the-art classification accuracy.
Ranked #1 on Image Retrieval on INRIA Holidays
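A minimal sketch of the linear-classifier evaluation protocol mentioned above, using scikit-learn on frozen embeddings; the random features stand in for the network's embeddings and are not the paper's model or datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(1000, 512))     # stand-in for frozen network embeddings
train_labels = rng.integers(0, 10, size=1000)       # stand-in class labels

clf = LogisticRegression(max_iter=1000).fit(train_embeddings, train_labels)
print("train accuracy:", clf.score(train_embeddings, train_labels))
```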
no code implementations • 6 Sep 2018 • Maxim Berman, Matthew B. Blaschko, Amal Rannen Triki, Jiaqian Yu
This note is a response to [7] in which it is claimed that [13, Proposition 11] is false.
1 code implementation • 7 Jun 2018 • Mathijs Schuurmans, Maxim Berman, Matthew B. Blaschko
In this work, we evaluate the use of superpixel pooling layers in deep network architectures for semantic segmentation.
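A minimal sketch of superpixel average pooling, i.e. averaging a CNN feature map over the pixels of each superpixel; the shapes and the scatter-style implementation are assumptions, not the paper's layer.

```python
import torch

def superpixel_avg_pool(features, segments, n_segments):
    """features: (C, H, W) feature map; segments: (H, W) integer superpixel ids in [0, n_segments)."""
    C = features.shape[0]
    flat_feats = features.reshape(C, -1)            # (C, H*W)
    flat_ids = segments.reshape(-1)                 # (H*W,)
    sums = torch.zeros(C, n_segments).index_add_(1, flat_ids, flat_feats)
    counts = torch.zeros(n_segments).index_add_(0, flat_ids, torch.ones_like(flat_ids, dtype=torch.float))
    return sums / counts.clamp(min=1)               # (C, n_segments) pooled features
```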
no code implementations • 18 Oct 2017 • Amal Rannen Triki, Maxim Berman, Matthew B. Blaschko
Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems.
4 code implementations • CVPR 2018 • Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko
The Jaccard index, also referred to as the intersection-over-union score, is commonly employed in the evaluation of image segmentation results given its perceptual qualities, its scale invariance (which lends appropriate relevance to small objects), and its appropriate counting of false negatives, in comparison to per-pixel losses.
Ranked #34 on Semantic Segmentation on PASCAL VOC 2012 test
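For reference, a minimal computation of the Jaccard index for binary masks, the quantity the loss above is designed to optimize; this is only the metric itself, not the Lovász extension introduced in the paper.

```python
import numpy as np

def jaccard_index(pred_mask, gt_mask, eps=1e-9):
    """Intersection-over-union between two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (intersection + eps) / (union + eps)     # 1.0 means perfect overlap
```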