1 code implementation • 1 Mar 2024 • Alfred Nilsson, Klas Wijk, Sai Bharath Chandra Gutha, Erik Englesson, Alexandra Hotti, Carlo Saccardi, Oskar Kviman, Jens Lagergren, Ricardo Vinuesa, Hossein Azizpour
Feature selection is a crucial task in settings where data is high-dimensional or acquiring the full set of features is costly.
1 code implementation • 6 Apr 2023 • Erik Englesson, Amir Mehrpanah, Hossein Azizpour
A natural way of estimating heteroscedastic label noise in regression is to model the observed (potentially noisy) target as a sample from a normal distribution, whose parameters can be learned by minimizing the negative log-likelihood.
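A minimal PyTorch sketch of this modeling choice (an illustration only, not the paper's implementation; the network, layer sizes, and data are assumed placeholders): a regressor outputs a per-sample mean and log-variance and is trained with the Gaussian negative log-likelihood.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Predicts both the mean and the log-variance of a Gaussian over the target."""
    def __init__(self, in_dim=10, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # per-sample (heteroscedastic) noise level

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

model = HeteroscedasticRegressor()
nll = nn.GaussianNLLLoss()              # expects (mean, target, variance)
x, y = torch.randn(32, 10), torch.randn(32, 1)  # toy data, illustrative only
mean, logvar = model(x)
loss = nll(mean, y, logvar.exp())       # exponentiate so the variance is positive
loss.backward()
```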
1 code implementation • 21 Sep 2022 • Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour
The ability of overparameterized deep networks to interpolate noisy data while still generalizing well has recently been characterized in terms of the double descent curve for the test error.
no code implementations • 4 Oct 2021 • Erik Englesson, Hossein Azizpour
Consistency regularization is a commonly used technique for semi-supervised and self-supervised learning.
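As a generic sketch of the idea (not this paper's specific method; `model` and `augment` are assumed placeholders), consistency regularization penalizes disagreement between predictions on two stochastic augmentations of the same unlabeled input:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    # Two independently augmented views of the same unlabeled batch.
    logits_a = model(augment(x_unlabeled))
    logits_b = model(augment(x_unlabeled))
    log_p_a = F.log_softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1).detach()  # stop-gradient on one view
    # KL divergence between the two predictive distributions.
    return F.kl_div(log_p_a, p_b, reduction="batchmean")
```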
1 code implementation • NeurIPS 2021 • Erik Englesson, Hossein Azizpour
Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with standard categorical loss functions, e.g., cross entropy (CE), to improve their learnability; a generic sketch of such a combination is given below.
Ranked #17 on Image Classification on mini WebVision 1.0
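A minimal sketch of one such combination (a simple convex mixture of CE and MAE on the predicted probabilities; illustrative only, not the loss proposed in this paper, and the weight `alpha` is an assumed hyperparameter):

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, alpha=0.5):
    """Convex combination of cross entropy and a noise-robust MAE term.

    targets: integer class labels of shape (batch,).
    """
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    ce = F.cross_entropy(logits, targets)
    mae = (probs - one_hot).abs().sum(dim=1).mean()  # noise-robust term
    return alpha * ce + (1 - alpha) * mae
```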
no code implementations • 12 Jun 2019 • Erik Englesson, Hossein Azizpour
In this work we aim to obtain computationally-efficient uncertainty estimates with deep networks.
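One common route to computationally efficient, single-pass uncertainty estimates is to distill an ensemble into a single student network that matches the ensemble's averaged predictive distribution. The sketch below illustrates that generic idea only (it is not necessarily the method of this work; `student`, `ensemble`, and `temperature` are assumed placeholders):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student, ensemble, x, temperature=1.0):
    # Teacher target: the ensemble's averaged softmax prediction.
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(m(x) / temperature, dim=1) for m in ensemble]
        ).mean(dim=0)
    # Student is trained to match the teacher; at test time only the
    # student's single forward pass is needed.
    student_log_probs = F.log_softmax(student(x) / temperature, dim=1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
```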