Search Results for author: Erik Englesson

Found 6 papers, 4 papers with code

Indirectly Parameterized Concrete Autoencoders

1 code implementation · 1 Mar 2024 · Alfred Nilsson, Klas Wijk, Sai Bharath Chandra Gutha, Erik Englesson, Alexandra Hotti, Carlo Saccardi, Oskar Kviman, Jens Lagergren, Ricardo Vinuesa, Hossein Azizpour

Feature selection is a crucial task in settings where data is high-dimensional or acquiring the full set of features is costly.

feature selection

Logistic-Normal Likelihoods for Heteroscedastic Label Noise

1 code implementation · 6 Apr 2023 · Erik Englesson, Amir Mehrpanah, Hossein Azizpour

A natural way of estimating heteroscedastic label noise in regression is to model the observed (potentially noisy) target as a sample from a normal distribution, whose parameters can be learned by minimizing the negative log-likelihood.

Classification
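
Below is a minimal sketch of the standard heteroscedastic Gaussian negative log-likelihood setup that the abstract above refers to as the natural baseline, not the paper's logistic-normal formulation: the network predicts a mean and a log-variance per input, and both are trained by minimizing the NLL of the observed target. All names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Predicts a per-input mean and log-variance (hypothetical architecture)."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)  # predict log sigma^2 for numerical stability

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, target):
    # 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2), dropping the constant term
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Usage sketch with random data (shapes only, not from the paper)
model = HeteroscedasticRegressor(in_dim=10)
x, y = torch.randn(32, 10), torch.randn(32, 1)
mean, log_var = model(x)
loss = gaussian_nll(mean, log_var, y)
loss.backward()
```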

Deep Double Descent via Smooth Interpolation

1 code implementation · 21 Sep 2022 · Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour

The ability of overparameterized deep networks to interpolate noisy data, while at the same time showing good generalization performance, has been recently characterized in terms of the double descent curve for the test error.

Consistency Regularization Can Improve Robustness to Label Noise

no code implementations · 4 Oct 2021 · Erik Englesson, Hossein Azizpour

Consistency regularization is a commonly-used technique for semi-supervised and self-supervised learning.

Self-Supervised Learning
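
As a rough illustration of the generic consistency-regularization idea mentioned in the abstract above (not necessarily the exact objective used in the paper), predictions on two augmented views of the same input can be encouraged to agree via a KL term added to the supervised loss. Function names and the weighting parameter are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    # Treat the weak-augmentation prediction as the (detached) target distribution.
    with torch.no_grad():
        p_weak = F.softmax(model(x_weak), dim=1)
    log_p_strong = F.log_softmax(model(x_strong), dim=1)
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")

def total_loss(model, x_weak, x_strong, labels, lam=1.0):
    # Supervised cross-entropy plus a weighted consistency term.
    supervised = F.cross_entropy(model(x_weak), labels)
    return supervised + lam * consistency_loss(model, x_weak, x_strong)
```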

Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels

1 code implementation · NeurIPS 2021 · Erik Englesson, Hossein Azizpour

Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with a standard categorical loss function, e.g., cross entropy (CE), to improve their learnability.

Learning with noisy labels
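
The sketch below illustrates the kind of combined loss from prior work that the abstract above mentions, a convex mix of noise-robust MAE and standard CE over class probabilities; it is not the paper's generalized Jensen-Shannon divergence loss, and the mixing weight is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def ce_mae_loss(logits, targets, alpha=0.5):
    # alpha interpolates between standard CE (alpha=1) and noise-robust MAE (alpha=0)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    ce = F.cross_entropy(logits, targets)
    mae = (probs - one_hot).abs().sum(dim=1).mean()
    return alpha * ce + (1.0 - alpha) * mae

# Usage sketch with random logits and labels
logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = ce_mae_loss(logits, labels)
loss.backward()
```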

Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation

no code implementations · 12 Jun 2019 · Erik Englesson, Hossein Azizpour

In this work we aim to obtain computationally-efficient uncertainty estimates with deep networks.

Knowledge Distillation
