Search Results for author: Taejong Joo

Found 5 papers, 1 paper with code

Revisiting Explicit Regularization in Neural Networks for Reliable Predictive Probability

no code implementations 28 Sep 2020 Taejong Joo, Uijung Chung

In this work, we revisit the role and importance of explicit regularization methods for the generalization of predictive probability, not just generalization under the 0-1 loss.

Memorization
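To make that distinction concrete, here is a minimal sketch (not from the paper, with made-up probability values) showing that two classifiers with identical 0-1 loss can differ sharply in predictive-probability quality, scored here by negative log-likelihood:

```python
import numpy as np

def accuracy(probs, labels):
    # Complement of the 0-1 loss: fraction of correct argmax predictions.
    return np.mean(np.argmax(probs, axis=1) == labels)

def nll(probs, labels):
    # Negative log-likelihood of the true labels under the model.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

labels = np.array([0, 0, 1, 1])
# Both models misclassify the last example, so their 0-1 loss is identical.
calibrated    = np.array([[0.8, 0.2], [0.7, 0.3], [0.3, 0.7], [0.55, 0.45]])
overconfident = np.array([[0.99, 0.01], [0.99, 0.01], [0.01, 0.99], [0.99, 0.01]])

for name, p in [("calibrated", calibrated), ("overconfident", overconfident)]:
    print(f"{name:>13}  acc={accuracy(p, labels):.2f}  nll={nll(p, labels):.2f}")
# Same accuracy, but the overconfident model pays a far larger NLL
# on its single mistake.
```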

Revisiting Explicit Regularization in Neural Networks for Well-Calibrated Predictive Uncertainty

no code implementations 11 Jun 2020 Taejong Joo, Uijung Chung

From the statistical learning perspective, complexity control via explicit regularization is a necessity for improving the generalization of over-parameterized models.
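As a toy illustration of that claim (mine, not the paper's): in an over-parameterized linear model, an explicit L2 penalty is one direct way to control the complexity of the fitted solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # fewer samples than parameters
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.normal(size=n)

def ridge(X, y, lam):
    # Closed-form minimizer of ||Xw - y||^2 + lam * ||w||^2.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in [1e-8, 1.0]:
    w = ridge(X, y, lam)
    print(f"lambda={lam:g}  ||w||={np.linalg.norm(w):.2f}")
# A larger penalty shrinks the solution norm, trading training fit
# for lower model complexity.
```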

Being Bayesian about Categorical Probability

1 code implementation ICML 2020 Taejong Joo, Uijung Chung, Min-Gwan Seo

Neural networks use the softmax as a building block in classification tasks, but it suffers from an overconfidence problem and lacks the ability to represent uncertainty.
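The overconfidence the abstract refers to can be seen directly from the softmax itself; a minimal sketch (illustrating the problem only, not the paper's Bayesian remedy):

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])
for scale in [1, 3, 10]:
    print(scale, np.round(softmax(scale * logits), 3))
# The argmax never changes, yet the reported confidence approaches 1
# as the logits grow: the same relative evidence, ever more certainty.
```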

Regularizing activations in neural networks via distribution matching with the Wasserstein metric

no code implementations ICLR 2020 Taejong Joo, Donggu Kang, Byunghoon Kim

We propose the projected error function regularization loss (PER) that encourages activations to follow the standard normal distribution.

Image Classification, Language Modelling
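A rough sketch of the general mechanism, with the caveat that this is not the paper's exact PER loss: for one-dimensional samples, the 1-Wasserstein distance to N(0, 1) can be estimated by comparing sorted activations against standard normal quantiles, giving a simple distribution-matching penalty. The scipy quantile function used here is my choice for the illustration.

```python
import numpy as np
from scipy.stats import norm

def wasserstein1_to_std_normal(acts):
    # Compare empirical quantiles of the activations with N(0, 1) quantiles;
    # for 1-D distributions this approximates the 1-Wasserstein distance.
    a = np.sort(acts.ravel())
    n = a.size
    q = norm.ppf((np.arange(n) + 0.5) / n)   # midpoint quantile levels
    return np.mean(np.abs(a - q))

rng = np.random.default_rng(0)
well_scaled = rng.normal(0.0, 1.0, size=1024)
drifting    = rng.normal(2.0, 3.0, size=1024)
print(round(wasserstein1_to_std_normal(well_scaled), 3))  # small penalty
print(round(wasserstein1_to_std_normal(drifting), 3))     # large penalty
```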
