Search Results for author: Ilsang Ohn

Found 11 papers, 3 papers with code

A Bayesian sparse factor model with adaptive posterior concentration

no code implementations • 29 May 2023 • Ilsang Ohn, Lizhen Lin, Yongdai Kim

In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred.

Bayesian Inference
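As a rough illustration only, the numpy sketch below draws data from a generic sparse factor model in which each loading is zero with some probability (a spike-and-slab-style prior). The dimensions, inclusion probability, and prior form are assumptions for illustration, not the paper's construction or its posterior inference method.

```python
import numpy as np

# Illustrative sketch (not the paper's prior): generate data from a sparse
# factor model Y = Lambda @ F + noise, where the loading matrix Lambda is sparse.
rng = np.random.default_rng(0)

p, k, n = 30, 3, 200          # observed dim, latent factor dim, sample size
sparsity = 0.2                # assumed prior inclusion probability (illustrative)

# Spike-and-slab-style loadings: each entry is nonzero with probability `sparsity`.
mask = rng.random((p, k)) < sparsity
Lambda = mask * rng.normal(0.0, 1.0, size=(p, k))

factors = rng.normal(size=(k, n))          # latent factors
noise = rng.normal(0.0, 0.5, size=(p, n))  # idiosyncratic noise
Y = Lambda @ factors + noise               # observed data, shape (p, n)

print("nonzero loadings:", int(mask.sum()), "of", p * k)
```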

Masked Bayesian Neural Networks : Theoretical Guarantee and its Posterior Inference

1 code implementation • 24 May 2023 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim

Bayesian approaches for learning deep neural networks (BNN) have received much attention and have been successfully applied to various applications.

Bayesian Inference • Uncertainty Quantification
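To give a concrete picture of the general masking idea (binary masks multiplying network weights so that connections can be switched off), here is a minimal numpy sketch. The shapes, masks, and inclusion probability are illustrative assumptions; the paper's masked BNN places priors on masks and weights and infers them jointly, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_forward(x, W1, m1, W2, m2):
    """Forward pass of a tiny two-layer network whose weights are
    element-wise multiplied by binary masks (the 'masking' idea)."""
    h = np.maximum(0.0, x @ (W1 * m1))   # ReLU hidden layer with masked weights
    return h @ (W2 * m2)

# Illustrative shapes and masks; a real masked BNN would infer these from data.
d_in, d_hidden, d_out = 5, 16, 1
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))
m1 = (rng.random(W1.shape) < 0.5).astype(float)   # assumed inclusion prob. 0.5
m2 = (rng.random(W2.shape) < 0.5).astype(float)

x = rng.normal(size=(8, d_in))
print(masked_forward(x, W1, m1, W2, m2).shape)    # (8, 1)
```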

Intrinsic and extrinsic deep learning on manifolds

no code implementations • 16 Feb 2023 • Yihao Fang, Ilsang Ohn, Vijay Gupta, Lizhen Lin

We propose extrinsic and intrinsic deep neural network architectures as general frameworks for deep learning on manifolds.
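As a loose illustration of the "extrinsic" idea (represent manifold-valued inputs in an ambient Euclidean space and then apply an ordinary network), the sketch below feeds points on the unit sphere, via their ambient coordinates, into a small feed-forward network. The embedding and architecture are assumptions for illustration, not the paper's constructions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 'extrinsic' pipeline: sphere-valued inputs are represented by
# their ambient R^3 coordinates (an embedding) and fed to a plain network.
def random_sphere_points(n):
    x = rng.normal(size=(n, 3))
    return x / np.linalg.norm(x, axis=1, keepdims=True)   # project to S^2

def tiny_net(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

X = random_sphere_points(100)                    # manifold-valued inputs
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
print(tiny_net(X, W1, b1, W2, b2).shape)         # (100, 1)
```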

The convergent Indian buffet process

no code implementations • 16 Jun 2022 • Ilsang Ohn

We propose a new Bayesian nonparametric prior for latent feature models, which we call the convergent Indian buffet process (CIBP).
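For orientation, the sketch below simulates the classical Indian buffet process, i.e., the construction that the CIBP modifies; the CIBP itself changes the prior so that the number of latent features behaves well as the sample size grows, and that modification is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_ibp(n_customers, alpha):
    """Simulate the classical Indian buffet process with concentration alpha.
    Returns a binary observation-by-feature matrix. The convergent IBP of the
    paper modifies this prior; this is only the standard construction."""
    dishes = []          # list of feature columns (0/1 entries per customer)
    counts = []          # how many customers chose each dish so far
    for i in range(1, n_customers + 1):
        row = []
        for k, c in enumerate(counts):
            take = rng.random() < c / i          # popularity-weighted choice
            row.append(int(take))
            counts[k] += int(take)
        n_new = rng.poisson(alpha / i)           # brand-new dishes for customer i
        for col, bit in zip(dishes, row):
            col.append(bit)
        for _ in range(n_new):
            dishes.append([0] * (i - 1) + [1])
            counts.append(1)
    return np.array(dishes).T if dishes else np.zeros((n_customers, 0))

Z = sample_ibp(10, alpha=2.0)
print(Z.shape)   # (10, number of sampled features)
```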

Masked Bayesian Neural Networks : Computation and Optimality

no code implementations • 2 Jun 2022 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Yongdai Kim

As data size and computing power increase, the architectures of deep neural networks (DNNs) have become larger and more complex, and thus there is a growing need to simplify them.

Uncertainty Quantification

SLIDE: a surrogate fairness constraint to ensure fairness consistency

1 code implementation • 7 Feb 2022 • Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim

As they have a vital effect on social decision making, AI algorithms should be not only accurate but also fair.

Fairness • valid
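SLIDE's specific surrogate constraint is defined in the paper; purely as a generic illustration of using a smooth surrogate fairness penalty, the sketch below adds the squared gap between group-wise average scores to a logistic loss. The penalty form and weight `lam` are assumptions, not the SLIDE constraint.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_loss(w, X, y, s, lam=1.0):
    """Logistic loss plus a generic smooth surrogate fairness penalty: the
    squared gap between average predicted scores of the two groups encoded by
    the sensitive attribute s. Illustrative only, not the paper's surrogate."""
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[s == 1].mean() - p[s == 0].mean()     # demographic-parity-style gap
    return ce + lam * gap ** 2

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = (rng.random(200) < 0.5).astype(float)
s = (rng.random(200) < 0.5).astype(int)           # binary sensitive attribute
w = np.zeros(5)
print(penalized_loss(w, X, y, s))
```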

Learning fair representation with a parametric integral probability metric

1 code implementation • 7 Feb 2022 • Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim

That is, we derive theoretical relations between the fairness of the representation and the fairness of the prediction model built on top of the representation (i.e., using the representation as the input).

Decision Making • Fairness • +1
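The paper works with a specific parametric critic class for its integral probability metric; as a rough stand-in, the sketch below estimates a discrepancy between group-conditional representations by taking the largest mean difference of a simple parametric critic over random parameter draws. The critic form and number of draws are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def parametric_ipm(z0, z1, n_draws=256):
    """Crude estimate of an integral probability metric between two sets of
    representations: sup over a parametric critic f_theta(z) = tanh(z @ theta)
    of the difference in group means, approximated by random theta draws.
    Illustrative only; the paper defines its own parametric critic class."""
    d = z0.shape[1]
    thetas = rng.normal(size=(n_draws, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)   # unit-norm critics
    gaps = np.tanh(z1 @ thetas.T).mean(axis=0) - np.tanh(z0 @ thetas.T).mean(axis=0)
    return np.abs(gaps).max()

# Representations of two sensitive groups; a fair encoder would make this small.
z_group0 = rng.normal(loc=0.0, size=(300, 4))
z_group1 = rng.normal(loc=0.3, size=(300, 4))
print(round(parametric_ipm(z_group0, z_group1), 4))
```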

Adaptive variational Bayes: Optimality, computation and applications

no code implementations • 7 Sep 2021 • Ilsang Ohn, Lizhen Lin

It turns out that this combined variational posterior is the closest member to the posterior over the entire model in a predefined family of approximating distributions.

Model Selection
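The excerpt refers to combining variational posteriors across models; as a toy illustration of that combination step only, the sketch below mixes per-model variational approximations with weights proportional to exp(ELBO) times a prior model probability. The model list, ELBO values, and prior are fabricated for the example and do not come from the paper.

```python
import numpy as np

# Toy illustration: combine per-model variational posteriors into one mixture,
# weighting each model by exp(ELBO) * prior model probability. The numbers are
# made up; the paper's adaptive variational Bayes defines the actual scheme.
models = ["small", "medium", "large"]
elbos = np.array([-120.3, -118.9, -119.5])        # assumed ELBO values
prior = np.array([0.5, 0.3, 0.2])                 # assumed prior over models

log_w = elbos + np.log(prior)
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()                          # mixture weights over models

for m, w in zip(models, weights):
    print(f"model={m:7s} weight={w:.3f}")
```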

Nonconvex sparse regularization for deep neural networks and its optimality

no code implementations • 26 Mar 2020 • Ilsang Ohn, Yongdai Kim

Recent theoretical studies proved that deep neural network (DNN) estimators obtained by minimizing empirical risk with a certain sparsity constraint can attain optimal convergence rates for regression and classification problems.

regression
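For concreteness, here is a small sketch of one common nonconvex sparsity penalty (a clipped-L1 form) applied to a vector of network weights; the particular penalty and tuning studied in the paper may differ, so treat the formula and threshold below as assumptions.

```python
import numpy as np

def clipped_l1(w, tau=0.05):
    """Clipped-L1 penalty: grows like |w| near zero and flattens at tau,
    a standard example of a nonconvex sparsity-inducing regularizer.
    Whether this matches the paper's penalty is an assumption."""
    return np.minimum(np.abs(w), tau).sum() / tau

rng = np.random.default_rng(6)
weights = rng.normal(scale=0.02, size=1000)       # stand-in for DNN weights
weights[rng.random(1000) < 0.9] = 0.0             # mostly-sparse weight vector

# The clipped penalty roughly counts "active" weights, whereas the L1 norm
# keeps shrinking large weights.
print("clipped-L1:", round(clipped_l1(weights), 2), " L1:", round(np.abs(weights).sum(), 2))
```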

Smooth function approximation by deep neural networks with general activation functions

no code implementations • 17 Jun 2019 • Ilsang Ohn, Yongdai Kim

Based on our approximation error analysis, we derive the minimax optimality of the deep neural network estimators with the general activation functions in both regression and classification problems.

General Classification
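As a toy demonstration of the theme (networks with a non-ReLU activation approximating a smooth function), the sketch below fits the output layer of a one-hidden-layer softplus network to sin(x) by least squares on random features. The setup is illustrative and unrelated to the paper's constructive approximation proofs.

```python
import numpy as np

rng = np.random.default_rng(7)

def softplus(z):
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)   # stable softplus

# Random-feature fit of a smooth target with a general (non-ReLU) activation.
x = np.linspace(-3, 3, 400)[:, None]
y = np.sin(x).ravel()

W = rng.normal(size=(1, 64))
b = rng.normal(size=64)
H = softplus(x @ W + b)                       # hidden features, shape (400, 64)
coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit only the output layer

err = np.max(np.abs(H @ coef - y))
print("max approximation error:", round(float(err), 4))
```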

Fast convergence rates of deep neural networks for classification

no code implementations • 10 Dec 2018 • Yongdai Kim, Ilsang Ohn, Dongha Kim

In addition, we consider a DNN classifier learned by minimizing the cross-entropy, and show that the DNN classifier achieves a fast convergence rate under the condition that the conditional class probabilities of most data are sufficiently close to either one or zero.

Classification • General Classification
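The condition in the excerpt concerns conditional class probabilities near one or zero; as a small numeric illustration of the quantity involved (not of the paper's rate analysis), the sketch below compares the cross-entropy of a confident and an uncertain classifier on the same labels.

```python
import numpy as np

def cross_entropy(p_hat, y):
    """Binary cross-entropy between predicted probabilities and labels."""
    p_hat = np.clip(p_hat, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

y = np.array([1, 0, 1, 0])

# Confident classifier: conditional class probabilities close to one or zero,
# the regime in which the paper's fast rates apply.
confident = np.array([0.99, 0.01, 0.98, 0.02])
# Uncertain classifier: probabilities near 1/2.
uncertain = np.array([0.55, 0.45, 0.60, 0.40])

print("confident CE:", round(cross_entropy(confident, y), 4))
print("uncertain CE:", round(cross_entropy(uncertain, y), 4))
```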
