no code implementations • 29 May 2023 • Ilsang Ohn, Lizhen Lin, Yongdai Kim
In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred.
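As a point of reference, here is a minimal numpy sketch of the sparse factor model setup that such a method targets; the dimensions, sparsity level, and spike-and-slab-style masking below are illustrative assumptions, not the paper's prior or posterior sampler.

```python
import numpy as np

# Minimal sketch of a sparse factor model: Y = F @ Lambda.T + noise,
# where most loadings in Lambda are exactly zero.  Dimensions and the
# sparsity level are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n, p, k = 200, 50, 3                      # samples, observed dimension, true factor dimensionality

sparsity = 0.8                            # fraction of loadings set to zero
mask = rng.random((p, k)) > sparsity
Lambda = mask * rng.normal(size=(p, k))   # sparse loading matrix
F = rng.normal(size=(n, k))               # latent factors
Y = F @ Lambda.T + 0.1 * rng.normal(size=(n, p))

# A simple frequentist diagnostic: the eigenvalue spectrum of the sample
# covariance typically shows an elbow near the true dimensionality k,
# which is the quantity the Bayesian method infers jointly with the
# sparse support of Lambda.
eigvals = np.linalg.eigvalsh(np.cov(Y, rowvar=False))[::-1]
print(eigvals[:6])
```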
1 code implementation • 24 May 2023 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim
Bayesian approaches for learning deep neural networks (BNNs) have received much attention and have been successfully applied to a variety of problems.
no code implementations • 16 Feb 2023 • Yihao Fang, Ilsang Ohn, Vijay Gupta, Lizhen Lin
We propose extrinsic and intrinsic deep neural network architectures as general frameworks for deep learning on manifolds.
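To make the extrinsic/intrinsic distinction concrete, the toy snippet below contrasts the two input representations for data on the unit sphere S^2; the choice of manifold and the simple spherical chart are illustrative assumptions, and the networks themselves are omitted.

```python
import numpy as np

# Toy contrast of extrinsic vs. intrinsic representations of manifold data,
# using the unit sphere S^2.  Only the input representations differ; a
# standard feed-forward network could be applied to either.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)     # points on S^2

# Extrinsic representation: ambient Euclidean coordinates obtained from an
# embedding of the manifold into R^3.
extrinsic = x

# Intrinsic representation: local coordinates that respect the manifold's
# geometry, here spherical angles (theta, phi) as a simple chart.
theta = np.arccos(np.clip(x[:, 2], -1.0, 1.0))
phi = np.arctan2(x[:, 1], x[:, 0])
intrinsic = np.stack([theta, phi], axis=1)

print(extrinsic.shape, intrinsic.shape)           # (5, 3) and (5, 2)
```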
no code implementations • 16 Jun 2022 • Ilsang Ohn
We propose a new Bayesian nonparametric prior for latent feature models, which we call the convergent Indian buffet process (CIBP).
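For context, the snippet below simulates the standard Indian buffet process, the prior that the CIBP modifies; under the standard IBP the expected number of active features grows (roughly logarithmically) with the number of observations. The concentration parameter alpha and sample size are illustrative, and this is not an implementation of the CIBP itself.

```python
import numpy as np

# Minimal simulation of the standard Indian buffet process (IBP).
# Customer i reuses an existing feature k with probability m_k / i and then
# draws Poisson(alpha / i) brand-new features.
def sample_ibp(n, alpha=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dish_counts = []                      # times each feature has been used so far
    Z = []                                # binary feature allocations per customer
    for i in range(1, n + 1):
        row = [rng.random() < m / i for m in dish_counts]
        for k, used in enumerate(row):
            dish_counts[k] += used
        new = rng.poisson(alpha / i)      # number of brand-new features
        dish_counts.extend([1] * new)
        row.extend([True] * new)
        Z.append(row)
    K = len(dish_counts)
    return np.array([r + [False] * (K - len(r)) for r in Z])

Z = sample_ibp(100)
print(Z.shape)   # (100, K) with K on the order of alpha * log(100) on average
```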
no code implementations • 2 Jun 2022 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Yongdai Kim
As data size and computing power increase, the architectures of deep neural networks (DNNs) have become increasingly large and complex, creating a growing need to simplify them.
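As a generic baseline for what "simplifying" a network can mean, the snippet below performs magnitude pruning of a weight matrix; this is only a common illustration of network simplification, not the method proposed in the paper, and the keep ratio is an arbitrary choice.

```python
import numpy as np

# Magnitude pruning: zero out all but the largest-magnitude weights.
def magnitude_prune(weights, keep_ratio=0.25):
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]        # k-th largest magnitude
    return weights * (np.abs(weights) >= threshold)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_pruned = magnitude_prune(W, keep_ratio=0.25)
print((W_pruned != 0).mean())                     # roughly 0.25 of the weights remain
```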
1 code implementation • 7 Feb 2022 • Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim
Because they have a significant effect on social decision making, AI algorithms should be not only accurate but also fair.
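One standard way to quantify the fairness of a classifier is the demographic parity gap, the difference in positive-prediction rates across groups defined by a sensitive attribute; the fairness criterion used in the paper may differ, so the snippet below is only a generic illustration with synthetic predictions.

```python
import numpy as np

# Demographic parity gap: difference in positive-prediction rates across groups.
def demographic_parity_gap(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                   # sensitive attribute
y_pred = (rng.random(1000) < 0.5 + 0.2 * group)         # deliberately biased predictions
print(demographic_parity_gap(y_pred.astype(float), group))   # about 0.2
```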
1 code implementation • 7 Feb 2022 • Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim
That is, we derive theoretical relations between the fairness of a representation and the fairness of a prediction model built on top of that representation (i.e., using the representation as its input).
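The toy experiment below illustrates the intuition behind such a relation: as the amount of sensitive-attribute information leaking into a representation shrinks, the demographic parity gap of a fixed predictor built on top of it shrinks as well. The leakage mechanism, the linear head, and the gap measures are illustrative assumptions, not the paper's construction or bounds.

```python
import numpy as np

# Toy link between representation-level and prediction-level fairness.
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, size=n)                       # sensitive attribute

for leakage in [1.0, 0.1, 0.0]:                      # how strongly s leaks into the representation
    z = rng.normal(size=(n, 2))
    z[:, 0] += leakage * s                           # representation with varying leakage
    rep_gap = np.abs(z[s == 1].mean(axis=0) - z[s == 0].mean(axis=0)).max()
    y_pred = (z @ np.array([1.0, 0.5]) > 0).astype(float)   # fixed linear head on the representation
    pred_gap = abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())
    print(f"leakage={leakage:.1f}  representation gap={rep_gap:.2f}  prediction gap={pred_gap:.2f}")
```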
no code implementations • 7 Sep 2021 • Ilsang Ohn, Lizhen Lin
It turns out that, within a predefined family of approximating distributions, this combined variational posterior is the member closest to the posterior over the entire model.
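To illustrate what "closest member of an approximating family" means, the snippet below works out a standard example: the mean-field Gaussian that minimizes KL(q || p) for a correlated bivariate Gaussian posterior p. The posterior used here is an arbitrary example, not the paper's model; the closed-form optimum (matching means, variances from the diagonal of the precision matrix) is a standard fact.

```python
import numpy as np

# Closest mean-field Gaussian q to a correlated Gaussian posterior p in KL(q || p).
mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])              # correlated posterior covariance
Lambda = np.linalg.inv(Sigma)               # posterior precision

q_mean = mu                                 # optimal mean-field means match p's means
q_var = 1.0 / np.diag(Lambda)               # optimal mean-field variances

print("posterior marginal variances:", np.diag(Sigma))
print("mean-field variances:        ", q_var)   # smaller: mean-field under-disperses
```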
no code implementations • 26 Mar 2020 • Ilsang Ohn, Yongdai Kim
Recent theoretical studies proved that deep neural network (DNN) estimators obtained by minimizing empirical risk with a certain sparsity constraint can attain optimal convergence rates for regression and classification problems.
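The snippet below writes out one concrete instance of such a penalized empirical-risk objective for a one-hidden-layer network; the architecture, the L1 penalty (a convex proxy for a sparsity constraint on the number of nonzero weights), and the penalty weight lambda are assumptions for illustration, not the exact constraint analyzed in the theory.

```python
import numpy as np

# Empirical risk (cross-entropy) plus an explicit sparsity penalty on the weights.
def penalized_empirical_risk(params, X, y, lam=1e-3):
    W1, b1, w2, b2 = params
    h = np.maximum(X @ W1 + b1, 0.0)                  # ReLU hidden layer
    logits = h @ w2 + b2
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    risk = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    sparsity = np.abs(W1).sum() + np.abs(w2).sum()    # L1 proxy for the number of nonzero weights
    return risk + lam * sparsity

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(float)
params = (rng.normal(size=(5, 8)) * 0.1, np.zeros(8), rng.normal(size=8) * 0.1, 0.0)
print(penalized_empirical_risk(params, X, y))
```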
no code implementations • 17 Jun 2019 • Ilsang Ohn, Yongdai Kim
Based on our approximation error analysis, we derive the minimax optimality of the deep neural network estimators with the general activation functions in both regression and classification problems.
no code implementations • 10 Dec 2018 • Yongdai Kim, Ilsang Ohn, Dongha Kim
In addition, we consider a DNN classifier learned by minimizing the cross-entropy, and show that the DNN classifier achieves a fast convergence rate under the condition that the conditional class probabilities of most data are sufficiently close to either one or zero.
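The snippet below gives an informal numerical illustration of why this low-noise condition helps: when the conditional class probability eta(x) = P(Y=1 | X=x) is near one or zero, labels are nearly deterministic and the irreducible cross-entropy (the binary entropy of eta) is small. This is only intuition for the condition, not the paper's rate analysis.

```python
import numpy as np

# Binary entropy of eta: the cross-entropy incurred even by the ideal
# predictor that outputs the true conditional probability.
def binary_entropy(eta):
    eta = np.clip(eta, 1e-12, 1 - 1e-12)
    return -(eta * np.log(eta) + (1 - eta) * np.log(1 - eta))

for eta in [0.5, 0.9, 0.99, 0.999]:
    print(f"eta={eta:.3f}  irreducible cross-entropy={binary_entropy(eta):.4f}")
```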