no code implementations • 21 Mar 2024 • Insung Kong, Yongdai Kim
Bayesian approaches for training deep neural networks (BNNs) have received significant interest and have been effectively utilized in a wide range of applications.
no code implementations • 8 Aug 2023 • Dongyoon Yang, Kunwoong Kim, Yongdai Kim
Semi-supervised learning (SSL) is a setup built upon the realistic assumption that access to a large amount of labeled data is difficult.
1 code implementation • ICCV 2023 • Dongyoon Yang, Insung Kong, Yongdai Kim
For example, our algorithm with only 8% labeled data is comparable to supervised adversarial training algorithms that use all labeled data, in terms of both standard and robust accuracies on CIFAR-10.
no code implementations • 29 May 2023 • Ilsang Ohn, Lizhen Lin, Yongdai Kim
In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred.
1 code implementation • 24 May 2023 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim
Bayesian approaches for learning deep neural networks (BNNs) have received much attention and have been successfully applied to various problems.
1 code implementation • 23 May 2023 • Insung Kong, Yuha Park, Joonhyuk Jung, Kwonsang Lee, Yongdai Kim
However, the existing weighting methods have desirable theoretical properties only when a certain model, either the propensity score or outcome regression model, is correctly specified.
no code implementations • 20 Jan 2023 • Sara Kim, Kyusang Yu, Yongdai Kim
We introduce a new concept of fairness, called within-group fairness, which requires that AI models be fair for individuals within the same sensitive group as well as for those in different sensitive groups.
no code implementations • 11 Jan 2023 • Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim
This study aims to solve the unsupervised outlier detection problem in which the training data contain outliers but no label information about inliers and outliers is given.
1 code implementation • 7 Jun 2022 • Dongyoon Yang, Insung Kong, Yongdai Kim
Adversarial training, which enhances robustness against adversarial attacks, has received much attention because human-imperceptible perturbations of data can easily deceive a given deep neural network.
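The abstract above does not specify how the perturbations are generated; a standard way to produce such human-imperceptible perturbations is the fast gradient sign method (FGSM), sketched below as an illustration. The function name `fgsm_perturb` and the epsilon value are illustrative choices, not taken from the paper.

```python
import numpy as np

def fgsm_perturb(x, loss_grad, eps=0.03):
    """Fast gradient sign method: nudge each input coordinate by +/- eps
    in the direction that increases the loss, producing a small
    (often human-imperceptible) adversarial perturbation."""
    return x + eps * np.sign(loss_grad)

# Example: perturb a toy input given a gradient of the loss w.r.t. the input.
x_adv = fgsm_perturb(np.zeros(3), np.array([1.0, -2.0, 0.0]), eps=0.1)
```

Adversarial training then minimizes the loss on such perturbed inputs rather than (or in addition to) the clean ones.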
no code implementations • 2 Jun 2022 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Yongdai Kim
As data size and computing power increase, the architectures of deep neural networks (DNNs) have become increasingly complex and large, so there is a growing need to simplify them.
1 code implementation • 7 Feb 2022 • Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim
That is, we derive theoretical relations between the fairness of a representation and the fairness of the prediction model built on top of the representation (i.e., using the representation as the input).
1 code implementation • 7 Feb 2022 • Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim
As they have a vital effect on social decision making, AI algorithms should be not only accurate but also fair.
no code implementations • 29 Sep 2021 • Yongdai Kim, Sara Kim, Seonghyeon Kim, Kunwoong Kim
To ensure fairness on test data, we develop computationally efficient learning algorithms robust to sampling bias.
no code implementations • 29 Jun 2021 • Dongha Kim, Yongchan Choi, Kunwoong Kim, Yongdai Kim
Through various experiments, we demonstrate that the INN method successfully resolves the shortcomings of the memorization effect and thus helps construct more accurate deep prediction models from training data with noisy labels.
no code implementations • 9 May 2021 • Minwoo Chae, Dongha Kim, Yongdai Kim, Lizhen Lin
In the considered model, a usual likelihood approach can fail to estimate the target distribution consistently due to the singularity.
1 code implementation • 4 Dec 2020 • Minjin Kim, Young-geun Kim, Dongha Kim, Yongdai Kim, Myunghee Cho Paik
The Mixup method (Zhang et al., 2018), which trains on linearly interpolated data, has emerged as an effective data augmentation tool for improving generalization performance and robustness to adversarial examples.
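The linear interpolation at the heart of Mixup can be sketched in a few lines: each augmented example is a convex combination of two training examples and of their (one-hot) labels, with the mixing weight drawn from a Beta distribution. This is a minimal sketch of the original Mixup recipe, not of the paper's proposed variant; the function name `mixup` and `alpha=1.0` default are illustrative.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Return a convex combination of two examples and their labels.

    lam ~ Beta(alpha, alpha) controls how much of each example is mixed in.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Example: mix an all-zeros image of class 0 with an all-ones image of class 1.
x, y, lam = mixup(np.zeros(3), np.array([1.0, 0.0]),
                  np.ones(3), np.array([0.0, 1.0]))
```

The mixed label `y` is soft: its two entries are `lam` and `1 - lam`, so the model is trained to predict the mixing proportions rather than a hard class.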
no code implementations • 26 Mar 2020 • Ilsang Ohn, Yongdai Kim
Recent theoretical studies proved that deep neural network (DNN) estimators obtained by minimizing empirical risk with a certain sparsity constraint can attain optimal convergence rates for regression and classification problems.
no code implementations • 15 Sep 2019 • Dongha Kim, Yongchan Choi, Yongdai Kim
In semi-supervised learning, the virtual adversarial training (VAT) approach is one of the most attractive methods due to its intuitive simplicity and strong performance.
no code implementations • 17 Jun 2019 • Ilsang Ohn, Yongdai Kim
Based on our approximation error analysis, we derive the minimax optimality of the deep neural network estimators with the general activation functions in both regression and classification problems.
no code implementations • 21 Dec 2018 • Jong-June Jeon, Yongdai Kim, Sungho Won, Hosik Choi
To reflect these characteristics, a specific regularized regression model with linear constraints is commonly used.
no code implementations • 10 Dec 2018 • Yongdai Kim, Ilsang Ohn, Dongha Kim
In addition, we consider a DNN classifier learned by minimizing the cross-entropy, and show that it achieves a fast convergence rate under the condition that the conditional class probabilities of most data are sufficiently close to either zero or one.
no code implementations • 2 Dec 2018 • Yongdai Kim, Dongha Kim
We provide a theoretical explanation of the role of the number of nodes at each layer in deep neural networks.
no code implementations • 27 Sep 2018 • Dongha Kim, Yongchan Choi, Jae-Joon Han, Changkyu Choi, Yongdai Kim
The proposed method generates high-quality bad samples by using the adversarial training employed in VAT.