Search Results for author: Yongdai Kim

Found 23 papers, 7 papers with code

Posterior concentrations of fully-connected Bayesian neural networks with general priors on the weights

no code implementations • 21 Mar 2024 • Insung Kong, Yongdai Kim

Bayesian approaches for training deep neural networks (BNNs) have received significant interest and have been effectively utilized in a wide range of applications.
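For context, posterior concentration (the property in the title) is conventionally stated as follows: writing f_0 for the true parameter, d for a suitable metric, and ε_n for the target rate, the posterior Π is said to contract at rate ε_n if, for some constant M > 0,

    \Pi\big( f : d(f, f_0) > M \varepsilon_n \;\big|\; X_1, \dots, X_n \big) \longrightarrow 0 \quad \text{in probability.}

This is the standard Ghosal–van der Vaart formulation; the specific metric and rate established by this paper are not shown in the excerpt above.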

Improving Performance of Semi-Supervised Learning by Adversarial Attacks

no code implementations • 8 Aug 2023 • Dongyoon Yang, Kunwoong Kim, Yongdai Kim

Semi-supervised learning (SSL) is built upon the realistic assumption that access to a large amount of labeled data is difficult.

Adversarial Robustness · Image Classification

Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation

1 code implementation • ICCV 2023 • Dongyoon Yang, Insung Kong, Yongdai Kim

For example, our algorithm with only 8% labeled data is comparable to supervised adversarial training algorithms that use all labeled data, in terms of both standard and robust accuracy on CIFAR-10.

Adversarial Robustness · Knowledge Distillation

A Bayesian sparse factor model with adaptive posterior concentration

no code implementations • 29 May 2023 • Ilsang Ohn, Lizhen Lin, Yongdai Kim

In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred.

Bayesian Inference
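For orientation, a generic sparse factor model of the kind referenced here (the paper's exact prior specification is not shown in this excerpt) takes the form

    y_i = \Lambda \eta_i + \epsilon_i, \qquad \eta_i \sim N(0, I_k), \qquad \epsilon_i \sim N(0, \Sigma),

where Λ is a p × k loading matrix. In this paper's setting, both the factor dimensionality k and the set of nonzero entries of Λ are treated as unknown and inferred from the posterior.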

Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference

1 code implementation • 24 May 2023 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim

Bayesian approaches for learning deep neural networks (BNN) have received much attention and have been successfully applied to various applications.

Bayesian Inference · Uncertainty Quantification

Covariate balancing using the integral probability metric for causal inference

1 code implementation • 23 May 2023 • Insung Kong, Yuha Park, Joonhyuk Jung, Kwonsang Lee, Yongdai Kim

However, the existing weighting methods have desirable theoretical properties only when a certain model, either the propensity score or outcome regression model, is correctly specified.

Causal Inference · regression
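The integral probability metric (IPM) named in the title is a standard discrepancy between distributions: for a class of functions F,

    \mathrm{IPM}_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \big| \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{X \sim Q}[f(X)] \big|.

Covariate balancing in this spirit chooses weights so that the weighted treated and control covariate distributions are close in this metric, which is what motivates it as an alternative to methods requiring a correctly specified propensity score or outcome regression model.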

Within-group fairness: A guidance for more sound between-group fairness

no code implementations • 20 Jan 2023 • Sara Kim, Kyusang Yu, Yongdai Kim

We introduce a new concept of fairness, so-called within-group fairness, which requires that AI models be fair for individuals within the same sensitive group as well as for those in different sensitive groups.

Decision Making · Fairness
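For contrast with the paper's new notion: the usual between-group criteria compare sensitive groups in aggregate. A minimal sketch of one such criterion, the demographic parity gap, is below; the helper is hypothetical and the paper's within-group definition itself is not reproduced here.

    import numpy as np

    def demographic_parity_gap(y_pred, s):
        """Between-group fairness measure: absolute difference in positive
        prediction rates across a binary sensitive attribute s.
        Within-group fairness (this paper's notion) additionally asks the
        model to be fair among individuals inside each sensitive group."""
        y_pred, s = np.asarray(y_pred), np.asarray(s)
        return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())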

ODIM: an efficient method to detect outliers via inlier-memorization effect of deep generative models

no code implementations • 11 Jan 2023 • Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim

This study aims to solve the unsupervised outlier detection problem, where training data contain outliers but no label information about inliers and outliers is given.

Memorization · Outlier Detection
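A hedged sketch of the inlier-memorization idea: early in the training of a deep generative model, per-sample losses drop faster for inliers than for outliers, so the early-training loss can serve as an outlier score. The autoencoder below is an illustrative stand-in under that assumption, not ODIM itself, which is built on likelihood-based deep generative models.

    import torch
    import torch.nn as nn

    def im_outlier_scores(x, n_updates=50, lr=1e-3):
        """Train a small autoencoder for only a few updates and score each
        sample by its reconstruction loss; inliers are memorized first, so
        a higher loss suggests an outlier. x: float tensor of shape (n, d).
        Illustrative stand-in for the inlier-memorization effect, not ODIM."""
        d = x.shape[1]
        model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, d))
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(n_updates):          # deliberately stop early
            loss = ((model(x) - x) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():               # per-sample outlier score
            return ((model(x) - x) ** 2).mean(dim=1)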

Improving Adversarial Robustness by Putting More Regularizations on Less Robust Samples

1 code implementation • 7 Jun 2022 • Dongyoon Yang, Insung Kong, Yongdai Kim

Adversarial training, which enhances robustness against adversarial attacks, has received much attention because it is easy to generate human-imperceptible perturbations of data that deceive a given deep neural network.

Adversarial Robustness
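The underlying objective this line of work starts from is the standard min-max formulation of adversarial training; per the title, the paper's contribution is to weight the regularization adaptively across samples rather than to change this basic form:

    \min_{\theta} \; \mathbb{E}_{(x, y)} \Big[ \max_{\|\delta\| \le \epsilon} \ell\big( f_\theta(x + \delta),\, y \big) \Big].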

Masked Bayesian Neural Networks: Computation and Optimality

no code implementations • 2 Jun 2022 • Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Yongdai Kim

As data sizes and computing power increase, the architectures of deep neural networks (DNNs) have become larger and more complex, and there is thus a growing need to simplify them.

Uncertainty Quantification

Learning fair representation with a parametric integral probability metric

1 code implementation • 7 Feb 2022 • Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim

That is, we derive theoretical relations between the fairness of a representation and the fairness of the prediction model built on top of the representation (i.e., using the representation as the input).

Decision Making · Fairness +1

SLIDE: a surrogate fairness constraint to ensure fairness consistency

1 code implementation • 7 Feb 2022 • Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim

As they have a vital effect on social decision making, AI algorithms should be not only accurate but also fair.

Fairness · valid

$L_q$ regularization for Fairness AI robust to sampling bias

no code implementations • 29 Sep 2021 • Yongdai Kim, Sara Kim, Seonghyeon Kim, Kunwoong Kim

To ensure fairness on test data, we develop computationally efficient learning algorithms robust to sampling bias.

Fairness

INN: A Method Identifying Clean-annotated Samples via Consistency Effect in Deep Neural Networks

no code implementations • 29 Jun 2021 • Dongha Kim, Yongchan Choi, Kunwoong Kim, Yongdai Kim

Through various experiments, we demonstrate that the INN method successfully resolves the shortcomings of the memorization effect and thus helps construct more accurate deep prediction models from training data with noisy labels.

Memorization

A likelihood approach to nonparametric estimation of a singular distribution using deep generative models

no code implementations • 9 May 2021 • Minwoo Chae, Dongha Kim, Yongdai Kim, Lizhen Lin

In the considered model, a usual likelihood approach can fail to estimate the target distribution consistently due to the singularity.

Kernel-convoluted Deep Neural Networks with Data Augmentation

1 code implementation • 4 Dec 2020 • Minjin Kim, Young-geun Kim, Dongha Kim, Yongdai Kim, Myunghee Cho Paik

The Mixup method (Zhang et al. 2018), which uses linearly interpolated data, has emerged as an effective data augmentation tool that improves generalization performance and robustness to adversarial examples.

Data Augmentation
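For reference, the Mixup interpolation mentioned above is simple to state in code. The sketch below is plain Mixup (Zhang et al. 2018), not the kernel-convoluted variant proposed in this paper.

    import numpy as np

    def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
        """Plain Mixup: a convex combination of two training examples and
        their (one-hot) labels, with weight lam ~ Beta(alpha, alpha)."""
        rng = rng or np.random.default_rng()
        lam = rng.beta(alpha, alpha)
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2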

Nonconvex sparse regularization for deep neural networks and its optimality

no code implementations • 26 Mar 2020 • Ilsang Ohn, Yongdai Kim

Recent theoretical studies proved that deep neural network (DNN) estimators obtained by minimizing empirical risk with a certain sparsity constraint can attain optimal convergence rates for regression and classification problems.

regression
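Schematically, the sparsity-constrained estimators referenced here are of the form (with θ(f) the weight vector of the network f and s a sparsity level):

    \hat{f} = \arg\min_{f \in \mathcal{F}_{\mathrm{DNN}}} \frac{1}{n} \sum_{i=1}^{n} \ell\big( f(x_i), y_i \big) \quad \text{subject to} \quad \|\theta(f)\|_0 \le s.

Per its title, the paper's approach replaces such explicit constraints with a nonconvex sparse penalty while retaining optimality; the exact penalty is not shown in this excerpt.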

Understanding and Improving Virtual Adversarial Training

no code implementations • 15 Sep 2019 • Dongha Kim, Yongchan Choi, Yongdai Kim

In semi-supervised learning, the virtual adversarial training (VAT) approach is one of the most attractive methods due to its intuitive simplicity and strong performance.
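For reference, VAT (Miyato et al.) regularizes the model toward local distributional smoothness via a virtual adversarial perturbation:

    r_{\mathrm{adv}} = \arg\max_{\|r\|_2 \le \epsilon} \mathrm{KL}\big( p(\cdot \mid x; \hat{\theta}) \,\|\, p(\cdot \mid x + r; \hat{\theta}) \big), \qquad \mathcal{L}_{\mathrm{VAT}}(x; \theta) = \mathrm{KL}\big( p(\cdot \mid x; \hat{\theta}) \,\|\, p(\cdot \mid x + r_{\mathrm{adv}}; \theta) \big),

computed without labels, which is what makes it attractive for semi-supervised learning.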

Smooth function approximation by deep neural networks with general activation functions

no code implementations • 17 Jun 2019 • Ilsang Ohn, Yongdai Kim

Based on our approximation error analysis, we derive the minimax optimality of the deep neural network estimators with the general activation functions in both regression and classification problems.

General Classification

Primal path algorithm for compositional data analysis

no code implementations • 21 Dec 2018 • Jong-June Jeon, Yongdai Kim, Sungho Won, Hosik Choi

To reflect these characteristics, a specific regularized regression model with linear constraints is commonly used.

General Classification · regression
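One common instance of such a constrained, regularized model (whether this paper uses exactly this form is not shown in the excerpt) is sparse log-contrast regression, where the compositional covariates enter through their logarithms and the coefficients satisfy a zero-sum constraint:

    \min_{\beta} \; \frac{1}{2} \| y - Z\beta \|_2^2 + \lambda \|\beta\|_1 \quad \text{subject to} \quad \sum_{j=1}^{p} \beta_j = 0, \qquad Z = \log X.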

Fast convergence rates of deep neural networks for classification

no code implementations • 10 Dec 2018 • Yongdai Kim, Ilsang Ohn, Dongha Kim

In addition, we consider a DNN classifier learned by minimizing the cross-entropy and show that it achieves a fast convergence rate under the condition that the conditional class probabilities of most data are sufficiently close to either zero or one.

Classification · General Classification
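To make the quoted condition concrete: with η(x) = P(Y = 1 | X = x) and a classifier f mapping into (0, 1), the cross-entropy risk is

    \mathcal{L}(f) = -\,\mathbb{E}\big[ Y \log f(X) + (1 - Y) \log\big(1 - f(X)\big) \big],

and "conditional class probabilities of most data close to zero or one" is typically formalized as a small-value condition, roughly that P(δ ≤ η(X) ≤ 1 − δ) is small for small δ; the paper's exact condition may differ.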

On variation of gradients of deep neural networks

no code implementations • 2 Dec 2018 • Yongdai Kim, Dongha Kim

We provide a theoretical explanation of the role of the number of nodes at each layer in deep neural networks.

Fast adversarial training for semi-supervised learning

no code implementations • 27 Sep 2018 • Dongha Kim, Yongchan Choi, Jae-Joon Han, Changkyu Choi, Yongdai Kim

The proposed method generates high-quality bad samples by using the adversarial training employed in VAT.

Density Estimation
