Search Results for author: Dongha Kim

Found 10 papers, 2 papers with code

Optimizing Quantum Convolutional Neural Network Architectures for Arbitrary Data Dimension

no code implementations • 28 Mar 2024 • Changwon Lee, Israel F. Araujo, Dongha Kim, Junghan Lee, Siheon Park, Ju-Young Ryu, Daniel K. Park

Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, opening new directions for both quantum and classical data analysis.

Quantum Machine Learning

ODIM: an efficient method to detect outliers via inlier-memorization effect of deep generative models

no code implementations • 11 Jan 2023 • Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim

This study aims to solve the unsupervised outlier detection problem, where the training data contain outliers but no label information distinguishing inliers from outliers is given.

Memorization • Outlier Detection

Learning fair representation with a parametric integral probability metric

1 code implementation • 7 Feb 2022 • Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim

That is, we derive theoretical relations between the fairness of a representation and the fairness of the prediction model built on top of it (i.e., using the representation as the input).
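For context, an integral probability metric (IPM) measures the distance between two distributions through a class of witness functions; in the fair-representation setting, the two distributions being compared are those of the representation across sensitive groups. A generic definition (the notation below is generic background, not copied from the paper):

$$
d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim P}\, f(X) \;-\; \mathbb{E}_{Y \sim Q}\, f(Y) \right|
$$

A parametric IPM restricts $\mathcal{F}$ to a parametrized family of functions, e.g. neural networks $f_\theta$, making the supremum tractable to approximate.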

Decision Making • Fairness • +1

INN: A Method Identifying Clean-annotated Samples via Consistency Effect in Deep Neural Networks

no code implementations • 29 Jun 2021 • Dongha Kim, Yongchan Choi, Kunwoong Kim, Yongdai Kim

By carrying out various experiments, we demonstrate that the INN method successfully resolves the shortcomings of the memorization effect and is thus helpful for constructing more accurate deep prediction models from training data with noisy labels.

Memorization

A likelihood approach to nonparametric estimation of a singular distribution using deep generative models

no code implementations • 9 May 2021 • Minwoo Chae, Dongha Kim, Yongdai Kim, Lizhen Lin

In the considered model, a usual likelihood approach can fail to estimate the target distribution consistently due to the singularity.

Kernel-convoluted Deep Neural Networks with Data Augmentation

1 code implementation • 4 Dec 2020 • Minjin Kim, Young-geun Kim, Dongha Kim, Yongdai Kim, Myunghee Cho Paik

The Mixup method (Zhang et al. 2018), which uses linearly interpolated data, has emerged as an effective data augmentation tool for improving generalization performance and robustness to adversarial examples.
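As background on the linear interpolation the abstract refers to, Mixup blends a pair of training examples and their one-hot labels with a coefficient drawn from a Beta distribution. A minimal sketch (the `alpha=0.2` value is a commonly used default, not taken from this paper):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Sample the mixing coefficient lam ~ Beta(alpha, alpha),
    # then linearly interpolate both inputs and one-hot labels.
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2  # interpolated input
    y = lam * y1 + (1.0 - lam) * y2  # interpolated soft label
    return x, y

# Toy usage with two 2-class one-hot examples.
x_a, y_a = np.array([1.0, 0.0]), np.array([1.0, 0.0])
x_b, y_b = np.array([0.0, 1.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
```

Because the label is interpolated along with the input, the model is trained toward soft targets on the segment between the two examples, which is the source of the regularization effect.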

Data Augmentation

Understanding and Improving Virtual Adversarial Training

no code implementations • 15 Sep 2019 • Dongha Kim, Yongchan Choi, Yongdai Kim

In semi-supervised learning, the virtual adversarial training (VAT) approach is one of the most attractive methods due to its intuitive simplicity and strong performance.
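As background, VAT perturbs each input along the direction that most changes the model's predictive distribution (measured by KL divergence) and penalizes that change, requiring no labels. A toy sketch of finding this direction by one step of power iteration; the linear-softmax model, finite-difference gradients, and all constants below are illustrative assumptions, not taken from the paper (real VAT backpropagates through the network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def vat_direction(x, predict, xi=0.1, h=1e-3, n_power=1, seed=0):
    # Power-iteration approximation of the unit vector r that locally
    # maximizes KL(predict(x) || predict(x + r)).  Gradients here come
    # from finite differences, so this only suits low-dimensional toys.
    rng = np.random.default_rng(seed)
    p = predict(x)
    d = rng.normal(size=x.shape)  # random start direction
    for _ in range(n_power):
        d = d / np.linalg.norm(d)
        base = kl(p, predict(x + xi * d))
        grad = np.zeros_like(d)
        for i in range(d.size):
            e = np.zeros_like(d)
            e[i] = h
            grad[i] = (kl(p, predict(x + xi * (d + e))) - base) / h
        d = grad  # gradient step of power iteration
    return d / np.linalg.norm(d)

# Toy two-class linear-softmax "model" (weights chosen arbitrarily).
W = np.array([[2.0, -1.0], [-1.0, 2.0]])
predict = lambda x: softmax(W @ x)
x0 = np.array([0.3, 0.1])
r = vat_direction(x0, predict)
```

The returned `r` aligns with the most KL-sensitive direction; the VAT loss then penalizes `KL(predict(x) || predict(x + eps * r))` for a chosen radius `eps`.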

Fast convergence rates of deep neural networks for classification

no code implementations • 10 Dec 2018 • Yongdai Kim, Ilsang Ohn, Dongha Kim

In addition, we consider a DNN classifier learned by minimizing the cross-entropy loss, and show that the DNN classifier achieves a fast convergence rate under the condition that the conditional class probabilities of most data are sufficiently close to either one or zero.

Classification • General Classification

On variation of gradients of deep neural networks

no code implementations • 2 Dec 2018 • Yongdai Kim, Dongha Kim

We provide a theoretical explanation of the role of the number of nodes at each layer in deep neural networks.

Fast adversarial training for semi-supervised learning

no code implementations • 27 Sep 2018 • Dongha Kim, Yongchan Choi, Jae-Joon Han, Changkyu Choi, Yongdai Kim

The proposed method generates high-quality bad samples by means of the adversarial training used in VAT.

Density Estimation
