no code implementations • 28 Mar 2024 • Changwon Lee, Israel F. Araujo, Dongha Kim, Junghan Lee, Siheon Park, Ju-Young Ryu, Daniel K. Park
Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, paving new directions for both quantum and classical data analysis.
no code implementations • 11 Jan 2023 • Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim
This study aims to solve the unsupervised outlier detection problem, where the training data contain outliers but no label information about inliers and outliers is given.
1 code implementation • 7 Feb 2022 • Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim
That is, we derive theoretical relations between the fairness of a representation and the fairness of the prediction model built on top of that representation (i.e., using the representation as its input).
no code implementations • 29 Jun 2021 • Dongha Kim, Yongchan Choi, Kunwoong Kim, Yongdai Kim
Through various experiments, we demonstrate that the INN method successfully resolves the shortcomings of the memorization effect and is thus helpful for constructing more accurate deep prediction models from training data with noisy labels.
no code implementations • 9 May 2021 • Minwoo Chae, Dongha Kim, Yongdai Kim, Lizhen Lin
In the considered model, a usual likelihood approach can fail to estimate the target distribution consistently due to the singularity.
1 code implementation • 4 Dec 2020 • Minjin Kim, Young-geun Kim, Dongha Kim, Yongdai Kim, Myunghee Cho Paik
The Mixup method (Zhang et al. 2018), which uses linearly interpolated data, has emerged as an effective data augmentation tool for improving generalization performance and robustness to adversarial examples.
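The linear interpolation underlying Mixup can be sketched as follows; a minimal version assuming one-hot labels, with the mixing weight drawn from a Beta(alpha, alpha) distribution as in Zhang et al. (2018):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Linearly interpolate a pair of examples and their one-hot labels.
    alpha controls the Beta distribution for the mixing weight lambda."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # lambda in (0, 1)
    x = lam * x1 + (1 - lam) * x2         # mixed input
    y = lam * y1 + (1 - lam) * y2         # mixed (soft) label
    return x, y

# Usage: mix two toy examples with one-hot labels
xa, ya = np.array([1.0, 0.0]), np.array([1.0, 0.0])
xb, yb = np.array([0.0, 1.0]), np.array([0.0, 1.0])
xm, ym = mixup(xa, ya, xb, yb)
```

The mixed label is a convex combination of the originals, so it remains a valid probability vector; training on such pairs encourages linear behavior between training examples.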
no code implementations • 15 Sep 2019 • Dongha Kim, Yongchan Choi, Yongdai Kim
In semi-supervised learning, the virtual adversarial training (VAT) approach is one of the most attractive methods due to its intuitive simplicity and strong performance.
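The core step of VAT is finding the small perturbation of an input that most changes the model's predictive distribution, without using labels. Below is a toy sketch of one power-iteration estimate of that direction; the softmax-linear model, the finite-difference gradient, and the constants `xi`, `eps`, `h` are all illustrative assumptions, not part of any paper above:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def vat_direction(W, x, xi=0.1, eps=1.0, n_power=1, h=1e-5, rng=None):
    """Estimate the virtual adversarial direction: the perturbation r with
    ||r|| = eps that most increases KL(p(x) || p(x + r)) for the toy
    softmax-linear model p(x) = softmax(W x). Gradients with respect to the
    trial direction d are taken by finite differences (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    p = softmax(W @ x)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    for _ in range(n_power):
        base = kl(p, softmax(W @ (x + xi * d)))
        grad = np.zeros_like(d)
        for i in range(len(d)):
            dd = d.copy()
            dd[i] += h
            grad[i] = (kl(p, softmax(W @ (x + xi * dd))) - base) / h
        d = grad / (np.linalg.norm(grad) + 1e-12)  # power-iteration update
    return eps * d

# Usage on a toy 3-class, 2-feature model
W = np.array([[2.0, -1.0], [0.5, 1.5], [-1.0, 0.0]])
x = np.array([0.3, -0.2])
r = vat_direction(W, x, eps=0.5)
```

In practice this gradient is obtained by backpropagation rather than finite differences, and the resulting KL term is minimized as a regularizer on both labeled and unlabeled data.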
no code implementations • 10 Dec 2018 • Yongdai Kim, Ilsang Ohn, Dongha Kim
In addition, we consider a DNN classifier learned by minimizing the cross-entropy, and show that the DNN classifier achieves a fast convergence rate under the condition that the conditional class probabilities of most data are sufficiently close to either zero or one.
no code implementations • 2 Dec 2018 • Yongdai Kim, Dongha Kim
We provide a theoretical explanation of the role of the number of nodes at each layer in deep neural networks.
no code implementations • 27 Sep 2018 • Dongha Kim, Yongchan Choi, Jae-Joon Han, Changkyu Choi, Yongdai Kim
The proposed method generates high-quality bad samples by using the adversarial training employed in VAT.