1 code implementation • 18 Jan 2024 • Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
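To make the querying idea concrete, here is a minimal uncertainty-sampling sketch (a standard active learning baseline, not necessarily this paper's method): the learner queries the unlabeled samples whose predicted class distribution has the highest entropy. All names here are illustrative.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy of each row of predicted class probabilities
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def query_most_uncertain(probs, k=1):
    # Indices of the k pool samples the model is least confident about
    return np.argsort(entropy(probs))[::-1][:k]

pool_probs = np.array([
    [0.98, 0.02],   # confident prediction
    [0.55, 0.45],   # most uncertain -> should be queried
    [0.80, 0.20],
])
print(query_most_uncertain(pool_probs, k=1))  # -> [1]
```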
1 code implementation • 4 Mar 2023 • Hee Suk Yoon, Joshua Tian Jin Tee, Eunseop Yoon, Sunjae Yoon, Gwangsu Kim, Yingzhen Li, Chang D. Yoo
Studies have shown that modern neural networks tend to be poorly calibrated due to over-confident predictions.
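As a sketch of what "over-confident predictions" means and how post-hoc calibration can soften them, here is temperature scaling, a common baseline calibration technique (used for illustration only; it is not claimed to be this paper's method):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; T > 1 flattens over-confident predictions
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[8.0, 1.0, 0.5]])
print(softmax(logits).max())         # near 1.0: over-confident
print(softmax(logits, T=4.0).max())  # lower max probability after scaling
```

In practice the temperature T is fit on a held-out validation set by minimizing the negative log-likelihood; it rescales confidences without changing the predicted class.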
no code implementations • 13 Jun 2022 • Gwangsu Kim, Sangwook Kang
An accelerated failure time (AFT) model assumes a log-linear relationship between failure times and a set of covariates.
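The log-linear AFT assumption is log T = x'β + ε. A toy sketch under simplifying assumptions (fully observed failure times, no censoring — real AFT estimation must handle censoring) shows that ordinary least squares on log failure times then recovers β:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
beta = np.array([0.5, -1.0])
# AFT model: log T = X @ beta + error (no censoring in this toy example)
log_T = X @ beta + rng.normal(scale=0.1, size=n)

# With fully observed failure times, least squares on log T recovers beta
Xd = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.lstsq(Xd, log_T, rcond=None)[0]
print(beta_hat[1:])  # close to [0.5, -1.0]
```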
no code implementations • 29 Sep 2021 • Seong Jin Cho, Gwangsu Kim, Chang D. Yoo
This strategy is valid only when the sample's "closeness" to the decision boundary can be estimated.
2 code implementations • 23 Sep 2021 • Junghyun Lee, Gwangsu Kim, Matt Olfat, Mark Hasegawa-Johnson, Chang D. Yoo
This paper defines fair principal component analysis (PCA) as minimizing the maximum mean discrepancy (MMD) between dimensionality-reduced conditional distributions of different protected classes.
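The MMD criterion in this definition can be sketched with a plain (biased) squared-MMD estimate under an RBF kernel; two protected groups whose reduced representations match in distribution yield a small MMD, while shifted groups yield a large one. This is a generic MMD estimator for illustration, not the paper's optimization procedure:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    # Biased squared MMD estimate between samples X and Y, RBF kernel
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(loc=2.0, size=(200, 2)))
print(same < diff)  # matched distributions give the smaller MMD
```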
no code implementations • 1 Jan 2021 • Seong Jin Cho, Gwangsu Kim, Chang D. Yoo
An active learning strategy that queries unlabeled samples nearest the estimated decision boundary at each step is known to be effective when the distance from a sample to the decision boundary can be explicitly evaluated; however, in many machine learning settings, especially those involving deep learning, a conventional distance such as the $\ell_p$ distance from a sample to the decision boundary is not readily measurable.
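For a linear classifier the sample-to-boundary distance mentioned above *is* explicit: for the boundary w·x + b = 0 it equals |w·x + b| / ||w||, so margin-based querying is straightforward. A minimal sketch (illustrative names, not the paper's method):

```python
import numpy as np

def margin_distance(X, w, b):
    # Exact l2 distance from each sample to the linear boundary w.x + b = 0
    return np.abs(X @ w + b) / np.linalg.norm(w)

w, b = np.array([1.0, -1.0]), 0.0
X = np.array([[2.0, 0.0],    # far from the boundary x1 = x2
              [0.1, 0.0]])   # near the boundary
d = margin_distance(X, w, b)
print(d.argmin())  # -> 1: the nearer sample would be queried first
```

For deep networks no such closed form exists, which is exactly the difficulty the abstract points to.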