no code implementations • 19 Nov 2023 • Chanhui Lee, Juhyeon Kim, Yongjun Jeong, Juhyun Lyu, Junghee Kim, Sangmin Lee, Sangjun Han, Hyeokjun Choe, Soyeon Park, Woohyung Lim, Sungbin Lim, Sanghack Lee
Scaling laws have brought Pre-trained Language Models (PLMs) into the field of causal reasoning.
no code implementations • 1 Aug 2023 • Taehyun Yoon, Jinwon Choi, Hyokun Yun, Sungbin Lim
Our study finds that a specific range of variable assignment rates (coverage) yields high-quality feasible solutions, and we suggest that optimizing the coverage bridges the gap between the learning and MIP objectives.
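As a rough illustration of the coverage idea (a hypothetical helper, not the paper's model: `fix_by_coverage` and its confidence thresholding are my own sketch), one can fix only the most confident fraction of predicted binary assignments and hand the remaining variables to the MIP solver:

```python
import numpy as np

def fix_by_coverage(probs, coverage):
    """Fix the most confident fraction (`coverage`) of binary variables.

    probs    : predicted probability that each binary variable equals 1
    coverage : fraction of variables to fix; the rest is left to the solver
    Returns {variable index: fixed value} as a partial assignment.
    """
    probs = np.asarray(probs, dtype=float)
    confidence = np.abs(probs - 0.5)          # distance from "undecided"
    k = int(coverage * len(probs))            # how many variables to fix
    chosen = np.argsort(-confidence)[:k]      # most confident first
    return {int(i): int(probs[i] > 0.5) for i in chosen}

# Example: fix 50% of the variables, let the solver decide the rest.
partial = fix_by_coverage([0.97, 0.52, 0.08, 0.45, 0.99], coverage=0.5)
print(partial)  # {4: 1, 0: 1}
```

Sweeping `coverage` then trades off how strongly the learned model constrains the search against how much slack the exact solver retains.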
1 code implementation • 13 Feb 2023 • Jaeyoung Kim, Dongbin Na, Sungchul Choi, Sungbin Lim
We find that an ensemble model overfitted to the training set shows sub-par calibration performance, and we also observe that PLMs trained with a confidence penalty loss exhibit a trade-off between calibration and accuracy.
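For context, a confidence penalty loss (in the sense of Pereyra et al., 2017; the paper's exact formulation and coefficients are not reproduced here) subtracts a scaled entropy bonus from cross-entropy so that over-confident predictions are penalized; a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    """Cross-entropy minus a scaled entropy bonus (confidence penalty).

    Penalizing low-entropy (over-confident) output distributions tends to
    improve calibration, at a possible cost in accuracy.
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    return ce - beta * entropy  # low entropy => larger effective loss

logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
confidence_penalty_loss(logits, targets).backward()
```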
1 code implementation • 22 Dec 2021 • Aigerim Bogyrbayeva, Taehyun Yoon, Hanbum Ko, Sungbin Lim, Hyokun Yun, Changhyun Kwon
Reinforcement learning has recently shown promise in learning high-quality solutions for many combinatorial optimization problems.
no code implementations • 29 Sep 2021 • Aigerim Bogyrbayeva, Taehyun Yoon, Hanbum Ko, Sungbin Lim, Hyokun Yun, Changhyun Kwon
A stateless attention-based decoder fails to achieve such coordination between vehicles.
no code implementations • 29 Sep 2021 • Minsub Lee, Junhyun Park, Sojin Jang, Chanhui Lee, Hyungjoo Cho, Minsuk Shin, Sungbin Lim
Recently, Bootstrapping (Attentive) Neural Processes (B(A)NP) proposed a bootstrap method to capture functional uncertainty, replacing the latent variable in (Attentive) Neural Processes ((A)NP) and thus overcoming the limitations of the Gaussian assumption on the latent variable.
no code implementations • NeurIPS 2020 • Kyungjae Lee, Hongjun Yang, Sungbin Lim, Songhwai Oh
In simulation, the proposed estimator shows favorable performance compared to existing robust estimators for various $p$ values and, for MAB problems, the proposed perturbation strategy outperforms existing exploration methods.
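As background (a standard robust baseline for this setting, not necessarily the paper's estimator), the truncated empirical mean of Bubeck et al. (2013) handles rewards with only a bounded $p$-th raw moment $u$ by discarding samples above a slowly growing threshold:

```python
import numpy as np

def truncated_mean(samples, p=1.5, u=1.0, delta=0.05):
    """Truncated empirical mean for heavy-tailed rewards.

    Sample i is kept only if |X_i| <= (u * i / log(1/delta))**(1/p),
    assuming E|X|^p <= u; truncated samples are counted as zero.
    """
    x = np.asarray(samples, dtype=float)
    i = np.arange(1, len(x) + 1)
    thresh = (u * i / np.log(1.0 / delta)) ** (1.0 / p)
    return float(np.mean(np.where(np.abs(x) <= thresh, x, 0.0)))

rng = np.random.default_rng(0)
heavy = rng.standard_t(df=2, size=1000)       # heavy-tailed rewards
print(truncated_mean(heavy), heavy.mean())    # robust vs. plain mean
```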
2 code implementations • NeurIPS 2021 • Minsuk Shin, Hyungjoo Cho, Hyun-seok Min, Sungbin Lim
Bootstrapping has been a primary tool for ensemble and uncertainty quantification in machine learning and statistics.
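As a reference point, the classical (non-amortized) bootstrap refits a model on resampled datasets and reads uncertainty off the spread of predictions; a minimal numpy sketch of that baseline (the paper's contribution is, roughly, to amortize this loop with a generator network):

```python
import numpy as np

def bootstrap_predictions(x, y, x_test, fit, n_boot=200, seed=0):
    """Classical bootstrap: refit on resampled data, collect predictions.

    `fit(x, y)` must return a callable predictor; the spread of the
    stacked predictions is a simple uncertainty estimate.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))   # resample with replacement
        preds.append(fit(x[idx], y[idx])(x_test))
    return np.stack(preds)                           # (n_boot, len(x_test))

# Example: bootstrap a linear fit and inspect uncertainty at x = 0.5.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2 * x + 0.1 * rng.standard_normal(50)
fit = lambda xs, ys: np.poly1d(np.polyfit(xs, ys, deg=1))
preds = bootstrap_predictions(x, y, np.array([0.5]), fit)
print(preds.mean(), preds.std())   # predictive mean and spread
```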
1 code implementation • 9 May 2020 • Woonhyuk Baek, Ildoo Kim, Sungwoong Kim, Sungbin Lim
The NeurIPS 2019 AutoDL challenge is a series of six automated machine learning competitions.
3 code implementations • 21 Apr 2020 • Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim
We design and implement a ready-to-use library in PyTorch for performing micro-batch pipeline parallelism with checkpointing proposed by GPipe (Huang et al., 2019).
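A minimal usage sketch of the resulting library, torchgpipe (assuming `pip install torchgpipe`; the model, balance, and chunk count below are illustrative, and on a multi-GPU machine each partition is placed on its own device):

```python
import torch
from torch import nn
from torchgpipe import GPipe

# GPipe requires an nn.Sequential; it splits the children into partitions.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Two partitions (2 + 3 layers); each batch is pipelined as 4 micro-batches.
model = GPipe(model, balance=[2, 3], chunks=4)

x = torch.randn(64, 784).to(model.devices[0])  # input on the first partition
out = model(x)                                 # output on the last partition
out.sum().backward()    # checkpointing recomputes activations in backward
```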
no code implementations • 13 Jun 2019 • Sungwoong Kim, Ildoo Kim, Sungbin Lim, Woonhyuk Baek, Chiheon Kim, Hyungjoo Cho, Boogeon Yoon, Taesup Kim
In this paper, a neural architecture search (NAS) framework is proposed for 3D medical image segmentation to automatically optimize a neural architecture from a large design space.
11 code implementations • NeurIPS 2019 • Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, Sungwoong Kim
Data augmentation is an essential technique for improving generalization ability of deep learning models.
Ranked #3 on Image Classification on SVHN
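The learned-policy idea is now easy to try: torchvision ships an `AutoAugment` transform with an SVHN policy (shown here as an illustration of applying a learned augmentation policy, not the Fast AutoAugment search itself; requires torchvision >= 0.11):

```python
from PIL import Image
from torchvision import transforms

# Apply a learned augmentation policy before the usual tensor conversion.
train_transform = transforms.Compose([
    transforms.AutoAugment(transforms.AutoAugmentPolicy.SVHN),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

img = Image.new("RGB", (32, 32))   # placeholder 32x32 image
augmented = train_transform(img)   # randomly augmented tensor
print(augmented.shape)             # torch.Size([3, 32, 32])
```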
no code implementations • 31 Jan 2019 • Kyungjae Lee, Sungyub Kim, Sungbin Lim, Sungjoon Choi, Songhwai Oh
By controlling the entropic index, we can generate various types of entropy, including the SG entropy, and different entropies result in different classes of optimal policies in Tsallis MDPs.
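For reference, the standard Tsallis entropy of a policy $\pi$ with entropic index $q$ is

```latex
S_q(\pi) = \frac{1}{q-1}\Bigl(1 - \sum_{a} \pi(a)^{q}\Bigr),
\qquad
\lim_{q \to 1} S_q(\pi) = -\sum_{a} \pi(a) \ln \pi(a),
```

so $q \to 1$ recovers the Shannon-Gibbs (SG) entropy, while $q = 2$ yields the sparse Tsallis entropy whose optimal policies are sparsemax-like (the paper's own convention may differ by constants).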
no code implementations • 27 Sep 2018 • Sungjoon Choi, Sanghoon Hong, Kyungjae Lee, Sungbin Lim
To this end, we present a novel framework referred to here as ChoiceNet that can robustly infer the target distribution in the presence of inconsistent data.
1 code implementation • CVPR 2020 • Sungjoon Choi, Sanghoon Hong, Kyungjae Lee, Sungbin Lim
In this paper, we focus on weakly supervised learning with noisy training data for both classification and regression problems. We assume that the training outputs are collected from a mixture of a target distribution and correlated noise distributions. Our proposed method simultaneously estimates the target distribution and the quality of each data point, defined as the correlation between the target and the data-generating distribution. The cornerstone of the proposed method is a Cholesky Block that enables modeling dependencies among mixture distributions in a differentiable manner while maintaining a distribution over the network weights. We first provide illustrative examples in both regression and classification tasks to show the effectiveness of the proposed method. The method is then extensively evaluated in a number of experiments, where it consistently shows comparable or superior performance to existing baselines in handling noisy data.
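A minimal sketch of the correlated-sampling idea behind such a block (my paraphrase, not the authors' implementation: for a pair with correlation $\rho$, the $2 \times 2$ Cholesky factor gives $z = \rho w + \sqrt{1-\rho^{2}}\,\varepsilon$, which is differentiable in $\rho$):

```python
import torch

def correlated_sample(w_target, rho):
    """Draw a sample whose correlation with `w_target` is `rho`.

    Applies the Cholesky factor of [[1, rho], [rho, 1]] to
    (w_target, eps); differentiable in rho, so a network can learn
    per-sample correlations (data quality) end to end.
    """
    eps = torch.randn_like(w_target)
    return rho * w_target + torch.sqrt(1.0 - rho ** 2) * eps

w = torch.randn(10000)
z = correlated_sample(w, torch.tensor(0.8))
print(torch.corrcoef(torch.stack([w, z]))[0, 1])  # approximately 0.8
```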
1 code implementation • 23 Oct 2017 • Hyungjoo Cho, Sungbin Lim, Gunho Choi, Hyun-seok Min
Consequently, our model not only transfers the initial stain styles to the desired one but also prevents degradation of the tumor classifier on transferred images.
1 code implementation • 3 Sep 2017 • Sungjoon Choi, Kyungjae Lee, Sungbin Lim, Songhwai Oh
The proposed uncertainty-aware learning-from-demonstration method outperforms the compared methods in terms of safety on a complex real-world driving dataset.