Search Results for author: Sungbin Lim

Found 17 papers, 9 papers with code

Threshold-aware Learning to Generate Feasible Solutions for Mixed Integer Programs

no code implementations • 1 Aug 2023 • Taehyun Yoon, Jinwon Choi, Hyokun Yun, Sungbin Lim

Our study finds that a specific range of variable assignment rates (coverage) yields high-quality feasible solutions, and we suggest that optimizing the coverage bridges the gap between the learning and MIP objectives.
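As a rough illustration of the coverage idea (a hedged sketch, not the authors' method; `partial_assignment` and its confidence rule are assumptions), one can fix only the most confidently predicted fraction of binary variables and leave the rest to the MIP solver:

```python
# Hedged sketch: fix the most confident `coverage` fraction of binary
# variables from a model's predicted probabilities; the remaining
# variables stay free for the MIP solver to decide.
import numpy as np

def partial_assignment(probs: np.ndarray, coverage: float):
    confidence = np.abs(probs - 0.5)           # distance from "undecided"
    k = int(coverage * len(probs))             # how many variables to fix
    fixed = np.argsort(-confidence)[:k]        # most confident indices first
    values = (probs[fixed] > 0.5).astype(int)  # round fixed variables to {0, 1}
    return fixed, values

probs = np.array([0.97, 0.52, 0.08, 0.61, 0.99])
idx, val = partial_assignment(probs, coverage=0.6)  # fixes 3 of 5 variables
```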

Combinatorial Optimization

Bag of Tricks for In-Distribution Calibration of Pretrained Transformers

1 code implementation • 13 Feb 2023 • Jaeyoung Kim, Dongbin Na, Sungchul Choi, Sungbin Lim

We find that an ensemble model overfitted to the training set shows sub-par calibration performance, and we also observe that PLMs trained with a confidence penalty loss face a trade-off between calibration and accuracy.
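For reference, the standard metric behind such calibration studies is the expected calibration error (ECE); a minimal sketch (the bin count and input shapes are assumptions):

```python
# Minimal ECE sketch: bin predictions by confidence and average the gap
# between mean confidence and accuracy, weighted by each bin's size.
import numpy as np

def expected_calibration_error(confs, correct, n_bins=15):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confs > lo) & (confs <= hi)
        if mask.any():
            gap = abs(confs[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap           # bin weight = fraction of samples
    return ece
```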

Data Augmentation • Ensemble Learning • +2

Neural Bootstrapping Attention for Neural Processes

no code implementations • 29 Sep 2021 • Minsub Lee, Junhyun Park, Sojin Jang, Chanhui Lee, Hyungjoo Cho, Minsuk Shin, Sungbin Lim

Recently, Bootstrapping (Attentive) Neural Processes (B(A)NP) introduced a bootstrap method to capture functional uncertainty that can replace the latent variable in (Attentive) Neural Processes ((A)NP), thus overcoming the limitations of the Gaussian assumption on the latent variable.
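The underlying bootstrap idea can be sketched independently of the architecture (a hedged sketch; `predict` is a stand-in for any conditional model such as an (A)NP decoder, not the paper's method):

```python
# Hedged sketch: paired bootstrap over the context set; functional
# uncertainty is read off the spread of predictions across resamples.
import numpy as np

def bootstrap_predict(predict, x_ctx, y_ctx, x_tgt, n_boot=50, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x_ctx), size=len(x_ctx))  # resample pairs
        preds.append(predict(x_ctx[idx], y_ctx[idx], x_tgt))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # predictive mean, uncertainty
```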

Bayesian Optimization • Decision Making

Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards

no code implementations • NeurIPS 2020 • Kyungjae Lee, Hongjun Yang, Sungbin Lim, Songhwai Oh

In simulation, the proposed estimator shows favorable performance compared to existing robust estimators for various $p$ values and, for MAB problems, the proposed perturbation strategy outperforms existing exploration methods.
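As background, a classical robust mean estimator for heavy-tailed rewards (not necessarily the paper's proposed estimator) is median-of-means; a minimal sketch:

```python
# Median-of-means: split rewards into blocks, average each block, take
# the median of block means; outliers can corrupt only a few blocks.
import numpy as np

def median_of_means(rewards, n_blocks=5, seed=0):
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(rewards)
    blocks = np.array_split(shuffled, n_blocks)
    return float(np.median([b.mean() for b in blocks]))
```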

Multi-Armed Bandits

Neural Bootstrapper

2 code implementations • NeurIPS 2021 • Minsuk Shin, Hyungjoo Cho, Hyun-seok Min, Sungbin Lim

Bootstrapping has been a primary tool for ensemble and uncertainty quantification in machine learning and statistics.
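A generic way to emulate the classical bootstrap inside SGD training is to reweight per-example losses with multinomial bootstrap weights (a hedged sketch of that generic idea, not the paper's NeuBoots architecture):

```python
# Hedged sketch: multinomial bootstrap weights on per-example losses,
# equivalent in distribution to resampling the mini-batch with replacement.
import torch

def bootstrap_weighted_loss(per_example_loss: torch.Tensor) -> torch.Tensor:
    n = per_example_loss.shape[0]
    idx = torch.multinomial(torch.full((n,), 1.0 / n), n, replacement=True)
    weights = torch.bincount(idx, minlength=n).float()  # bootstrap counts
    return (weights * per_example_loss).mean()
```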

Active Learning • BIG-bench Machine Learning • +3

torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models

3 code implementations • 21 Apr 2020 • Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim

We design and implement a ready-to-use library in PyTorch for performing micro-batch pipeline parallelism with checkpointing proposed by GPipe (Huang et al., 2019).
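Basic usage follows the library's documented pattern: wrap an `nn.Sequential`, assign a per-partition `balance`, and choose the micro-batch count via `chunks` (the toy model below is an assumption, and two CUDA devices are assumed available):

```python
import torch
import torch.nn as nn
from torchgpipe import GPipe

# Toy 3-layer model; balance=[2, 1] puts two layers on the first GPU and
# one on the second. chunks=8 splits each batch into 8 micro-batches.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
model = GPipe(model, balance=[2, 1], chunks=8)

x = torch.rand(128, 64).to(model.devices[0])  # input on the first partition
output = model(x)                             # pipelined, checkpointed forward
```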

Scalable Neural Architecture Search for 3D Medical Image Segmentation

no code implementations • 13 Jun 2019 • Sungwoong Kim, Ildoo Kim, Sungbin Lim, Woonhyuk Baek, Chiheon Kim, Hyungjoo Cho, Boogeon Yoon, Taesup Kim

In this paper, a neural architecture search (NAS) framework is proposed for 3D medical image segmentation to automatically optimize a neural architecture from a large design space.

Image Segmentation • Medical Image Segmentation • +3

Fast AutoAugment

11 code implementations • NeurIPS 2019 • Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, Sungwoong Kim

Data augmentation is an essential technique for improving the generalization ability of deep learning models.
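The policies Fast AutoAugment searches for follow the AutoAugment format of (operation, probability, magnitude) triples; a hedged sketch of applying one sub-policy (the operation set and magnitudes here are assumptions):

```python
# Hedged sketch of an AutoAugment-style sub-policy: each op fires with
# its own probability at a fixed magnitude. `img` is assumed to be a
# PIL image or uint8 tensor (posterize requires integer pixel values).
import random
import torchvision.transforms.functional as TF

sub_policy = [("rotate", 0.7, 15), ("posterize", 0.4, 4)]  # (op, prob, magnitude)

def apply_sub_policy(img, sub_policy):
    for op, prob, mag in sub_policy:
        if random.random() < prob:
            if op == "rotate":
                img = TF.rotate(img, mag)     # rotate by `mag` degrees
            elif op == "posterize":
                img = TF.posterize(img, mag)  # keep `mag` bits per channel
    return img
```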

Image Augmentation • Image Classification

Tsallis Reinforcement Learning: A Unified Framework for Maximum Entropy Reinforcement Learning

no code implementations • 31 Jan 2019 • Kyungjae Lee, Sungyub Kim, Sungbin Lim, Sungjoon Choi, Songhwai Oh

By controlling the entropic index, we can generate various types of entropy, including the Shannon-Gibbs (SG) entropy, and each choice of entropy induces a different class of optimal policies in Tsallis MDPs.
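Concretely, the Tsallis entropy with entropic index q is S_q(p) = (1 - sum_i p_i^q) / (q - 1), which recovers the Shannon-Gibbs entropy as q -> 1; a small numeric illustration:

```python
# Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1); the q -> 1 limit
# is the Shannon-Gibbs entropy the abstract calls "SG entropy".
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum())  # q -> 1 limit: Shannon-Gibbs
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 1.0), tsallis_entropy(p, 2.0))  # ~1.0397, 0.625
```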

reinforcement-learning • Reinforcement Learning (RL)

ChoiceNet: Robust Learning by Revealing Output Correlations

no code implementations • 27 Sep 2018 • Sungjoon Choi, Sanghoon Hong, Kyungjae Lee, Sungbin Lim

To this end, we present a novel framework referred to here as ChoiceNet that can robustly infer the target distribution in the presence of inconsistent data.

regression

Task Agnostic Robust Learning on Corrupt Outputs by Correlation-Guided Mixture Density Networks

1 code implementation • CVPR 2020 • Sungjoon Choi, Sanghoon Hong, Kyungjae Lee, Sungbin Lim

In this paper, we focus on weakly supervised learning with noisy training data for both classification and regression problems. We assume that the training outputs are collected from a mixture of a target distribution and correlated noise distributions. Our proposed method simultaneously estimates the target distribution and the quality of each data point, defined as the correlation between the target and data-generating distributions. The cornerstone of the proposed method is a Cholesky Block that enables modeling dependencies among mixture distributions in a differentiable manner, where we maintain a distribution over the network weights. We first provide illustrative examples in both regression and classification tasks to show the effectiveness of the proposed method. Then, the proposed method is extensively evaluated in a number of experiments, where we show that it consistently achieves comparable or superior performance to existing baseline methods in handling noisy data.
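The Cholesky Block rests on a standard device: a Cholesky factor turns independent Gaussians into correlated ones differentiably. A minimal sketch of that device (not the paper's full block):

```python
# Hedged sketch: sample a pair with correlation rho via the Cholesky
# factor of [[1, rho], [rho, 1]]; rho stays differentiable end to end.
import torch

def correlated_pair(rho: torch.Tensor, n: int = 1000):
    eps = torch.randn(2, n)                      # independent Gaussians
    row0 = torch.stack([torch.ones(()), torch.zeros(())])
    row1 = torch.stack([rho, torch.sqrt(1.0 - rho ** 2)])
    L = torch.stack([row0, row1])                # lower-triangular factor
    z = L @ eps                                  # z[0], z[1] correlate with rho
    return z[0], z[1]

a, b = correlated_pair(torch.tensor(0.8))
```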

Autonomous Driving • General Classification • +2

Neural Stain-Style Transfer Learning using GAN for Histopathological Images

1 code implementation • 23 Oct 2017 • Hyungjoo Cho, Sungbin Lim, Gunho Choi, Hyun-seok Min

Consequently, our model not only transfers initial stain styles to the desired one but also prevents degradation of the tumor classifier on transferred images.
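One way to read that objective as a loss (a hedged sketch; `gen`, `disc`, and `clf` are assumed modules, not the paper's exact formulation): adversarial realism plus a term keeping the tumor classifier's outputs unchanged.

```python
# Hedged sketch: generator loss = adversarial term + classifier-consistency
# term, so style transfer does not degrade the downstream tumor classifier.
import torch
import torch.nn.functional as F

def generator_loss(gen, disc, clf, x_src, lam=1.0):
    x_fake = gen(x_src)
    logits = disc(x_fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    consist = F.kl_div(F.log_softmax(clf(x_fake), dim=1),
                       F.softmax(clf(x_src), dim=1),
                       reduction="batchmean")   # keep classifier predictions
    return adv + lam * consist
```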

General Classification • Style Transfer • +1

Uncertainty-Aware Learning from Demonstration using Mixture Density Networks with Sampling-Free Variance Modeling

1 code implementation • 3 Sep 2017 • Sungjoon Choi, Kyungjae Lee, Sungbin Lim, Songhwai Oh

The proposed uncertainty-aware learning-from-demonstration method outperforms the compared methods in terms of safety on a complex real-world driving dataset.
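The "sampling-free variance modeling" in the title can be illustrated with the law of total variance for a Gaussian mixture, which needs no Monte Carlo samples (a hedged sketch of the decomposition, not the paper's full method):

```python
# Variance of a Gaussian mixture in closed form:
# Var[y] = sum_k pi_k * sigma_k^2            (expected within-component variance)
#        + sum_k pi_k * (mu_k - mean)^2      (variance of component means)
import numpy as np

def mixture_variance(pi, mu, sigma):
    pi, mu, sigma = map(np.asarray, (pi, mu, sigma))
    mean = (pi * mu).sum()
    aleatoric = (pi * sigma ** 2).sum()
    epistemic = (pi * (mu - mean) ** 2).sum()
    return aleatoric + epistemic

print(mixture_variance([0.6, 0.4], [0.0, 2.0], [0.5, 0.5]))  # 0.25 + 0.96 = 1.21
```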

Autonomous Driving
