Search Results for author: Saehoon Kim

Found 20 papers, 12 papers with code

Variational Distribution Learning for Unsupervised Text-to-Image Generation

no code implementations · CVPR 2023 · Minsoo Kang, Doyup Lee, Jiseob Kim, Saehoon Kim, Bohyung Han

We propose a text-to-image generation algorithm based on deep neural networks for the setting where text captions for images are unavailable during training.

Tasks: Image Captioning, Text-to-Image Generation (+2 more)

Generalizable Implicit Neural Representations via Instance Pattern Composers

1 code implementation · CVPR 2023 · Chiheon Kim, Doyup Lee, Saehoon Kim, Minsu Cho, Wook-Shin Han

Despite recent advances in implicit neural representations (INRs), it remains challenging for a coordinate-based multi-layer perceptron (MLP) of INRs to learn a common representation across data instances and generalize it for unseen instances.

Tasks: Meta-Learning
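
For context, here is a minimal sketch of the kind of coordinate-based MLP (INR) the paper builds on: an MLP maps pixel coordinates to RGB values and is normally fit per image instance, which is why generalizing across instances is hard. All names and sizes below are illustrative assumptions, not the authors' code.

    import numpy as np

    def coordinate_mlp(coords, weights):
        # coords: (N, 2) array of (x, y) positions in [-1, 1].
        # weights: list of (W, b) pairs; ReLU on every layer but the last.
        h = coords
        for W, b in weights[:-1]:
            h = np.maximum(h @ W + b, 0.0)
        W, b = weights[-1]
        return h @ W + b  # (N, 3) predicted RGB values

    rng = np.random.default_rng(0)
    dims = [2, 64, 64, 3]  # one small MLP, normally fit to a single image
    weights = [(rng.normal(0.0, 1.0 / np.sqrt(m), (m, n)), np.zeros(n))
               for m, n in zip(dims[:-1], dims[1:])]
    grid = np.linspace(-1.0, 1.0, 8)
    xy = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
    rgb = coordinate_mlp(xy, weights)  # forward pass over a coordinate grid

The paper's approach, roughly, is to share most of these weights across instances and modulate only a small instance-specific subset (the "instance pattern composers"), rather than fitting a whole MLP per instance.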

Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer

no code implementations · 9 Jun 2022 · Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, Wook-Shin Han

After code stacks in the sequence are randomly masked, Contextual RQ-Transformer is trained to infill the masked code stacks based on the unmasked contexts of the image.

Tasks: Conditional Image Generation, Text-to-Image Generation
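
A hedged sketch of the masking scheme described above, assuming an image is represented as a T-by-D grid of discrete code stacks (all shapes and names are illustrative; this is not the authors' implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    T, D, V = 16, 4, 1024        # positions, stack depth, codebook size
    MASK = V                     # reserve one extra id as the [MASK] token

    codes = rng.integers(0, V, size=(T, D))  # quantized code stacks of an image
    masked = rng.random(T) < 0.5             # mask whole stacks, not single codes

    inputs = codes.copy()
    inputs[masked] = MASK        # the model sees MASK at masked positions
    targets = codes[masked]      # and is trained to infill the original stacks
    # from the unmasked context (the rows of `inputs` where masked is False).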

Autoregressive Image Generation using Residual Quantization

3 code implementations · CVPR 2022 · Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, Wook-Shin Han

However, we postulate that previous VQ approaches cannot simultaneously shorten the code sequence and generate high-fidelity images, given the rate-distortion trade-off.

Tasks: Conditional Image Generation, Quantization (+1 more)
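
To illustrate the residual quantization (RQ) behind this paper: each vector is quantized in several rounds, with every round encoding the residual left by the previous one, so a short stack of codes can reach low distortion. A minimal NumPy sketch under assumed shapes (not the authors' code):

    import numpy as np

    def residual_quantize(x, codebooks):
        # Quantize x into one code per codebook level; each level
        # quantizes the residual left by the previous levels.
        codes, residual = [], x.copy()
        for C in codebooks:                        # C: (K, dim) codebook
            idx = np.argmin(((residual - C) ** 2).sum(1))
            codes.append(idx)
            residual = residual - C[idx]           # pass the remainder down
        return codes, x - residual                 # codes and reconstruction

    rng = np.random.default_rng(0)
    dim, K, depth = 8, 256, 4
    codebooks = [rng.normal(size=(K, dim)) for _ in range(depth)]
    codes, recon = residual_quantize(rng.normal(size=dim), codebooks)
    # `depth` short codes approximate x; deeper stacks -> lower distortion.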

Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity

1 code implementation · ICLR 2022 · Byungseok Roh, Jaewoong Shin, Wuhyun Shin, Saehoon Kim

Deformable DETR uses multiscale features to improve performance; however, the number of encoder tokens increases by 20x compared to DETR, and the computational cost of the encoder attention remains a bottleneck.

Tasks: Computational Efficiency, Object Detection (+1 more)
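
A minimal sketch of the learnable-sparsity idea: a scoring network predicts the saliency of each encoder token, and only the top fraction is refined by encoder attention. The linear scoring head below is a stand-in assumption; Sparse DETR learns its saliency criterion from decoder cross-attention maps.

    import numpy as np

    rng = np.random.default_rng(0)
    N, dim = 1000, 256                   # encoder tokens, feature width
    tokens = rng.normal(size=(N, dim))

    w = rng.normal(size=dim)             # stand-in for a learned scoring network
    scores = tokens @ w                  # per-token saliency
    rho = 0.1                            # keep ratio: refine only 10% of tokens
    keep = np.argsort(scores)[-int(rho * N):]
    sparse_tokens = tokens[keep]         # attention now costs O((rho*N)^2)
    # instead of O(N^2); unselected tokens pass through unchanged.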

Hybrid Generative-Contrastive Representation Learning

1 code implementation · 11 Jun 2021 · Saehoon Kim, Sungwoong Kim, Juho Lee

Generative pre-training, on the other hand, directly estimates the data distribution, so its representations tend to be robust but not optimal for discriminative tasks.

Tasks: Contrastive Learning, Representation Learning
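
The general hybrid form can be sketched as a weighted sum of a contrastive term and a generative term (here, a simple reconstruction loss). This is an assumed simplification for illustration only; the paper's actual objective is constructed differently.

    import numpy as np

    def info_nce(z1, z2, tau=0.1):
        # Contrastive loss: matching rows of z1/z2 are the positive pairs.
        z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
        z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
        logits = z1 @ z2.T / tau
        log_probs = logits - np.log(np.exp(logits).sum(1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    rng = np.random.default_rng(0)
    z1, z2 = rng.normal(size=(32, 16)), rng.normal(size=(32, 16))
    x, x_recon = rng.normal(size=(32, 64)), rng.normal(size=(32, 64))
    recon_loss = np.mean((x - x_recon) ** 2)   # stand-in generative term
    lam = 0.5                                   # assumed trade-off weight
    hybrid_loss = info_nce(z1, z2) + lam * recon_loss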

Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks

1 code implementation · ICLR 2020 · Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang

While tasks in realistic settings can come with varying numbers of instances and classes, existing meta-learning approaches for few-shot classification assume that the number of instances per task and per class is fixed.

Tasks: Bayesian Inference, Meta-Learning (+1 more)

Bayesian Optimization with Approximate Set Kernels

no code implementations · 23 May 2019 · Jungtaek Kim, Michael McCourt, Tackgeun You, Saehoon Kim, Seungjin Choi

We propose a practical Bayesian optimization method over sets to minimize a black-box function that takes a set as a single input.

Tasks: Bayesian Optimization
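
A minimal sketch of a set kernel with a subsampling approximation, which is the flavor the title refers to: the kernel between two sets averages pairwise RBF values, computed over random subsets of size m to bound the cost. Names, gamma, and m are illustrative assumptions.

    import numpy as np

    def rbf(a, b, gamma=1.0):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    def set_kernel(X, Y, m=8, rng=None):
        # Kernel between two sets: average pairwise RBF over random
        # subsamples of size m, keeping the cost at O(m^2).
        if rng is None:
            rng = np.random.default_rng(0)
        Xs = X[rng.choice(len(X), size=min(m, len(X)), replace=False)]
        Ys = Y[rng.choice(len(Y), size=min(m, len(Y)), replace=False)]
        return rbf(Xs, Ys).mean()

    rng = np.random.default_rng(0)
    A, B = rng.normal(size=(50, 3)), rng.normal(size=(60, 3))
    k = set_kernel(A, B, m=8, rng=rng)  # plugs into a GP surrogate for BO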

MxML: Mixture of Meta-Learners for Few-Shot Classification

no code implementations · 11 Apr 2019 · Minseop Park, Jungtaek Kim, Saehoon Kim, Yanbin Liu, Seungjin Choi

A meta-model is trained on a distribution of similar tasks such that it learns an algorithm that can quickly adapt to a novel task with only a handful of labeled examples.

Tasks: Classification, General Classification (+1 more)

Scalable and Order-robust Continual Learning with Additive Parameter Decomposition

1 code implementation · ICLR 2020 · Jaehong Yoon, Saehoon Kim, Eunho Yang, Sung Ju Hwang

First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks.

Tasks: Continual Learning, Fairness (+1 more)

Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout

no code implementations · 27 Sep 2018 · Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without a loss in accuracy.
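
To make the contrast concrete, here is a sketch of dropout whose keep probabilities depend on the input activations, which is the property the paper's dependent construction provides. The sigmoid gate below is an illustrative stand-in; the paper instead places a beta-Bernoulli prior on the masks and learns them variationally.

    import numpy as np

    rng = np.random.default_rng(0)

    def dependent_dropout(h, W_gate, b_gate):
        # Keep probabilities are a function of the activations themselves,
        # so different inputs can prune different neurons.
        p_keep = 1.0 / (1.0 + np.exp(-(h @ W_gate + b_gate)))  # sigmoid gate
        mask = (rng.random(p_keep.shape) < p_keep).astype(h.dtype)
        return h * mask / np.clip(p_keep, 1e-6, None)          # rescaled output

    h = rng.normal(size=(4, 32))                 # a batch of activations
    W_gate, b_gate = rng.normal(size=(32, 32)) * 0.1, np.zeros(32)
    out = dependent_dropout(h, W_gate, b_gate)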

Deep Mixed Effect Model using Gaussian Processes: A Personalized and Reliable Prediction for Healthcare

2 code implementations · 5 Jun 2018 · Ingyo Chung, Saehoon Kim, Juho Lee, Kwang Joon Kim, Sung Ju Hwang, Eunho Yang

We present a personalized and reliable prediction model for healthcare, which can provide individually tailored medical services such as diagnosis, disease treatment, and prevention.

Tasks: Gaussian Processes, Time Series (+1 more)

Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout

1 code implementation · 28 May 2018 · Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without a loss in accuracy.

DropMax: Adaptive Variational Softmax

4 code implementations · NeurIPS 2018 · Hae Beom Lee, Juho Lee, Saehoon Kim, Eunho Yang, Sung Ju Hwang

Moreover, the learning of dropout rates for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes.

Tasks: Classification, General Classification (+1 more)
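
A sketch of the adaptive softmax idea: non-target class logits are randomly dropped with learned, instance-wise retain probabilities, so each update focuses the classifier on the classes it still confuses with the target. The fixed retain_p below is a stand-in for the learned rates.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropmax_probs(logits, retain_p, target):
        # Softmax over a random subset of classes: each non-target logit is
        # kept with its retain probability; the target class is always kept.
        keep = (rng.random(logits.shape) < retain_p).astype(float)
        keep[np.arange(len(logits)), target] = 1.0   # never drop the target
        exp = np.exp(logits - logits.max(1, keepdims=True)) * keep
        return exp / exp.sum(1, keepdims=True)

    logits = rng.normal(size=(2, 5))
    retain_p = np.full((2, 5), 0.5)   # stand-in for instance-wise learned rates
    target = np.array([1, 3])
    probs = dropmax_probs(logits, retain_p, target)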

Learning to Warm-Start Bayesian Hyperparameter Optimization

no code implementations · 17 Oct 2017 · Jungtaek Kim, Saehoon Kim, Seungjin Choi

A simple alternative to manual search is random/grid search over a space of hyperparameters, which still requires extensive evaluations of validation error to find the best configuration.

Tasks: Bayesian Optimization, Hyperparameter Optimization (+1 more)

Bilinear Random Projections for Locality-Sensitive Binary Codes

no code implementations · CVPR 2015 · Saehoon Kim, Seungjin Choi

In this paper, we analyze a bilinear random projection method in which feature matrices are transformed into binary codes by two smaller random projection matrices.

Tasks: Quantization
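
A minimal sketch of the bilinear scheme analyzed in the paper, with assumed sizes: instead of vectorizing a d1 x d2 feature matrix and applying one huge (d1*d2) x (k1*k2) projection, two small projections are applied on each side before taking signs.

    import numpy as np

    rng = np.random.default_rng(0)
    d1, d2 = 32, 24            # feature matrix size, e.g. a d1 x d2 descriptor
    k1, k2 = 8, 6              # code length is k1 * k2 bits

    X = rng.normal(size=(d1, d2))    # feature matrix to be hashed
    R1 = rng.normal(size=(d1, k1))   # two small projections replace one
    R2 = rng.normal(size=(d2, k2))   # large (d1*d2) x (k1*k2) matrix

    code = np.sign(R1.T @ X @ R2).reshape(-1)   # k1*k2-bit binary code
    # Parameter count drops from d1*d2*k1*k2 to d1*k1 + d2*k2.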
