Search Results for author: Sungjoo Yoo

Found 20 papers, 8 papers with code

Breaking MLPerf Training: A Case Study on Optimizing BERT

no code implementations 4 Feb 2024 YongDeok Kim, Jaehyung Ahn, Myeongwoo Kim, Changin Choi, Heejae Kim, Narankhuu Tuvshinjargal, Seungwon Lee, Yanzi Zhang, Yuan Pei, Xiongzhan Linghu, Jingkun Ma, Lin Chen, Yuehua Dai, Sungjoo Yoo

Speeding up large-scale distributed training is challenging in that it requires improving various components of training, including load balancing, communication, and optimizers.

Hyperparameter Optimization

MetaMix: Meta-state Precision Searcher for Mixed-precision Activation Quantization

no code implementations 12 Nov 2023 Han-Byul Kim, Joo Hyung Lee, Sungjoo Yoo, Hong-Seok Kim

Mixed-precision quantization of efficient networks often suffers from activation instability encountered in the exploration of bit selections.

Quantization

MFOS: Model-Free & One-Shot Object Pose Estimation

no code implementations 3 Oct 2023 Jongmin Lee, Yohann Cabon, Romain Brégier, Sungjoo Yoo, Jerome Revaud

Existing learning-based methods for object pose estimation in RGB images are mostly model-specific or category-based.

Object Pose Estimation

AnyFlow: Arbitrary Scale Optical Flow with Implicit Neural Representation

no code implementations CVPR 2023 Hyunyoung Jung, Zhuo Hui, Lei Luo, Haitao Yang, Feng Liu, Sungjoo Yoo, Rakesh Ranjan, Denis Demandolx

To apply optical flow in practice, it is often necessary to resize the input to smaller dimensions in order to reduce computational costs.

Optical Flow Estimation

Memory Efficient Patch-based Training for INR-based GANs

no code implementations 4 Jul 2022 Namwoo Lee, Hyunsu Kim, Gayoung Lee, Sungjoo Yoo, Yunjey Choi

However, training existing approaches requires a heavy computational cost proportional to the image resolution, since they compute an MLP operation for every (x, y) coordinate.

Image Outpainting Super-Resolution
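The per-coordinate MLP cost mentioned in the snippet above can be sketched as follows; the network shapes and variable names here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def inr_forward(coords, w1, b1, w2, b2):
    """Tiny coordinate-based MLP: maps each (x, y) to an RGB value."""
    h = np.maximum(coords @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2

# Illustrative weights for a 2 -> 64 -> 3 MLP.
w1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
w2, b2 = rng.normal(size=(64, 3)), np.zeros(3)

H = W = 64  # rendering the full image costs H * W MLP evaluations
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                     indexing="ij")
coords = np.stack([xs, ys], axis=-1).reshape(-1, 2)
image = inr_forward(coords, w1, b1, w2, b2).reshape(H, W, 3)

# Patch-based training instead evaluates only a small crop's coordinates,
# so the cost scales with the patch size rather than the full resolution.
patch = inr_forward(coords[: 8 * 8], w1, b1, w2, b2)  # one 8x8 patch
```

This makes the resolution dependence concrete: the full image requires 4096 forward evaluations, while the patch requires only 64.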

NIPQ: Noise proxy-based Integrated Pseudo-Quantization

1 code implementation CVPR 2023 JunCheol Shin, Junhyuk So, Sein Park, Seungyeop Kang, Sungjoo Yoo, Eunhyeok Park

Recently, pseudo-quantization training has been proposed as an alternative approach that updates the learnable parameters using pseudo-quantization noise instead of the straight-through estimator (STE).

Quantization
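The contrast drawn in the snippet above, hard rounding with an STE versus additive pseudo-quantization noise, can be sketched as follows; the function names and the 4-bit step size are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def ste_quantize(w, step):
    # Conventional QAT forward: hard rounding onto the grid; the
    # gradient is passed straight through the non-differentiable round().
    return np.round(w / step) * step

def pseudo_quantize(w, step):
    # Pseudo-quantization training: replace rounding with additive
    # uniform noise of the same magnitude, keeping the forward pass
    # differentiable in w.
    noise = rng.uniform(-0.5, 0.5, size=w.shape) * step
    return w + noise

w = rng.normal(size=1000)
step = 2.0 / (2 ** 4 - 1)   # 4-bit uniform grid on roughly [-1, 1]
wq = ste_quantize(w, step)   # values snapped to grid points
wn = pseudo_quantize(w, step)  # values perturbed within half a step
```

Both outputs stay within half a quantization step of the grid, but only the noise-based forward avoids the rounding non-differentiability.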

On the Overlooked Significance of Underutilized Contextual Features in Recent News Recommendation Models

no code implementations 29 Dec 2021 Sungmin Cho, Hongjun Lim, Keunchan Park, Sungjoo Yoo, Eunhyeok Park

Personalized news recommendation aims to provide attractive articles for readers by predicting their likelihood of clicking on a certain article.

News Recommendation

Exploiting Spatial Dimensions of Latent in GAN for Real-time Image Editing

1 code implementation CVPR 2021 Hyunsu Kim, Yunjey Choi, Junho Kim, Sungjoo Yoo, Youngjung Uh

Although manipulating the latent vectors controls the synthesized outputs, editing real images with GANs suffers from i) time-consuming optimization for projecting real images to the latent vectors, or ii) inaccurate embedding through an encoder.

Image Manipulation

A StyleMap-Based Generator for Real-Time Image Projection and Local Editing

no code implementations 1 Jan 2021 Hyunsu Kim, Yunjey Choi, Junho Kim, Sungjoo Yoo, Youngjung Uh

State-of-the-art GAN-based methods for editing real images suffer from time-consuming operations in projecting real images to latent vectors.

Image Manipulation

MEANTIME: Mixture of Attention Mechanisms with Multi-temporal Embeddings for Sequential Recommendation

1 code implementation 19 Aug 2020 Sung Min Cho, Eunhyeok Park, Sungjoo Yoo

Following the convention from language processing, most of these models rely on a simple positional embedding to exploit the sequential nature of the user's history.

Sequential Recommendation

PROFIT: A Novel Training Method for sub-4-bit MobileNet Models

1 code implementation ECCV 2020 Eunhyeok Park, Sungjoo Yoo

In the ablation study of the 3-bit quantization of MobileNet-v3, our proposed method outperforms the state-of-the-art method by a large margin of 12.86% top-1 accuracy.

Quantization

Tag2Pix: Line Art Colorization Using Text Tag With SECat and Changing Loss

2 code implementations ICCV 2019 Hyunsu Kim, Ho Young Jhoo, Eunhyeok Park, Sungjoo Yoo

A GAN approach to line art colorization, called Tag2Pix, is proposed, which takes a grayscale line art and color tag information as input and produces a high-quality colored image.

Line Art Colorization

Precision Highway for Ultra Low-Precision Quantization

no code implementations ICLR 2019 Eunhyeok Park, Dongyoung Kim, Sungjoo Yoo, Peter Vajda

We also report that the proposed method significantly outperforms the existing method in the 2-bit quantization of an LSTM for language modeling.

Language Modelling Quantization

Value-aware Quantization for Training and Inference of Neural Networks

no code implementations ECCV 2018 Eunhyeok Park, Sungjoo Yoo, Peter Vajda

We propose a novel value-aware quantization which applies aggressively reduced precision to the majority of data while separately handling a small amount of large data in high precision, thereby reducing total quantization errors under very low precision.

Quantization
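The idea described in the snippet above, low precision for the bulk of the values and high precision for a few large-magnitude outliers, can be sketched as follows; the function name, the 3-bit setting, and the 1% outlier ratio are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def value_aware_quantize(x, bits=3, outlier_ratio=0.01):
    """Keep the few largest-magnitude values exact; uniformly
    quantize the remaining majority at low precision."""
    flat = x.ravel().astype(np.float64)
    k = max(1, int(flat.size * outlier_ratio))
    # Indices of the k largest-magnitude values (kept in high precision).
    outliers = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[outliers] = True
    # Uniform symmetric quantization of the small-value majority; the
    # scale now ignores the outliers, so the grid is much finer.
    small = flat[~mask]
    scale = max(np.abs(small).max(), 1e-12) / (2 ** (bits - 1) - 1)
    out = flat.copy()
    out[~mask] = np.round(small / scale) * scale
    return out.reshape(x.shape)
```

Excluding the outliers from the quantization range is what keeps the step size, and hence the total quantization error, small for the majority of the data.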

Weighted-Entropy-Based Quantization for Deep Neural Networks

no code implementations CVPR 2017 Eunhyeok Park, Junwhan Ahn, Sungjoo Yoo

Quantization is considered one of the most effective methods to optimize the inference cost of neural network models for their deployment to mobile and embedded systems, which have tight resource constraints.

Image Classification Language Modelling +3

Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications

7 code implementations 20 Nov 2015 Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin

Although the latest high-end smartphone has a powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging.
