Search Results for author: Taesu Kim

Found 15 papers, 6 papers with code

QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference

1 code implementation • 15 Feb 2024 • Taesu Kim, Jongho Lee, Daehyun Ahn, Sarang Kim, Jiwoong Choi, Minkyu Kim, HyungJun Kim

We introduce QUICK, a group of novel optimized CUDA kernels for the efficient inference of quantized Large Language Models (LLMs).

Quantization
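
The title points to two ideas: reordering (interleaving) quantized weights offline and keeping the dequantization path free of shared-memory bank conflicts. The sketch below only illustrates the first, generic idea in numpy, packing 4-bit weights and applying a one-time column reordering before inference; the layout, group size, and function names are assumptions for illustration, not the paper's CUDA kernel.

```python
# Toy illustration (not the QUICK kernel): offline interleaving of 4-bit
# quantized weights so that dequantization can feed a GEMM in the order it
# consumes operands, avoiding an extra reshuffle at inference time.
import numpy as np

def pack_int4(w_q: np.ndarray) -> np.ndarray:
    """Pack pairs of 4-bit values (0..15) along the last axis into uint8."""
    assert w_q.shape[-1] % 2 == 0
    lo, hi = w_q[..., 0::2], w_q[..., 1::2]
    return (lo | (hi << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4."""
    lo = packed & 0x0F
    hi = packed >> 4
    out = np.empty(packed.shape[:-1] + (packed.shape[-1] * 2,), dtype=np.uint8)
    out[..., 0::2], out[..., 1::2] = lo, hi
    return out

def interleave_columns(packed: np.ndarray, group: int = 4) -> np.ndarray:
    """Hypothetical offline reordering: split packed columns into groups and
    interleave them across rows (illustrative layout only)."""
    cols = packed.shape[1]
    assert cols % group == 0
    return packed.reshape(packed.shape[0], cols // group, group) \
                 .transpose(1, 0, 2).reshape(-1, group)

rng = np.random.default_rng(0)
w_q = rng.integers(0, 16, size=(4, 16), dtype=np.uint8)  # fake 4-bit weights
packed = pack_int4(w_q)
reordered = interleave_columns(packed)            # done once, offline
assert np.array_equal(unpack_int4(packed), w_q)   # packing round trip is lossless
```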

SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks

1 code implementation • 14 Feb 2024 • Jiwon Song, Kyungseok Oh, Taesu Kim, HyungJun Kim, Yulhwa Kim, Jae-Joon Kim

In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks.
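
A minimal sketch of the general idea of scoring transformer blocks by how little they change their input hidden states on calibration data and dropping the lowest-impact ones. The cosine-similarity criterion and one-shot selection below are assumptions for illustration, not SLEB's actual redundancy metric or elimination schedule.

```python
# Toy sketch (assumption, not SLEB's exact criterion): rank transformer blocks
# by how similar their output hidden states are to their inputs, then drop the
# most redundant ones.
import numpy as np

def block_redundancy(h_in: np.ndarray, h_out: np.ndarray) -> float:
    """Mean cosine similarity between a block's input and output states;
    values near 1.0 suggest the block barely transforms its input."""
    num = np.sum(h_in * h_out, axis=-1)
    den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1) + 1e-8
    return float(np.mean(num / den))

def select_blocks_to_remove(hidden_states: list, k: int) -> list:
    """hidden_states[i] holds the states entering block i; the last entry holds
    the states leaving the final block. Returns indices of k blocks to drop."""
    scores = [block_redundancy(hidden_states[i], hidden_states[i + 1])
              for i in range(len(hidden_states) - 1)]
    return sorted(np.argsort(scores)[-k:].tolist())

# Example with fake calibration activations for a 6-block model.
rng = np.random.default_rng(0)
states = [rng.standard_normal((32, 64))]
for _ in range(6):
    states.append(states[-1] + 0.1 * rng.standard_normal((32, 64)))
print(select_blocks_to_remove(states, k=2))
```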

Squeezing Large-Scale Diffusion Models for Mobile

no code implementations • 3 Jul 2023 • Jiwoong Choi, Minkyu Kim, Daehyun Ahn, Taesu Kim, Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Jae-Joon Kim, HyungJun Kim

The emergence of diffusion models has greatly broadened the scope of high-fidelity image synthesis, resulting in notable advancements in both practical implementation and academic research.

Image Generation

OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models

2 code implementations • 4 Jun 2023 • Changhun Lee, Jungyu Jin, Taesu Kim, HyungJun Kim, Eunhyeok Park

Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment.

Quantization
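
A small numpy sketch of the mixed-precision pattern the title suggests: quantize most weight columns to low bit-width but keep the columns most sensitive to activation outliers in full precision. The sensitivity proxy, bit-width, and column-wise scheme here are assumptions for illustration, not OWQ's actual algorithm.

```python
# Toy sketch (assumption, not OWQ's algorithm): quantize most weight columns
# to 4 bits but keep the columns aligned with activation outliers in full precision.
import numpy as np

def quantize_4bit(w: np.ndarray) -> np.ndarray:
    """Simple symmetric per-column 4-bit quantization (dequantized view)."""
    scale = np.abs(w).max(axis=0, keepdims=True) / 7.0 + 1e-12
    return np.clip(np.round(w / scale), -8, 7) * scale

def outlier_aware_quantize(w: np.ndarray, act: np.ndarray, n_keep: int):
    """Keep the n_keep columns with the largest average activation magnitude
    (a crude sensitivity proxy) in full precision; quantize the rest."""
    sensitivity = np.abs(act).mean(axis=0)    # one score per input column
    keep = np.argsort(sensitivity)[-n_keep:]
    w_q = quantize_4bit(w)
    w_q[:, keep] = w[:, keep]                 # restore the sensitive columns
    return w_q, keep

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 128)).astype(np.float32)
act = rng.standard_normal((256, 128)).astype(np.float32)
act[:, 5] *= 20.0                             # inject an activation outlier
w_mixed, kept = outlier_aware_quantize(w, act, n_keep=4)
assert 5 in kept                              # the outlier column stays full precision
```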

GP22: A Car Styling Dataset for Automotive Designers

no code implementations • 5 Jul 2022 • Gyunpyo Lee, Taesu Kim, Hyeon-Jeong Suk

We release GP22, a dataset composed of car styling features defined by automotive designers.

Autonomous Driving

EdiTTS: Score-based Editing for Controllable Text-to-Speech

1 code implementation • 6 Oct 2021 • Jaesung Tae, Hyeongju Kim, Taesu Kim

We present EdiTTS, an off-the-shelf speech editing methodology based on score-based generative modeling for text-to-speech synthesis.

Speech Synthesis • Text-To-Speech Synthesis

Large-scale Speaker Retrieval on Random Speaker Variability Subspace

no code implementations • 27 Nov 2018 • Suwon Shon, Young-Gun Lee, Taesu Kim

In this paper, we propose Random Speaker-variability Subspace (RSS) projection to map data into LSH-based hash tables.

Retrieval
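
For context, a minimal sketch of random-projection LSH, the general family of technique the abstract refers to: embeddings whose signs agree under a set of random hyperplanes land in the same hash bucket, so retrieval only scans that bucket. The plain Gaussian projection below is an assumption for illustration, not the speaker-variability subspace the paper proposes.

```python
# Minimal random-projection LSH index (illustrative; not the paper's RSS projection).
import numpy as np
from collections import defaultdict

class RandomProjectionLSH:
    def __init__(self, dim: int, n_bits: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))  # random hyperplanes
        self.table = defaultdict(list)                    # hash -> stored indices
        self.vectors = []

    def _hash(self, x: np.ndarray) -> int:
        bits = (self.planes @ x) > 0                      # sign pattern = bucket key
        return int(bits.dot(1 << np.arange(bits.size)))

    def add(self, x: np.ndarray) -> None:
        self.table[self._hash(x)].append(len(self.vectors))
        self.vectors.append(x)

    def query(self, x: np.ndarray) -> list:
        """Return indices of stored embeddings that share the probe's bucket."""
        return self.table.get(self._hash(x), [])

rng = np.random.default_rng(1)
index = RandomProjectionLSH(dim=128)
embeddings = rng.standard_normal((1000, 128))
for e in embeddings:
    index.add(e)
probe = embeddings[42] + 0.01 * rng.standard_normal(128)  # slightly perturbed copy
print(42 in index.query(probe))   # likely True: near-duplicates share buckets
```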

Learning pronunciation from a foreign language in speech synthesis networks

2 code implementations • 23 Nov 2018 • Young-Gun Lee, Suwon Shon, Taesu Kim

First, we train the speech synthesis network bilingually in English and Korean and analyze how the network learns the relations of phoneme pronunciation between the languages.

Speech Synthesis

Robust and fine-grained prosody control of end-to-end speech synthesis

1 code implementation • 6 Nov 2018 • Young-Gun Lee, Taesu Kim

We propose prosody embeddings for emotional and expressive speech synthesis networks.

Expressive Speech Synthesis
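
A hedged sketch of the general "prosody embedding" conditioning pattern: a reference encoder summarizes a reference mel-spectrogram into an embedding that is concatenated with the text encoder states before decoding. Module sizes, names, and the global (rather than fine-grained) embedding here are illustrative assumptions, not the paper's architecture.

```python
# Generic prosody-embedding conditioning (illustrative; not the paper's model).
import torch
import torch.nn as nn

class ReferenceEncoder(nn.Module):
    def __init__(self, n_mels: int = 80, prosody_dim: int = 16):
        super().__init__()
        self.rnn = nn.GRU(n_mels, prosody_dim, batch_first=True)

    def forward(self, ref_mel):          # (B, T_ref, n_mels)
        _, h = self.rnn(ref_mel)
        return h[-1]                     # (B, prosody_dim) global prosody vector

text_dim, prosody_dim = 256, 16
ref_enc = ReferenceEncoder(prosody_dim=prosody_dim)
text_states = torch.randn(2, 50, text_dim)     # output of a text encoder
prosody = ref_enc(torch.randn(2, 120, 80))     # embedding from reference audio
conditioned = torch.cat(
    [text_states, prosody.unsqueeze(1).expand(-1, text_states.size(1), -1)], dim=-1
)
print(conditioned.shape)                        # (2, 50, 272), fed to the decoder
```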

Voice Imitating Text-to-Speech Neural Networks

no code implementations • journal 2018 • Young-Gun Lee, Taesu Kim, Soo-Young Lee

We propose a neural text-to-speech (TTS) model that can imitate a new speaker's voice using only a small amount of speech data.

Sentence

Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio

no code implementations • ICLR 2018 • Dongsoo Lee, Daehyun Ahn, Taesu Kim, Pierce I. Chuang, Jae-Joon Kim

Pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists.
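
The point about batch-size-one inference can be illustrated with a plain sparse matrix-vector product: once weights are pruned, a compressed-sparse-row (CSR) kernel performs only the surviving multiply-accumulates. The CSR layout below is a generic illustration; the paper's contribution, as the title indicates, is a Viterbi-based encoding that compresses the sparse index overhead itself, which this sketch does not implement.

```python
# Sparse matrix-vector product with a pruned weight matrix (illustrative only).
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
dense_w = rng.standard_normal((2048, 2048)).astype(np.float32)
mask = rng.random((2048, 2048)) < 0.1          # keep ~10% of weights (90% pruned)
pruned_w = dense_w * mask

sparse_w = csr_matrix(pruned_w)                 # values + column indices + row pointers
x = rng.standard_normal(2048).astype(np.float32)

y_dense = pruned_w @ x                          # dense kernel still touches every zero
y_sparse = sparse_w @ x                         # only the surviving ~10% of MACs run
assert np.allclose(y_dense, y_sparse, atol=1e-3)

# Note: CSR stores one column index per surviving weight; the paper instead
# constrains the sparsity pattern so the indices can be regenerated compactly.
```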

Deep Neural Network Optimized to Resistive Memory with Nonlinear Current-Voltage Characteristics

no code implementations • 30 Mar 2017 • Hyungjun Kim, Taesu Kim, Jinseok Kim, Jae-Joon Kim

Artificial Neural Network computation relies on intensive vector-matrix multiplications.
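
As a worked illustration of that vector-matrix multiplication and why resistive crossbars are attractive for it: mapping weights to device conductances lets Ohm's law and current summation compute the product in one analog step. The linear device model below is an idealized assumption; the paper's concern is precisely that real devices have nonlinear current-voltage characteristics, which the toy nonlinear model only gestures at.

```python
# Idealized resistive-crossbar vector-matrix multiplication (illustrative only).
import numpy as np

def crossbar_vmm(voltages: np.ndarray, conductances: np.ndarray) -> np.ndarray:
    """Ideal crossbar: column current = sum_i V_i * G_ij (Kirchhoff's current law)."""
    return voltages @ conductances

def nonlinear_crossbar_vmm(voltages, conductances, alpha: float = 0.1):
    """Toy nonlinear device model (assumption): current grows slightly
    super-linearly with applied voltage, so the result drifts from the ideal."""
    currents = conductances * (voltages[:, None] + alpha * voltages[:, None] ** 3)
    return currents.sum(axis=0)

rng = np.random.default_rng(0)
v = rng.uniform(0.0, 0.5, size=8)            # input activations as read voltages
g = rng.uniform(1e-6, 1e-4, size=(8, 4))     # weights mapped to conductances
print(crossbar_vmm(v, g))                    # ideal vector-matrix product
print(nonlinear_crossbar_vmm(v, g))          # deviates under nonlinear devices
```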
