Search Results for author: Minkyu Kim

Found 19 papers, 7 papers with code

Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity

no code implementations · 5 Mar 2024 · Hagyeong Lee, Minkyu Kim, Jun-Hyuk Kim, Seungeon Kim, Dokwan Oh, Jaeho Lee

Recent advances in text-guided image compression have shown great potential to enhance the perceptual quality of reconstructed images.

Image Compression

KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations

no code implementations · 3 Mar 2024 · Sunjun Kweon, Byungjin Choi, Minkyu Kim, Rae Woong Park, Edward Choi

We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering the years 2012 through 2023.

Multiple-choice · Multiple Choice Question Answering (MCQA)

QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference

1 code implementation · 15 Feb 2024 · Taesu Kim, Jongho Lee, Daehyun Ahn, Sarang Kim, Jiwoong Choi, Minkyu Kim, HyungJun Kim

We introduce QUICK, a group of novel optimized CUDA kernels for the efficient inference of quantized Large Language Models (LLMs).

Quantization
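The QUICK abstract above concerns CUDA kernels for inference over quantized weights. As background, a minimal NumPy sketch of generic group-wise symmetric weight quantization, the storage scheme such kernels typically dequantize on the fly, is shown below; the group size, bit width, and function names are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def quantize_groupwise(w, group_size=8, bits=4):
    """Symmetric per-group quantization: each group of weights shares one
    fp32 scale; values are stored as signed ints in [-(2^(b-1)-1), 2^(b-1)-1].
    (Illustrative sketch only -- not QUICK's actual kernel layout.)"""
    qmax = 2 ** (bits - 1) - 1
    groups = w.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                      # avoid division by zero
    q = np.clip(np.round(groups / scales), -qmax, qmax).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Recover approximate fp32 weights from ints and per-group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.linspace(-1.0, 1.0, 16).astype(np.float32)
q, s = quantize_groupwise(w)
max_err = np.abs(dequantize(q, s) - w).max()       # bounded by scale / 2 per group
```

Per group, the round-trip error is at most half the group's scale, which is why smaller groups trade memory for accuracy.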

Image Clustering Conditioned on Text Criteria

1 code implementation · 27 Oct 2023 · Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, Kangwook Lee

Classical clustering methods do not provide users with direct control of the clustering results, and the clustering results may not be consistent with the relevant criterion that a user has in mind.

Clustering · Image Clustering

Squeezing Large-Scale Diffusion Models for Mobile

no code implementations · 3 Jul 2023 · Jiwoong Choi, Minkyu Kim, Daehyun Ahn, Taesu Kim, Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Jae-Joon Kim, HyungJun Kim

The emergence of diffusion models has greatly broadened the scope of high-fidelity image synthesis, resulting in notable advancements in both practical implementation and academic research.

Image Generation

S-CLIP: Semi-supervised Vision-Language Learning using Few Specialist Captions

1 code implementation · NeurIPS 2023 · Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jinwoo Shin

By combining these objectives, S-CLIP significantly enhances the training of CLIP using only a few image-text pairs, as demonstrated in various specialist domains, including remote sensing, fashion, scientific figures, and comics.

Contrastive Learning · Partial Label Learning · +3

Prefix tuning for automated audio captioning

1 code implementation · 30 Mar 2023 · Minkyu Kim, Kim Sung-Bin, Tae-Hyun Oh

Audio captioning aims to generate text descriptions from environmental sounds.

AudioCaps · Audio captioning · +2

Discovering and Mitigating Visual Biases through Keyword Explanation

1 code implementation · 26 Jan 2023 · Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin

The keyword explanation form of visual bias offers several advantages, such as a clear group naming for bias discovery and a natural extension for debiasing using these group names.

Image Classification · Image Generation

Explicit Feature Interaction-aware Graph Neural Networks

no code implementations · 7 Apr 2022 · Minkyu Kim, Hyun-Soo Choi, Jinho Kim

However, existing graph neural networks only learn higher-order feature interactions implicitly.

Node Classification

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

1 code implementation · NeurIPS 2021 · Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
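The abstract above describes randomized smoothing, in which the smoothed classifier g(x) predicts the class most likely under Gaussian input noise: g(x) = argmax_c P[f(x + ε) = c], ε ~ N(0, σ²I). A minimal NumPy sketch of the majority-vote prediction step follows; the toy base classifier, function name, and parameter choices are assumptions for illustration, not SmoothMix itself (which additionally trains the base classifier for calibrated confidence).

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, seed=None):
    """Monte Carlo majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P[f(x + eps) = c], eps ~ N(0, sigma^2 I).
    `f` maps a batch of inputs to integer class labels."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = f(x[None, :] + noise)        # classify n noisy copies of x
    counts = np.bincount(labels)          # votes per class
    return int(np.argmax(counts))

# toy base classifier: class 1 iff the first coordinate is positive
f = lambda batch: (batch[:, 0] > 0).astype(int)
pred = smoothed_predict(f, np.array([0.8, -0.2]), sigma=0.25, n=2000, seed=0)
```

In the full certification procedure (Cohen et al.), the vote margin also yields an ℓ2 radius within which the prediction is provably constant; that step is omitted here.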

Global-Local Item Embedding for Temporal Set Prediction

no code implementations · 5 Sep 2021 · Seungjae Jung, Young-Jin Park, Jisu Jeong, Kyung-Min Kim, Hiun Kim, Minkyu Kim, Hanock Kwak

Temporal set prediction is becoming increasingly important as many companies employ recommender systems in their online businesses, e.g., for personalized purchase prediction of shopping baskets.

Recommendation Systems

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness

no code implementations · ICML Workshop AML 2021 · Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

Adversarial Robustness

One4all User Representation for Recommender Systems in E-commerce

no code implementations · 24 May 2021 · Kyuyong Shin, Hanock Kwak, Kyung-Min Kim, Minkyu Kim, Young-Jin Park, Jisu Jeong, Seungjae Jung

General-purpose representation learning through large-scale pre-training has shown promising results across various machine learning fields.

Computational Efficiency · Recommendation Systems · +2

NSML: Meet the MLaaS platform with a real-world case study

no code implementations · 8 Oct 2018 · Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, Nako Sung, Jung-Woo Ha

The deep learning boom has driven many industries and academic institutions to competitively adopt machine learning based approaches.

BIG-bench Machine Learning · Management

NSML: A Machine Learning Platform That Enables You to Focus on Your Models

no code implementations · 16 Dec 2017 · Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Dong-Hyun Kwak, Jung-Woo Ha, Sunghun Kim

However, researchers are still required to perform a non-trivial amount of manual tasks such as GPU allocation, training status tracking, and comparison of models with different hyperparameter settings.

BIG-bench Machine Learning

Comprehensive Evaluation of OpenCL-based Convolutional Neural Network Accelerators in Xilinx and Altera FPGAs

no code implementations · 29 Sep 2016 · R. Tapiador, A. Rios-Navarro, A. Linares-Barranco, Minkyu Kim, Deepak Kadetotad, Jae-sun Seo

Many-core GPU architectures offer superior performance, but they consume substantial power and face memory constraints due to inconsistencies between cache and main memory.
