no code implementations • 14 Apr 2024 • Taehyeon Kim, Ananda Theertha Suresh, Kishore Papineni, Michael Riley, Sanjiv Kumar, Adrian Benton
Despite the remarkable strides made by autoregressive language models, their potential is often hampered by the slow inference speeds inherent in sequential token generation.
no code implementations • 19 Mar 2024 • Taehyeon Kim, Byung-Cheol Min
In this paper, we introduce Semantic Layering in Room Segmentation via LLMs (SeLRoS), an advanced method for semantic room segmentation by integrating Large Language Models (LLMs) with traditional 2D map-based segmentation.
no code implementations • 10 Feb 2024 • Marc Bartholet, Taehyeon Kim, Ami Beuret, Se-Young Yun, Joachim M. Buhmann
We propose hFedF (hypernetwork-based Federated Fusion), an innovative federated algorithm designed to bridge the performance gap between generalization and personalization while addressing varying degrees of domain shift.
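As a rough illustration of the hypernetwork idea, the sketch below follows the common setup where a shared hypernetwork maps a learned client embedding to that client's model weights. All class and parameter names are illustrative assumptions, not hFedF's actual architecture.

```python
import torch
import torch.nn as nn

class ClientHypernetwork(nn.Module):
    """Sketch: a shared hypernetwork that generates per-client model weights."""

    def __init__(self, num_clients: int, embed_dim: int, target_param_count: int):
        super().__init__()
        # One learned embedding per client identifies that client to the generator.
        self.client_embeddings = nn.Embedding(num_clients, embed_dim)
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, target_param_count),  # flattened target-model weights
        )

    def forward(self, client_id: torch.Tensor) -> torch.Tensor:
        # Returns a flat weight vector to be reshaped into the client model.
        return self.generator(self.client_embeddings(client_id))
```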
no code implementations • 8 Feb 2024 • Taehyeon Kim, Donggyu Kim, Se-Young Yun
In the evolving landscape of federated learning (FL), addressing label noise presents unique challenges due to the decentralized and diverse nature of data collection across clients.
1 code implementation • 18 Dec 2023 • Yongjin Yang, Taehyeon Kim, Se-Young Yun
Second, to address the pitfalls of noisy statistics, we deploy two strategies: progressive training of the two adapters and an adaptive distillation technique derived from features computed by the model using only the adapter without a normalization layer.
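A heavily hedged sketch of the distillation ingredient described above: features from the model running only the normalization-free adapter serve as the (detached) target for the full model's features. The function name, weighting, and schedule comment are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_distill_loss(feat_full: torch.Tensor,
                          feat_norm_free: torch.Tensor,
                          weight: float = 1.0) -> torch.Tensor:
    """Match full-model features to those of the norm-free-adapter model."""
    # detach(): the norm-free branch acts as a fixed teacher for this term.
    return weight * F.mse_loss(feat_full, feat_norm_free.detach())

# Progressive schedule (assumption): train the norm-free adapter first, then
# bring in the second adapter while the distillation term regularizes it.
```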
1 code implementation • 1 Nov 2023 • Taehyeon Kim, Joonkee Kim, Gihun Lee, Se-Young Yun
Notably, utilizing 'opposite' as the noisy instruction in ID, which exhibits the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks.
1 code implementation • 26 Oct 2023 • Taehyeon Kim, Eric Lin, Junu Lee, Christian Lau, Vaikkunth Mugunthan
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy.
2 code implementations • 5 Dec 2022 • Taehyeon Kim, Shinhwan Kang, Hyeonjeong Shin, Deukryeol Yoon, Seongha Eom, Kijung Shin, Se-Young Yun
The Weather4Cast competition (hosted at NeurIPS 2022) required competitors to predict super-resolution rain movies over various regions of Europe, given low-resolution satellite contexts covering wider regions.
1 code implementation • 30 Jun 2022 • Taehyeon Kim, Namgyu Ho, Donggyu Kim, Se-Young Yun
Historically, this challenge has been tackled using numerical weather prediction (NWP) models, grounded on physics-based simulations.
no code implementations • 27 Jun 2022 • Taehyeon Kim, Heesoo Myeong, Se-Young Yun
Knowledge Distillation (KD) has recently emerged as a popular method for compressing neural networks.
1 code implementation • IEEE Access 2022 • Taehyeon Kim, Se-Young Yun
Recent research in deep Convolutional Neural Networks (CNNs) faces the challenges of vanishing/exploding gradients, training instability, and feature redundancy.
1 code implementation • 3 Jun 2022 • Taehyeon Kim, Se-Young Yun
The approach is inspired by the observation that parameter averaging during model aggregation in FL resembles weight sharing in supernet training.
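To make the parameter-averaging side of that analogy concrete, here is a minimal FedAvg-style aggregation sketch; the function and variable names are illustrative and not taken from the paper's code.

```python
from collections import OrderedDict
import torch

def average_state_dicts(client_states, client_weights):
    """Weighted average of client model parameters, as in FedAvg."""
    total = float(sum(client_weights))
    avg = OrderedDict()
    for key in client_states[0]:
        # Each parameter tensor is averaged with weights proportional to
        # each client's contribution (e.g., local dataset size).
        avg[key] = sum(
            (w / total) * state[key].float()
            for state, w in zip(client_states, client_weights)
        )
    return avg
```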
no code implementations • 8 Apr 2022 • Stephen Cha, Taehyeon Kim, Hayeon Lee, Se-Young Yun
The survey analyzes supernet optimization methods based on their approaches to spatial and temporal optimization.
1 code implementation • 2 Feb 2022 • Jaeyeon Ahn, Taehyeon Kim, Se-Young Yun
Real-world optimization problems are generally not just black-box problems, but also involve mixed types of inputs in which discrete and continuous variables coexist.
1 code implementation • 19 May 2021 • Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun
From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logits of the teacher model.
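A minimal sketch of the MSE-between-logits loss described above, with the conventional temperature-scaled KL loss shown for contrast; function names are illustrative, not from the paper's code.

```python
import torch
import torch.nn.functional as F

def kd_mse_loss(student_logits: torch.Tensor,
                teacher_logits: torch.Tensor) -> torch.Tensor:
    """Direct logit matching: MSE between student and teacher logit vectors."""
    return F.mse_loss(student_logits, teacher_logits)

def kd_kl_loss(student_logits: torch.Tensor,
               teacher_logits: torch.Tensor,
               tau: float = 4.0) -> torch.Tensor:
    """Conventional KD: KL divergence between temperature-softened distributions."""
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    # tau**2 rescales gradients to be comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2
```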
1 code implementation • NeurIPS 2021 • Taehyeon Kim, Jongwoo Ko, Sangwook Cho, Jinhwan Choi, Se-Young Yun
Our framework, coined FINE (filtering noisy instances via their eigenvectors), provides a robust detector using simple, derivative-free methods with theoretical guarantees.
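A rough sketch in the spirit of eigenvector-based filtering: score each (class-conditional) feature by its alignment with the principal eigenvector of the gram matrix, then split scores with a two-component mixture model. The per-class setup and GMM thresholding are assumptions here, not an exact reproduction of FINE's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def alignment_scores(features: np.ndarray) -> np.ndarray:
    """Squared alignment of each feature with the principal eigenvector."""
    gram = features.T @ features              # d x d gram matrix
    _, eigvecs = np.linalg.eigh(gram)         # eigenvectors, ascending eigenvalues
    principal = eigvecs[:, -1]                # eigenvector of the largest eigenvalue
    return (features @ principal) ** 2

def select_clean(features: np.ndarray) -> np.ndarray:
    """Fit a 2-component GMM over scores; keep the high-score (clean) side."""
    scores = alignment_scores(features).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2).fit(scores)
    clean_component = np.argmax(gmm.means_.ravel())
    return gmm.predict(scores) == clean_component
```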
Ranked #2 on Image Classification on WebVision
no code implementations • 1 Jan 2021 • Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun
To verify this conjecture, we test an extreme logit learning model, where the KD is implemented with Mean Squared Error (MSE) between the student's logit and the teacher's logit.
no code implementations • 7 Dec 2020 • Taehyeon Kim, Jaeyeon Ahn, Nakyil Kim, Se-Young Yun
In machine learning algorithms, the choice of hyperparameters is often more art than science, requiring a labor-intensive search guided by expert experience.
no code implementations • 6 Dec 2020 • Taehyeon Kim, Sangmin Bae, Jin-woo Lee, Se-Young Yun
Federated learning has emerged as an innovative paradigm of collaborative machine learning.
1 code implementation • 1 Feb 2020 • Taehyeon Kim, Jonghyup Kim, Se-Young Yun
Our final score is 0.0054, a 370x improvement over the baseline on the CIFAR100 dataset.