Search Results for author: Byung Cheol Song

Found 11 papers, 8 papers with code

Deep Learning-based Pupil Center Detection for Fast and Accurate Eye Tracking System

no code implementations ECCV 2020 Kang Il Lee, Jung Ho Jeon, Byung Cheol Song

In augmented reality (AR) or virtual reality (VR) systems, eye tracking is a key technology and requires significant accuracy as well as real-time operation.

Representation Learning

Task-Adaptive Pseudo Labeling for Transductive Meta-Learning

no code implementations 21 Apr 2023 Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song

As a result, the proposed method can use more examples during the adaptation process than inductive methods, which can lead to better classification performance of the model.

Meta-Learning
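The transductive setting above pulls unlabeled query examples into adaptation by giving them pseudo labels. A minimal sketch of that idea, using nearest-prototype labeling as a generic stand-in (the paper's task-adaptive labeling rule is more involved; the function name and distance choice here are assumptions):

```python
import numpy as np

def pseudo_label_queries(support, support_labels, queries):
    # Class prototypes: mean support feature per class.
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])
    # Pseudo label = class of the nearest prototype.
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way task: class 0 clusters near (-1, -1), class 1 near (+1, +1).
support = np.array([[-1.0, -1.0], [-1.2, -0.8], [1.0, 1.0], [0.8, 1.2]])
support_labels = np.array([0, 0, 1, 1])
queries = np.array([[-0.9, -1.1], [1.1, 0.9]])
print(pseudo_label_queries(support, support_labels, queries))  # [0 1]
```

Once labeled this way, the queries can be appended to the support set for the adaptation step, which is what lets a transductive method exploit more examples than an inductive one.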

Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition

2 code implementations 25 Sep 2022 Daeha Kim, Byung Cheol Song

Specifically, to find pairs of similar expressions from different identities, we define the inter-feature similarity as a transportation cost.

Facial Expression Recognition Facial Expression Recognition (FER)
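The abstract above casts inter-feature similarity as a transportation cost between identities. A small illustrative sketch, using pairwise Euclidean distance as the cost and the Hungarian algorithm as a discrete stand-in for full optimal transport (both choices are assumptions, not the paper's exact formulation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_by_transport_cost(feats_a, feats_b):
    # cost[i, j] = distance between feature i of identity A and
    # feature j of identity B; minimise the total assignment cost.
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    pairs = [(int(i), int(j)) for i, j in zip(rows, cols)]
    return pairs, float(cost[rows, cols].sum())

# Toy example: matching a set of 4-D expression features against a
# shuffled (here, reversed) copy of itself recovers the permutation.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 4))
pairs, total_cost = match_by_transport_cost(a, a[::-1])
print(pairs)  # [(0, 2), (1, 1), (2, 0)]
```

The matched pairs are the "similar expressions from different identities" mentioned above; a full OT solver (e.g. Sinkhorn) would allow soft, many-to-many matchings instead of one-to-one assignments.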

CFA: Coupled-hypersphere-based Feature Adaptation for Target-Oriented Anomaly Localization

2 code implementations 9 Jun 2022 Sungwook Lee, SeungHyun Lee, Byung Cheol Song

In addition, this paper points out the negative effects of biased features of pre-trained CNNs and emphasizes the importance of the adaptation to the target dataset.

Transfer Learning Unsupervised Anomaly Detection

Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning

1 code implementation 5 Mar 2022 SeungHyun Lee, Byung Cheol Song

EKG, which is utilized for the following search iteration, is composed of the ensemble knowledge of interim sub-networks, i.e., the by-products of the sub-network evaluation.

Knowledge Distillation

Vision Transformer for Small-Size Datasets

5 code implementations 27 Dec 2021 Seung Hoon Lee, SeungHyun Lee, Byung Cheol Song

However, the high performance of the ViT results from pre-training on a large-size dataset such as JFT-300M, and this dependence on a large dataset is attributed to the ViT's low locality inductive bias.

Image Classification Inductive Bias

Contextual Gradient Scaling for Few-Shot Learning

1 code implementation 20 Oct 2021 Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song

Experimental results show that CxGrad effectively encourages the backbone to learn task-specific knowledge in the inner-loop and improves the performance of MAML up to a significant margin in both same- and cross-domain few-shot classification.

Cross-Domain Few-Shot
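The title names the mechanism: scaling inner-loop gradients by task context. A bare-bones sketch of a scaled MAML-style inner step (the per-parameter scales here are given by hand; how CxGrad actually derives them from task context is the paper's contribution and is not reproduced here):

```python
import numpy as np

def scaled_inner_step(params, grads, scales, lr=0.1):
    # Each gradient is rescaled by a task-conditioned factor before
    # the usual MAML inner-loop SGD update.
    return {k: params[k] - lr * scales[k] * grads[k] for k in params}

params = {"w": np.array([1.0, 1.0])}
grads = {"w": np.array([1.0, 1.0])}
scales = {"w": np.array([2.0, 0.5])}  # hypothetical context-derived scales
updated = scaled_inner_step(params, grads, scales)
print(updated["w"])  # [0.8  0.95]
```

Larger scales let the backbone move further on task-relevant directions within the few inner-loop steps, which matches the claim that the backbone is encouraged to learn task-specific knowledge.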

Graph-based Knowledge Distillation by Multi-head Attention Network

2 code implementations 4 Jul 2019 Seunghyun Lee, Byung Cheol Song

Knowledge distillation (KD) is a technique to derive optimal performance from a small student network (SN) by distilling the knowledge of a large teacher network (TN) and transferring the distilled knowledge to the SN.

Inductive Bias Knowledge Distillation +1
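For background, the classic form of the T-S transfer described above is Hinton-style soft-target distillation, where the student matches the teacher's temperature-softened class distribution. This sketch shows that generic loss only; the paper itself transfers graph-structured knowledge via a multi-head attention network instead:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between teacher's and student's softened
    # distributions; T^2 keeps gradient magnitudes comparable to a
    # hard-label loss.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean()) * T * T

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))
print(distillation_loss(logits, logits))  # 0.0 (identical distributions)
```

The temperature T > 1 exposes the teacher's "dark knowledge": relative probabilities among wrong classes that one-hot labels discard.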

Self-supervised Knowledge Distillation Using Singular Value Decomposition

3 code implementations ECCV 2018 Seung Hyun Lee, Dae Ha Kim, Byung Cheol Song

To address a deep neural network (DNN)'s need for a huge training dataset and its high computation cost, the so-called teacher-student (T-S) DNN, which transfers the knowledge of a T-DNN to an S-DNN, has been proposed.

Knowledge Distillation Transfer Learning
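The title's core idea, compressing a teacher's feature map with SVD so only its principal directions are transferred, can be sketched minimally as follows (the truncation rank and the alignment step between teacher and student are simplified assumptions, not the paper's exact pipeline):

```python
import numpy as np

def distill_feature_map(feature_map, k=2):
    # Economy SVD of a flattened (H*W, C) feature map:
    #   F = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(feature_map, full_matrices=False)
    # Keep the top-k right singular vectors: a compact, low-rank
    # summary of the feature directions the teacher relies on.
    return Vt[:k]  # shape (k, C)

rng = np.random.default_rng(1)
fmap = rng.normal(size=(64, 16))  # e.g. an 8x8 map with 16 channels
knowledge = distill_feature_map(fmap, k=2)
print(knowledge.shape)  # (2, 16)
```

Because the rows of `Vt` are orthonormal, the distilled knowledge is compact and scale-free, which makes it a convenient target for a student loss regardless of the student's channel count matching exactly.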
