no code implementations • ECCV 2020 • Kang Il Lee, Jung Ho Jeon, Byung Cheol Song
In augmented reality (AR) or virtual reality (VR) systems, eye tracking is a key technology and demands high accuracy as well as real-time operation.
no code implementations • 21 Apr 2023 • Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song
As a result, the proposed method can exploit more examples during adaptation than inductive methods, which can lead to better classification performance.
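As a rough illustration of why unlabeled query examples can help, here is a minimal transductive prototype-refinement sketch in PyTorch; the soft-assignment update and all names are illustrative assumptions, not the paper's actual algorithm.

```python
import torch

def transductive_prototypes(support, support_labels, query, n_way, steps=5):
    """Refine class prototypes with unlabeled query features (transduction).

    support: (n_s, d) labeled embeddings; query: (n_q, d) unlabeled
    embeddings. Hypothetical soft-assignment rule, not the paper's method.
    """
    hard_sum = torch.stack([support[support_labels == c].sum(0)
                            for c in range(n_way)])             # (n_way, d)
    hard_cnt = torch.bincount(support_labels, minlength=n_way).unsqueeze(1)
    protos = hard_sum / hard_cnt                                # inductive init
    for _ in range(steps):
        # Soft-assign unlabeled queries to the current prototypes ...
        weights = (-torch.cdist(query, protos)).softmax(dim=1)  # (n_q, n_way)
        # ... then re-estimate prototypes from labeled + soft-labeled data.
        protos = (hard_sum + weights.t() @ query) \
                 / (hard_cnt + weights.sum(0, keepdim=True).t())
    return protos
```

An inductive method would stop at the initial prototypes; the loop is what lets the unlabeled queries participate in adaptation.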
2 code implementations • 25 Sep 2022 • Daeha Kim, Byung Cheol Song
Specifically, to find pairs of similar expressions from different identities, we define the inter-feature similarity as a transportation cost.
Facial Expression Recognition (FER)
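A minimal sketch of using inter-feature similarity as a transportation cost for cross-identity expression matching, assuming cosine-based features and plain Sinkhorn iterations; the cost definition and solver are illustrative stand-ins for the paper's formulation.

```python
import torch

def expression_transport_plan(feats_a, feats_b, reg=0.1, iters=50):
    """Match expressions across two identities via entropic optimal transport.

    feats_a: (n, d) and feats_b: (m, d) expression features. The cost is
    one minus cosine similarity, so similar expressions are cheap to match.
    """
    a = torch.full((feats_a.size(0),), 1.0 / feats_a.size(0))
    b = torch.full((feats_b.size(0),), 1.0 / feats_b.size(0))
    fa = torch.nn.functional.normalize(feats_a, dim=1)
    fb = torch.nn.functional.normalize(feats_b, dim=1)
    cost = 1.0 - fa @ fb.t()             # low cost = similar expressions
    K = torch.exp(-cost / reg)           # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(iters):               # Sinkhorn fixed-point updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)   # transport plan (n, m)
    return plan   # high mass = likely pair of similar expressions
```

Entries of the returned plan with large mass indicate candidate pairs of similar expressions from different identities.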
2 code implementations • 9 Jun 2022 • Sungwook Lee, SeungHyun Lee, Byung Cheol Song
In addition, this paper points out the negative effects of the biased features of pre-trained CNNs and emphasizes the importance of adapting them to the target dataset.
Ranked #24 on Anomaly Detection on MVTec AD
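For context, a generic pretrained-feature anomaly-detection baseline: features from a frozen ImageNet backbone score test images by distance to a bank of normal features. This sketch deliberately omits the target-dataset adaptation the paper argues is important; the backbone choice and 1-NN scoring are assumptions, not the paper's method.

```python
import torch
import torchvision.models as models

# Frozen ImageNet-pretrained backbone as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # keep penultimate features
backbone.eval()

@torch.no_grad()
def build_memory(normal_images):       # (N, 3, H, W) normal training images
    return backbone(normal_images)     # (N, 512) feature bank

@torch.no_grad()
def anomaly_score(images, memory):
    feats = backbone(images)           # (B, 512)
    d = torch.cdist(feats, memory)     # distances to the normal bank
    return d.min(dim=1).values         # 1-NN distance as anomaly score
```

The gap between such a frozen baseline and an adapted one is exactly where the biased-feature problem shows up.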
1 code implementation • 5 Mar 2022 • SeungHyun Lee, Byung Cheol Song
The EKG used for the next search iteration is composed of the ensemble knowledge of interim sub-networks, i.e., the by-products of sub-network evaluation.
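A minimal sketch of how ensemble knowledge from interim sub-networks could guide the next search iteration, assuming plain logit averaging and a KL guidance loss; the paper's actual EKG construction may weight or select sub-networks differently.

```python
import torch
import torch.nn.functional as F

def ensemble_knowledge(subnet_logits):
    """Average interim sub-networks' soft predictions into one teacher signal.

    subnet_logits: list of (B, C) logits from sub-networks evaluated
    during search, i.e., by-products that would otherwise be discarded.
    """
    return torch.stack([l.softmax(dim=1) for l in subnet_logits]).mean(dim=0)

def guidance_loss(candidate_logits, ensemble_probs):
    # Pull the next iteration's candidate toward the ensemble knowledge.
    return F.kl_div(candidate_logits.log_softmax(dim=1), ensemble_probs,
                    reduction="batchmean")
```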
5 code implementations • 27 Dec 2021 • Seung Hoon Lee, SeungHyun Lee, Byung Cheol Song
However, the ViT's high performance results from pre-training on a large-scale dataset such as JFT-300M, and its dependence on large datasets is attributed to its low locality inductive bias.
1 code implementation • 20 Oct 2021 • Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song
Experimental results show that CxGrad effectively encourages the backbone to learn task-specific knowledge in the inner loop and improves the performance of MAML by a significant margin in both same-domain and cross-domain few-shot classification.
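A structural sketch of an MAML inner-loop step with task-conditioned gradient scaling, in the spirit of CxGrad; how the scales are produced (the context-dependent part of the method) is omitted here, so `scales` is a placeholder.

```python
import torch

def inner_loop_step(params, loss, scales, lr=0.01):
    """One MAML inner-loop update with per-parameter gradient scaling.

    params: list of task parameters; scales: matching list of scaling
    factors standing in for task-conditioned gradient scales.
    create_graph=True keeps the graph for the second-order outer update.
    """
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - lr * s * g for p, s, g in zip(params, scales, grads)]
```

With all scales fixed to 1 this reduces to the vanilla MAML inner step, which makes the role of the scaling explicit.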
1 code implementation • 28 Apr 2021 • SeungHyun Lee, Byung Cheol Song
Knowledge distillation (KD) is one of the most useful techniques for light-weight neural networks.
no code implementations • 29 Aug 2019 • Dae Ha Kim, Seung Hyun Lee, Byung Cheol Song
However, unsupervised multi-task learning can be biased toward a specific task.
2 code implementations • 4 Jul 2019 • Seunghyun Lee, Byung Cheol Song
Knowledge distillation (KD) is a technique to derive optimal performance from a small student network (SN) by distilling the knowledge of a large teacher network (TN) and transferring it to the SN.
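As a concrete reference point, the classic logit-distillation loss (Hinton et al.) that this TN-to-SN description generalizes; this is the textbook formulation, not this paper's specific distillation scheme.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation plus hard-label cross-entropy.

    T softens both distributions; the T*T factor keeps gradient
    magnitudes comparable across temperatures.
    """
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```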
3 code implementations • ECCV 2018 • Seung Hyun Lee, Dae Ha Kim, Byung Cheol Song
To address deep neural networks' (DNNs) need for huge training datasets and heavy computation, the so-called teacher-student (T-S) DNN, which transfers the knowledge of a teacher DNN (T-DNN) to a student DNN (S-DNN), has been proposed.
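Complementing the logit-level loss above, a minimal feature-level T-S transfer sketch (a FitNets-style hint loss); the 1x1 regressor and MSE objective are generic assumptions for illustration rather than this paper's own way of distilling compressed feature knowledge.

```python
import torch
import torch.nn as nn

class HintTransfer(nn.Module):
    """Match student feature maps to teacher feature maps.

    A generic example of transferring T-DNN knowledge to an S-DNN at the
    feature level; hypothetical, not the paper's distillation method.
    """
    def __init__(self, s_channels, t_channels):
        super().__init__()
        # 1x1 conv maps student channels onto the teacher's channel space.
        self.regressor = nn.Conv2d(s_channels, t_channels, kernel_size=1)

    def forward(self, s_feat, t_feat):
        # Teacher features are detached: only the student is trained.
        return nn.functional.mse_loss(self.regressor(s_feat), t_feat.detach())
```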