Search Results for author: Yongil Kim

Found 5 papers, 0 papers with code

Modality Alignment between Deep Representations for Effective Video-and-Language Learning

no code implementations · LREC 2022 · Hyeongu Yun, Yongil Kim, Kyomin Jung

Our method directly optimizes CKA to align video and text embedding representations, thereby aiding the cross-modality attention module in combining information across modalities.

Question Answering · Video Captioning · +1
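The alignment objective above is built on CKA (Centered Kernel Alignment), a similarity measure between two sets of representations. As a rough illustration, here is a minimal sketch of the linear variant of CKA between two batches of embeddings; the function name is made up for this example, and the paper may use a kernelized form or additional machinery around it:

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two representation matrices.

    x: (n, d1) array, e.g. n video embeddings of dimension d1
    y: (n, d2) array, e.g. n text embeddings of dimension d2
    Returns a scalar in [0, 1]; 1 means the representations span
    the same subspace up to orthogonal transform and scaling.
    """
    # Center each feature dimension
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return cross / (norm_x * norm_y)
```

Because CKA is differentiable in the inputs, the same quantity (implemented in an autodiff framework) can serve directly as a training loss term that pulls video and text embeddings toward aligned representations.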

Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources

no code implementations · 9 Nov 2023 · Yerin Hwang, Yongil Kim, Hyunkyung Bae, Jeesoo Bang, Hwanhee Lee, Kyomin Jung

To address the data scarcity issue in Conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed.

Conversational Question Answering · Re-Ranking

PR-MCS: Perturbation Robust Metric for MultiLingual Image Captioning

no code implementations · 15 Mar 2023 · Yongil Kim, Yerin Hwang, Hyeongu Yun, Seunghyun Yoon, Trung Bui, Kyomin Jung

Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning.

Image Captioning

Developing High Quality Training Samples for Deep Learning Based Local Climate Zone Classification in Korea

no code implementations · 3 Nov 2020 · Minho Kim, Doyoung Jeong, Hyoungwoo Choi, Yongil Kim

Two out of three people will be living in urban areas by 2050, as projected by the United Nations, emphasizing the need for sustainable urban development and monitoring.

Domain Adaptation · Transfer Learning
