no code implementations • 18 Jan 2024 • Gwanhyeong Koo, Sunjae Yoon, Chang D. Yoo
To address this, we introduce an innovative method that maintains the principles of the NTI while accelerating the image editing process.
1 code implementation • 15 Dec 2023 • Sunjae Yoon, Dahyun Kim, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo
Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history.
1 code implementation • 10 Dec 2023 • Hyun Ryu, Sunjae Yoon, Hee Suk Yoon, Eunseop Yoon, Chang D. Yoo
Our experimental results support that SimPSI considerably enhances the performance of time series data augmentations by preserving core spectral information.
no code implementations • 10 Dec 2023 • Sunjae Yoon, Gwanhyeong Koo, Ji Woo Hong, Chang D. Yoo
To this end, this paper proposes the Neutral Editing (NeuEdit) framework to enable complex non-rigid editing by changing the motion of a person or object in a video, which has never been attempted before.
no code implementations • ICCV 2023 • Sunjae Yoon, Gwanhyeong Koo, Dahyun Kim, Chang D. Yoo
These proposals are assumed to contain many of the distinguishable scenes in a video as candidates.
1 code implementation • 4 Mar 2023 • Hee Suk Yoon, Joshua Tian Jin Tee, Eunseop Yoon, Sunjae Yoon, Gwangsu Kim, Yingzhen Li, Chang D. Yoo
Studies have shown that modern neural networks tend to be poorly calibrated due to over-confident predictions.
no code implementations • 14 Dec 2022 • Hee Suk Yoon, Eunseop Yoon, John Harvill, Sunjae Yoon, Mark Hasegawa-Johnson, Chang D. Yoo
To the best of our knowledge, this is the first attempt to apply mixup in NLP while preserving the meaning of a specific word.
no code implementations • 12 Dec 2022 • Sunjae Yoon, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo
Despite the recent success of multi-modal reasoning to generate answer sentences, existing dialogue systems still suffer from a text hallucination problem, which denotes indiscriminate text-copying from input texts without an understanding of the question.
1 code implementation • 17 Oct 2022 • Sunjae Yoon, Ji Woo Hong, Eunseop Yoon, Dahyun Kim, Junyeong Kim, Hee Suk Yoon, Chang D. Yoo
Video moment retrieval (VMR) aims to localize target moments in untrimmed videos pertinent to a given textual query.
no code implementations • 24 Mar 2021 • Junyeong Kim, Sunjae Yoon, Dahyun Kim, Chang D. Yoo
A video-grounded dialogue system referred to as the Structured Co-reference Graph Attention (SCGA) is presented for decoding the answer sequence to a question regarding a given video while keeping track of the dialogue context.
1 code implementation • ECCV 2020 • Minuk Ma, Sunjae Yoon, Junyeong Kim, Young-Joon Lee, Sunghun Kang, Chang D. Yoo
This paper explores methods for performing VMR in a weakly-supervised manner (wVMR): training is performed without temporal moment labels but only with the text query that describes a segment of the video.