Search Results for author: Sunjae Yoon

Found 11 papers, 5 papers with code

Wavelet-Guided Acceleration of Text Inversion in Diffusion-Based Image Editing

no code implementations 18 Jan 2024 Gwanhyeong Koo, Sunjae Yoon, Chang D. Yoo

To address this, we introduce an innovative method that maintains the principles of the NTI while accelerating the image editing process.

Text-based Image Editing

HEAR: Hearing Enhanced Audio Response for Video-grounded Dialogue

1 code implementation 15 Dec 2023 Sunjae Yoon, Dahyun Kim, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo

Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history.

SimPSI: A Simple Strategy to Preserve Spectral Information in Time Series Data Augmentation

1 code implementation 10 Dec 2023 Hyun Ryu, Sunjae Yoon, Hee Suk Yoon, Eunseop Yoon, Chang D. Yoo

Our experimental results support that SimPSI considerably enhances the performance of time series data augmentations by preserving core spectral information.

Data Augmentation, Time Series

Neutral Editing Framework for Diffusion-based Video Editing

no code implementations 10 Dec 2023 Sunjae Yoon, Gwanhyeong Koo, Ji Woo Hong, Chang D. Yoo

To this end, this paper proposes the Neutral Editing (NeuEdit) framework to enable complex non-rigid editing by changing the motion of a person or object in a video, which has never been attempted before.

Style Transfer, Video Editing

ESD: Expected Squared Difference as a Tuning-Free Trainable Calibration Measure

1 code implementation 4 Mar 2023 Hee Suk Yoon, Joshua Tian Jin Tee, Eunseop Yoon, Sunjae Yoon, Gwangsu Kim, Yingzhen Li, Chang D. Yoo

Studies have shown that modern neural networks tend to be poorly calibrated due to over-confident predictions.

SMSMix: Sense-Maintained Sentence Mixup for Word Sense Disambiguation

no code implementations 14 Dec 2022 Hee Suk Yoon, Eunseop Yoon, John Harvill, Sunjae Yoon, Mark Hasegawa-Johnson, Chang D. Yoo

To the best of our knowledge, this is the first attempt to apply mixup in NLP while preserving the meaning of a specific word.

Data Augmentation, Sentence +1

Information-Theoretic Text Hallucination Reduction for Video-grounded Dialogue

no code implementations 12 Dec 2022 Sunjae Yoon, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo

Despite the recent success of multi-modal reasoning in generating answer sentences, existing dialogue systems still suffer from a text hallucination problem, i.e., indiscriminate text-copying from input texts without an understanding of the question.

Hallucination, Sentence

Selective Query-guided Debiasing for Video Corpus Moment Retrieval

1 code implementation 17 Oct 2022 Sunjae Yoon, Ji Woo Hong, Eunseop Yoon, Dahyun Kim, Junyeong Kim, Hee Suk Yoon, Chang D. Yoo

Video moment retrieval (VMR) aims to localize target moments in untrimmed videos pertinent to a given textual query.

Moment Retrieval, Retrieval +1

Structured Co-reference Graph Attention for Video-grounded Dialogue

no code implementations 24 Mar 2021 Junyeong Kim, Sunjae Yoon, Dahyun Kim, Chang D. Yoo

A video-grounded dialogue system referred to as the Structured Co-reference Graph Attention (SCGA) is presented for decoding the answer sequence to a question regarding a given video while keeping track of the dialogue context.

Graph Attention

VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval

1 code implementation ECCV 2020 Minuk Ma, Sunjae Yoon, Junyeong Kim, Young-Joon Lee, Sunghun Kang, Chang D. Yoo

This paper explores methods for performing VMR in a weakly-supervised manner (wVMR): training is performed without temporal moment labels but only with the text query that describes a segment of the video.

Contrastive Learning, Moment Retrieval +1
