Search Results for author: Sunghun Kang

Found 5 papers, 2 papers with code

Learning Pseudo-Labeler beyond Noun Concepts for Open-Vocabulary Object Detection

no code implementations • 4 Dec 2023 • Sunghun Kang, Junbum Cha, Jonghwan Mun, Byungseok Roh, Chang D. Yoo

Specifically, the proposed method aims to learn an arbitrary image-to-text mapping for pseudo-labeling of arbitrary concepts, named Pseudo-Labeling for Arbitrary Concepts (PLAC).

Object Detection • Open Vocabulary Object Detection • +2
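
A rough sketch of the PLAC idea in PyTorch: region features are mapped into a text-embedding space, and each region takes its nearest concept embedding as a pseudo-label. The module, dimensions, similarity threshold, and concept vocabulary below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of pseudo-labeling via a learned region-to-text mapping,
# in the spirit of PLAC. Names and shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionToTextMapper(nn.Module):
    """Maps region (image) features into a text-embedding space."""
    def __init__(self, region_dim: int = 256, text_dim: int = 512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(region_dim, text_dim), nn.ReLU(), nn.Linear(text_dim, text_dim)
        )

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(region_feats), dim=-1)

def pseudo_label(region_feats, concept_embeds, mapper, threshold=0.3):
    """Assign each region the most similar concept as a pseudo-label.

    region_feats: (R, region_dim); concept_embeds: (C, text_dim).
    Regions whose best similarity falls below `threshold` stay unlabeled (-1).
    """
    mapped = mapper(region_feats)                          # (R, text_dim)
    sims = mapped @ F.normalize(concept_embeds, dim=-1).T  # (R, C)
    scores, labels = sims.max(dim=-1)
    labels[scores < threshold] = -1                        # reject low-confidence regions
    return labels, scores
```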

Semantic Grouping Network for Video Captioning

1 code implementation • 1 Feb 2021 • Hobin Ryu, Sunghun Kang, Haeyong Kang, Chang D. Yoo

This paper considers a video caption generating network referred to as Semantic Grouping Network (SGN) that attempts (1) to group video frames with the discriminating word phrases of the partially decoded caption and then (2) to decode those semantically aligned groups to predict the next word.

Video Captioning
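
A minimal sketch of the grouping step described above: each phrase of the partially decoded caption softly attends over the frame features, yielding one semantically aligned group vector per phrase. Shapes and function names here are assumptions.

```python
# Hypothetical sketch of semantic grouping: frames are softly assigned to the
# caption phrase they best match, and each group is pooled into one vector.
import torch
import torch.nn.functional as F

def group_frames(frame_feats: torch.Tensor, phrase_feats: torch.Tensor) -> torch.Tensor:
    """frame_feats: (T, d) per-frame features; phrase_feats: (P, d) phrase embeddings.

    Returns (P, d) group vectors: attention-weighted sums of the frames
    semantically aligned with each phrase.
    """
    sims = F.normalize(phrase_feats, dim=-1) @ F.normalize(frame_feats, dim=-1).T  # (P, T)
    attn = sims.softmax(dim=-1)   # each phrase attends over all frames
    return attn @ frame_feats     # (P, d) semantically aligned groups
```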

VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval

1 code implementation • ECCV 2020 • Minuk Ma, Sunjae Yoon, Junyeong Kim, Young-Joon Lee, Sunghun Kang, Chang D. Yoo

This paper explores methods for performing video moment retrieval (VMR) in a weakly-supervised manner (wVMR): training is performed without temporal moment labels, using only the text query that describes a segment of the video.

Contrastive Learning • Moment Retrieval • +1
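
A minimal sketch of weakly-supervised training in this spirit: with no temporal labels, candidate moments are scored against the query, the best-matching moment stands in for the whole video, and matched video-query pairs are contrasted against mismatched ones. The names and hinge margin are illustrative assumptions, not VLANet's exact objective.

```python
# Hypothetical sketch of a weakly-supervised moment-retrieval objective.
import torch
import torch.nn.functional as F

def video_query_score(moment_feats: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
    """moment_feats: (M, d) candidate-moment features; query_feat: (d,) query embedding."""
    sims = F.normalize(moment_feats, dim=-1) @ F.normalize(query_feat, dim=0)
    return sims.max()  # best moment stands in for the video-level match score

def contrastive_loss(pos_score: torch.Tensor, neg_scores: torch.Tensor, margin: float = 0.5):
    """Hinge-style contrastive loss: push the matched pair above mismatched pairs."""
    return F.relu(margin - pos_score + neg_scores).mean()
```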

Pivot Correlational Neural Network for Multimodal Video Categorization

no code implementations • ECCV 2018 • Sunghun Kang, Junyeong Kim, Hyun-Soo Choi, Sungjin Kim, Chang D. Yoo

The architecture is trained to maximize the correlation between the hidden states, as well as the predictions, of the modal-agnostic pivot stream and the modal-specific stream in the network.
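
A minimal sketch of such a correlation objective, assuming the pivot and modal-specific streams emit same-sized hidden states and predictions; Pearson correlation is used here for concreteness and is not necessarily the paper's exact measure.

```python
# Hypothetical correlation-maximizing loss between a modal-agnostic pivot
# stream and a modal-specific stream, as the abstract describes.
import torch

def correlation(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean per-dimension Pearson correlation between two (N, d) batches."""
    a = a - a.mean(dim=0)
    b = b - b.mean(dim=0)
    cov = (a * b).sum(dim=0)
    denom = a.norm(dim=0) * b.norm(dim=0) + eps
    return (cov / denom).mean()

def pivot_correlation_loss(pivot_hidden, modal_hidden, pivot_pred, modal_pred):
    # Maximizing correlation of both hidden states and predictions
    # is implemented as minimizing the negated sum.
    return -(correlation(pivot_hidden, modal_hidden) + correlation(pivot_pred, modal_pred))
```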

A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit

no code implementations • ICLR 2019 • Seong Jin Cho, Sunghun Kang, Chang D. Yoo

Determining the appropriate batch size for mini-batch gradient descent is always time-consuming, as it often relies on grid search.
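
A minimal sketch of bandit-driven batch-size selection: each candidate size is an arm, and the reward is the observed decrease in training loss. The epsilon-greedy policy and candidate sizes are assumptions for illustration; the paper's actual bandit algorithm may differ.

```python
# Hypothetical epsilon-greedy bandit over candidate batch sizes,
# standing in for grid search. Not the paper's exact algorithm.
import random

class BatchSizeBandit:
    def __init__(self, sizes=(32, 64, 128, 256), epsilon=0.1):
        self.sizes = list(sizes)
        self.epsilon = epsilon
        self.counts = [0] * len(self.sizes)
        self.values = [0.0] * len(self.sizes)  # running mean reward per arm
        self.arm = 0

    def select(self) -> int:
        """Pick a batch size: explore at random, else exploit the best arm so far."""
        if random.random() < self.epsilon:
            self.arm = random.randrange(len(self.sizes))
        else:
            self.arm = max(range(len(self.sizes)), key=lambda i: self.values[i])
        return self.sizes[self.arm]

    def update(self, loss_before: float, loss_after: float) -> None:
        """Reward the chosen arm by how much the training loss decreased."""
        reward = loss_before - loss_after
        self.counts[self.arm] += 1
        self.values[self.arm] += (reward - self.values[self.arm]) / self.counts[self.arm]
```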
