Search Results for author: Sangryul Jeon

Found 14 papers, 5 papers with code

Guided Semantic Flow

no code implementations • ECCV 2020 • Sangryul Jeon, Dongbo Min, Seungryong Kim, Jihwan Choe, Kwanghoon Sohn

Establishing dense semantic correspondences requires dealing with large geometric variations caused by the unconstrained setting of images.

Semantic correspondence

Zero-shot Building Attribute Extraction from Large-Scale Vision and Language Models

no code implementations • 19 Dec 2023 • Fei Pan, Sangryul Jeon, Brian Wang, Frank McKenna, Stella X. Yu

The proposed workflow contains two key components: image-level captioning and segment-level captioning for the building images based on the vocabularies pertinent to structural and civil engineering.

Attribute • Attribute Extraction +1
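The snippet above describes a two-stage captioning workflow. Below is a minimal, purely illustrative sketch of the image-level half using an off-the-shelf image-to-text model from the transformers library; the model choice, file path, and downstream matching step are my assumptions, not details taken from the paper.

```python
from PIL import Image
from transformers import pipeline

# Hypothetical image-level captioning step with an off-the-shelf model;
# the paper's actual vision-language models and prompts may differ.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image = Image.open("building.jpg")                 # e.g. a street-view building photo
caption = captioner(image)[0]["generated_text"]

# Downstream (not shown), the caption text would be compared against a
# vocabulary of structural/civil-engineering attributes such as roof type
# or number of stories.
print(caption)
```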

Local-Guided Global: Paired Similarity Representation for Visual Reinforcement Learning

no code implementations • CVPR 2023 • Hyesong Choi, Hunsang Lee, Wonil Song, Sangryul Jeon, Kwanghoon Sohn, Dongbo Min

Recent vision-based reinforcement learning (RL) methods have found extracting high-level features from raw pixels with self-supervised learning to be effective in learning policies.

Atari Games • reinforcement-learning +3

Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence

1 code implementation • 6 Oct 2022 • Sunghwan Hong, Jisu Nam, Seokju Cho, Susung Hong, Sangryul Jeon, Dongbo Min, Seungryong Kim

Existing pipelines of semantic correspondence commonly include extracting high-level semantic features for the invariance against intra-class variations and background clutters.

Semantic correspondence
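Since this entry hinges on representing a matching field implicitly, here is a small, purely illustrative sketch of a coordinate-based MLP that scores continuous (source, target) coordinate pairs. The layer sizes and the 4D input parameterization are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ImplicitMatchingField(nn.Module):
    """Toy coordinate-based MLP: maps a 4D coordinate (source x, y and
    target u, v, all normalized to [-1, 1]) to a matching score.
    Hypothetical layer sizes; not the paper's network."""
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers, dim = [], 4
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.ReLU(inplace=True)]
            dim = hidden
        layers += [nn.Linear(dim, 1)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, coords):                    # coords: (N, 4)
        return self.mlp(coords).squeeze(-1)       # (N,) matching scores

# Query the field at arbitrary continuous coordinates.
field = ImplicitMatchingField()
coords = torch.rand(1024, 4) * 2 - 1              # random (x, y, u, v) in [-1, 1]
scores = field(coords)                            # one score per coordinate pair
```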

Unsupervised Scene Sketch to Photo Synthesis

1 code implementation • 6 Sep 2022 • Jiayun Wang, Sangryul Jeon, Stella X. Yu, Xi Zhang, Himanshu Arora, Yu Lou

Taking this advantage, we synthesize a photo-realistic image by combining the structure of a sketch and the visual style of a reference photo.

Self-Supervised Structured Representations for Deep Reinforcement Learning

no code implementations • 29 Sep 2021 • Hyesong Choi, Hunsang Lee, Wonil Song, Sangryul Jeon, Kwanghoon Sohn, Dongbo Min

The proposed method imposes similarity constraints on three latent volumes: query representations warped by the estimated flows, target representations predicted by the transition model, and target representations of the future state.

Atari Games • Image Reconstruction +3
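A toy sketch of what a three-way similarity constraint over the latent volumes mentioned above could look like; the cosine-similarity formulation and function names are illustrative assumptions, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def paired_similarity_loss(warped_query, predicted_target, future_target):
    """Illustrative three-way consistency loss (not the paper's exact
    formulation): pull each pair of latent volumes together with a
    cosine-similarity objective. All tensors: (B, C, H, W)."""
    def cos_loss(a, b):
        return 1.0 - F.cosine_similarity(a, b, dim=1).mean()
    return (cos_loss(warped_query, future_target)
            + cos_loss(predicted_target, future_target)
            + cos_loss(warped_query, predicted_target)) / 3.0

# Example with random latent volumes.
B, C, H, W = 8, 64, 16, 16
loss = paired_similarity_loss(torch.randn(B, C, H, W),
                              torch.randn(B, C, H, W),
                              torch.randn(B, C, H, W))
```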

Weakly-Supervised Learning of Disentangled and Interpretable Skills for Hierarchical Reinforcement Learning

no code implementations • 29 Sep 2021 • Wonil Song, Sangryul Jeon, Hyesong Choi, Kwanghoon Sohn, Dongbo Min

Given the latent representations as skills, a skill-based policy network is trained to generate trajectories similar to those produced by the learned decoder of the trajectory VAE.

Hierarchical Reinforcement Learning • Inductive Bias +3

CATs: Cost Aggregation Transformers for Visual Correspondence

1 code implementation • NeurIPS 2021 • Seokju Cho, Sunghwan Hong, Sangryul Jeon, Yunsung Lee, Kwanghoon Sohn, Seungryong Kim

We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images with additional challenges posed by large intra-class appearance and geometric variations.

Semantic correspondence
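To make the cost-aggregation idea concrete, here is a minimal stand-in: build a dense correlation volume between two feature maps and refine it with a standard TransformerEncoder. The tensor sizes and this particular aggregation scheme are assumptions for illustration, not CATs' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlation(src_feat, trg_feat):
    """Dense cosine correlation between two feature maps.
    src_feat, trg_feat: (B, C, H, W) -> cost volume (B, H*W, H*W)."""
    B, C, H, W = src_feat.shape
    src = F.normalize(src_feat.flatten(2), dim=1)      # (B, C, HW)
    trg = F.normalize(trg_feat.flatten(2), dim=1)
    return torch.bmm(src.transpose(1, 2), trg)         # (B, HW, HW)

# Treat each source position's row of matching scores as a token and let a
# standard TransformerEncoder refine the whole cost volume.
B, C, H, W = 2, 128, 16, 16
cost = correlation(torch.randn(B, C, H, W), torch.randn(B, C, H, W))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=H * W, nhead=8, batch_first=True),
    num_layers=2)
refined_cost = encoder(cost)                            # (B, HW, HW)
```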

Joint Learning of Semantic Alignment and Object Landmark Detection

no code implementations • ICCV 2019 • Sangryul Jeon, Dongbo Min, Seungryong Kim, Kwanghoon Sohn

Based on the key insight that the two tasks can mutually provide supervision to each other, our networks accomplish this through a joint loss function that alternately imposes a consistency constraint between the two tasks, thereby boosting the performance and addressing the lack of training data in a principled manner.

Object

Semantic Attribute Matching Networks

no code implementations • CVPR 2019 • Seungryong Kim, Dongbo Min, Somi Jeong, Sunok Kim, Sangryul Jeon, Kwanghoon Sohn

SAM-Net accomplishes this through an iterative process of establishing reliable correspondences by reducing the attribute discrepancy between the images and synthesizing attribute transferred images using the learned correspondences.

Attribute

Recurrent Transformer Networks for Semantic Correspondence

1 code implementation • NeurIPS 2018 • Seungryong Kim, Stephen Lin, Sangryul Jeon, Dongbo Min, Kwanghoon Sohn

Our networks accomplish this through an iterative process of estimating spatial transformations between the input images and using these transformations to generate aligned convolutional activations.

General Classification • Semantic correspondence
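A compact sketch of the iterative estimate-then-warp loop described above, using an affine transformation with PyTorch's affine_grid/grid_sample; the regressor architecture, residual step size, and iteration count are illustrative assumptions rather than RTN's actual components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineEstimator(nn.Module):
    """Toy regressor: predicts 2x3 affine parameters from concatenated
    source/target feature maps. Purely illustrative, not RTN's architecture."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6))

    def forward(self, src, trg):
        delta = self.net(torch.cat([src, trg], dim=1)).view(-1, 2, 3)
        identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]],
                                device=src.device).expand_as(delta)
        return identity + 0.1 * delta          # small residual update per step

src = torch.randn(1, 128, 16, 16)
trg = torch.randn(1, 128, 16, 16)
estimator = AffineEstimator(128)

# Iteratively re-estimate the transformation from the currently aligned
# features, then warp the source features with it.
warped = src
for _ in range(3):
    theta = estimator(warped, trg)
    grid = F.affine_grid(theta, src.shape, align_corners=False)
    warped = F.grid_sample(src, grid, align_corners=False)
```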

PARN: Pyramidal Affine Regression Networks for Dense Semantic Correspondence

no code implementations • ECCV 2018 • Sangryul Jeon, Seungryong Kim, Dongbo Min, Kwanghoon Sohn

To the best of our knowledge, it is the first work that attempts to estimate dense affine transformation fields in a coarse-to-fine manner within deep networks.

regression • Semantic correspondence

FCSS: Fully Convolutional Self-Similarity for Dense Semantic Correspondence

1 code implementation • CVPR 2017 • Seungryong Kim, Dongbo Min, Bumsub Ham, Sangryul Jeon, Stephen Lin, Kwanghoon Sohn

The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner.

Object • Semantic correspondence +1
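A small illustration of a local self-similarity descriptor in the spirit of the snippet above, using a fixed set of offsets rather than FCSS's jointly learned sampling patterns; the offsets and function name are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def local_self_similarity(feat, offsets):
    """Illustrative local self-similarity descriptor: for every spatial
    position, correlate its feature with the feature at each (dy, dx) offset.
    feat: (B, C, H, W) -> descriptor (B, len(offsets), H, W)."""
    feat = F.normalize(feat, dim=1)
    sims = []
    for dy, dx in offsets:
        shifted = torch.roll(feat, shifts=(dy, dx), dims=(2, 3))
        sims.append((feat * shifted).sum(dim=1, keepdim=True))   # cosine similarity
    return torch.cat(sims, dim=1)

feat = torch.randn(2, 64, 32, 32)
offsets = [(-2, 0), (2, 0), (0, -2), (0, 2), (-2, -2), (2, 2)]
descriptor = local_self_similarity(feat, offsets)                # (2, 6, 32, 32)
```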
