Search Results for author: Dongwon Kim

Found 10 papers, 5 papers with code

Shatter and Gather: Learning Referring Image Segmentation with Text Supervision

1 code implementation ICCV 2023 Dongwon Kim, Namyup Kim, Cuiling Lan, Suha Kwak

Referring image segmentation, the task of segmenting any arbitrary entities described in free-form texts, opens up a variety of vision applications.

Image Segmentation Segmentation +2

Extending CLIP's Image-Text Alignment to Referring Image Segmentation

1 code implementation 14 Jun 2023 Seoyeon Kim, Minguk Kang, Dongwon Kim, Jaesik Park, Suha Kwak

Referring Image Segmentation (RIS) is a cross-modal task that aims to segment an instance described by a natural language expression.

Ranked #3 on Referring Expression Segmentation on RefCOCO testA (using extra training data)

Image Segmentation Referring Expression Segmentation +2

Improving Cross-Modal Retrieval with Set of Diverse Embeddings

1 code implementation CVPR 2023 Dongwon Kim, Namyup Kim, Suha Kwak

Set-based embedding seeks to encode a sample into a set of different embedding vectors that capture different semantics of the sample.

Cross-Modal Retrieval Retrieval
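
A minimal sketch of the set-based embedding idea described in the entry above, assuming a simple PyTorch head with K per-slot projections and a max-over-pairs set similarity; the actual slot design and similarity used in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetEmbeddingHead(nn.Module):
    """Hypothetical head that maps one feature vector to K diverse embeddings."""
    def __init__(self, in_dim=2048, embed_dim=512, num_slots=4):
        super().__init__()
        # One projection per slot; each slot is meant to capture a different semantic.
        self.projections = nn.ModuleList(
            [nn.Linear(in_dim, embed_dim) for _ in range(num_slots)]
        )

    def forward(self, x):                                              # x: (B, in_dim)
        slots = torch.stack([p(x) for p in self.projections], dim=1)   # (B, K, D)
        return F.normalize(slots, dim=-1)                              # unit-norm embeddings

def set_similarity(img_set, txt_set):
    """Max-over-pairs similarity between two embedding sets (one common choice)."""
    sim = torch.einsum('bkd,bld->bkl', img_set, txt_set)  # all slot pairs, per sample
    return sim.flatten(1).max(dim=1).values               # (B,)
```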

Self-Taught Metric Learning without Labels

no code implementations CVPR 2022 Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak

At the heart of our framework lies an algorithm that investigates the contexts of data in the embedding space to predict their class-equivalence relations as pseudo labels.

Metric Learning
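
A rough sketch of context-based pseudo labeling in the spirit of the entry above, assuming k-nearest-neighbor overlap in the embedding space as the notion of "context"; the paper's actual criterion and thresholds are not reproduced here, so this is purely illustrative.

```python
import torch
import torch.nn.functional as F

def pseudo_class_equivalence(embeddings, k=10, overlap_thresh=0.5):
    """Predict pairwise class-equivalence pseudo labels from embedding-space context.

    Two samples are treated as pseudo-positive when their k-nearest-neighbor sets
    overlap sufficiently (a hypothetical, illustrative criterion).
    """
    z = F.normalize(embeddings, dim=1)              # (N, D)
    sim = z @ z.t()                                 # cosine similarities
    knn = sim.topk(k + 1, dim=1).indices[:, 1:]     # (N, k), drop self

    n = z.size(0)
    membership = torch.zeros(n, n, device=z.device)
    membership.scatter_(1, knn, 1.0)                # row i marks i's neighbors

    # Fraction of shared neighbors between every pair of samples.
    overlap = (membership @ membership.t()) / k
    return overlap > overlap_thresh                 # (N, N) boolean pseudo labels
```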

ReSTR: Convolution-free Referring Image Segmentation Using Transformers

no code implementations CVPR 2022 Namyup Kim, Dongwon Kim, Cuiling Lan, Wenjun Zeng, Suha Kwak

Most existing methods for this task rely heavily on convolutional neural networks, which, however, have trouble capturing long-range dependencies between entities in the language expression and are not flexible enough to model interactions between the two different modalities.

Image Segmentation Referring Expression Segmentation +2

Embedding Transfer with Label Relaxation for Improved Metric Learning

2 code implementations CVPR 2021 Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak

Our method exploits pairwise similarities between samples in the source embedding space as the knowledge, and transfers them through a loss used for learning target embedding models.

Knowledge Distillation Metric Learning
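
A bare-bones sketch of transferring pairwise similarities from a source (teacher) embedding space to a target (student) space, as the entry above describes; the paper's label-relaxation objective is not reproduced here, so a generic soft matching of similarity distributions stands in for it.

```python
import torch
import torch.nn.functional as F

def similarity_transfer_loss(student_emb, teacher_emb, temperature=0.1):
    """Transfer knowledge as pairwise similarities between samples in a batch.

    Each sample's similarity distribution over the rest of the batch in the
    teacher space is used as a soft target for the student (illustrative only;
    not the paper's exact loss).
    """
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)

    logits_s = (s @ s.t()) / temperature
    logits_t = (t @ t.t()) / temperature

    # Exclude self-similarity from both distributions.
    mask = torch.eye(s.size(0), dtype=torch.bool, device=s.device)
    logits_s = logits_s.masked_fill(mask, -1e4)
    logits_t = logits_t.masked_fill(mask, -1e4)

    target = logits_t.softmax(dim=1)
    return F.kl_div(logits_s.log_softmax(dim=1), target, reduction='batchmean')
```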

Embedding Transfer via Smooth Contrastive Loss

no code implementations 1 Jan 2021 Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak

To this end, we design a new loss called smooth contrastive loss, which pulls together or pushes apart a pair of samples in a target embedding space with strength determined by their semantic similarity in the source embedding space; an analysis of the loss reveals that this property enables more important pairs to contribute more to learning the target embedding space.

Metric Learning Semantic Similarity +1
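
An illustrative sketch of a pair loss whose pull/push strength is modulated by source-space similarity, matching the description above in spirit; it assumes cosine similarities and a simple linear weighting, and is not the paper's exact smooth contrastive loss.

```python
import torch
import torch.nn.functional as F

def smooth_pair_loss(target_emb, source_emb, margin=0.5):
    """Pull pairs together or push them apart in the target space, with strength
    set by their similarity in the source space (illustrative stand-in)."""
    t = F.normalize(target_emb, dim=1)
    s = F.normalize(source_emb, dim=1)

    sim_t = t @ t.t()                          # target-space similarities
    w = (s @ s.t()).clamp(min=0.0, max=1.0)    # source similarity in [0, 1] as weight

    off_diag = ~torch.eye(t.size(0), dtype=torch.bool, device=t.device)
    pull = w * (1.0 - sim_t)                   # similar-in-source pairs are pulled together
    push = (1.0 - w) * F.relu(sim_t - margin)  # dissimilar-in-source pairs are pushed apart
    return (pull + push)[off_diag].mean()
```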

Proxy Anchor Loss for Deep Metric Learning

3 code implementations CVPR 2020 Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak

Pair-based losses can leverage fine-grained semantic relations between data points, but in general converge slowly due to their high training complexity.

Ranked #10 on Metric Learning on CUB-200-2011 (using extra training data)

Fine-Grained Image Classification Fine-Grained Vehicle Classification +1
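
A compact sketch of a proxy-anchor-style loss, assuming learnable class proxies, cosine similarity, a scale alpha, and a margin delta; see the linked code implementations for the authors' exact formulation and hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyAnchorStyleLoss(nn.Module):
    """Proxy-based loss in the style of the entry above (illustrative sketch)."""
    def __init__(self, num_classes, embed_dim, alpha=32.0, delta=0.1):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim) * 0.01)
        self.alpha, self.delta = alpha, delta

    def forward(self, embeddings, labels):
        sim = F.normalize(embeddings, dim=1) @ F.normalize(self.proxies, dim=1).t()
        pos_mask = F.one_hot(labels, self.proxies.size(0)).bool()    # (B, C)
        neg_mask = ~pos_mask

        # Each proxy acts as an anchor: aggregate its positive and negative samples.
        pos_term = torch.exp(-self.alpha * (sim - self.delta)).masked_fill(neg_mask, 0.0).sum(0)
        neg_term = torch.exp(self.alpha * (sim + self.delta)).masked_fill(pos_mask, 0.0).sum(0)

        with_pos = pos_mask.any(0)                                   # proxies present in the batch
        loss_pos = torch.log1p(pos_term[with_pos]).sum() / with_pos.sum().clamp(min=1)
        loss_neg = torch.log1p(neg_term).sum() / self.proxies.size(0)
        return loss_pos + loss_neg
```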

Semi-supervised Learning with Deep Generative Models for Asset Failure Prediction

no code implementations 4 Sep 2017 Andre S. Yoon, Taehoon Lee, Yongsub Lim, Deokwoo Jung, Philgyun Kang, Dongwon Kim, Keuntae Park, Yongjin Choi

This work presents a novel semi-supervised learning approach for data-driven modeling of asset failures when health status is only partially known in historical data.
