1 code implementation • 8 Dec 2023 • Hanjung Kim, Jaehyun Kang, Miran Heo, Sukjun Hwang, Seoung Wug Oh, Seon Joo Kim
By effectively resolving the over-reliance on location information, we achieve state-of-the-art results on YouTube-VIS 2019/2021 and Occluded VIS (OVIS).
1 code implementation • CVPR 2023 • Miran Heo, Sukjun Hwang, Jeongseok Hyun, Hanjung Kim, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
Notably, we greatly outperform the state-of-the-art on the long VIS benchmark (OVIS), improving by 5.6 AP with a ResNet-50 backbone.
Ranked #6 on Video Instance Segmentation on YouTube-VIS 2021 (using extra training data)
1 code implementation • 9 Jun 2022 • Miran Heo, Sukjun Hwang, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens.
Ranked #11 on Video Instance Segmentation on YouTube-VIS 2021 (using extra training data)
1 code implementation • CVPR 2022 • Sukjun Hwang, Miran Heo, Seoung Wug Oh, Seon Joo Kim
The set classifier is plug-and-play with existing object trackers and substantially improves performance on long-tailed object tracking.
1 code implementation • NeurIPS 2021 • Sukjun Hwang, Miran Heo, Seoung Wug Oh, Seon Joo Kim
We propose a novel end-to-end solution for video instance segmentation (VIS) based on transformers.
Ranked #32 on Video Instance Segmentation on YouTube-VIS validation
1 code implementation • CVPR 2021 • Gunhee Nam, Miran Heo, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
Since the existing datasets are not suitable to validate our method, we build a new polygonal point set tracking dataset and demonstrate the superior performance of our method over the baselines and existing contour-based VOS methods.