Search Results for author: Xiaoyuan Yu

Found 5 papers, 1 paper with code

Multimodal High-order Relation Transformer for Scene Boundary Detection

no code implementations · ICCV 2023 · Xi Wei, Zhangxiang Shi, Tianzhu Zhang, Xiaoyuan Yu, Lei Xiao

Scene boundary detection breaks down long videos into meaningful story-telling units and plays a crucial role in high-level video understanding.

Tasks: Boundary Detection · Relation · +1

A Close Look at Few-shot Real Image Super-resolution from the Distortion Relation Perspective

no code implementations · 25 Nov 2021 · Xin Li, Xin Jin, Jun Fu, Xiaoyuan Yu, Bei Tong, Zhibo Chen

Under this brand-new scenario, we propose Distortion Relation guided Transfer Learning (DRTL) for few-shot RealSR by transferring the rich restoration knowledge from auxiliary distortions (i.e., synthetic distortions) to the target RealSR under the guidance of distortion relation.
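The core idea of DRTL, selecting which auxiliary (synthetic) distortion to transfer from by measuring how related it is to the target real-world distortion, can be sketched as follows. This is a minimal illustration, not the paper's method: the `distortion_relation` function here is a simple cosine similarity over feature vectors, whereas the paper learns the relation; all function names are hypothetical.

```python
import numpy as np

def distortion_relation(feat_a, feat_b):
    """Cosine similarity between two distortion feature vectors
    (an illustrative stand-in for the paper's learned distortion relation)."""
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_auxiliary(target_feat, aux_feats):
    """Choose the synthetic distortion most related to the target RealSR
    distortion, i.e. the most promising source task for transfer."""
    scores = [distortion_relation(target_feat, f) for f in aux_feats]
    return int(np.argmax(scores)), scores
```

Given such a relation measure, knowledge transfer would then prioritize the auxiliary distortion with the highest score when pre-training the restoration network.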

Tasks: Image Restoration · Image Super-Resolution · +4

TA2N: Two-Stage Action Alignment Network for Few-shot Action Recognition

1 code implementation · 10 Jul 2021 · Shuyuan Li, Huabin Liu, Rui Qian, Yuxi Li, John See, Mengjuan Fei, Xiaoyuan Yu, Weiyao Lin

The first stage locates the action by learning a temporal affine transform, which warps each video feature to its action duration while dismissing the action-irrelevant features (e.g., background).
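A 1-D temporal affine transform of the kind described above can be sketched as a learned (scale, shift) pair applied to frame timestamps, with linear interpolation to resample the feature sequence. This is an illustrative sketch under simplified assumptions, not the paper's implementation: TA2N predicts the transform parameters from the video and applies the warp differentiably inside the network.

```python
import numpy as np

def temporal_affine_warp(features, scale, shift):
    """Warp a [T, C] feature sequence with a 1-D affine transform.

    Each output timestamp tau in [0, 1] samples the input at
    scale * tau + shift (clipped to the valid range), using linear
    interpolation between the two nearest frames.
    """
    T = features.shape[0]
    tau = np.linspace(0.0, 1.0, T)                 # normalized output times
    src = np.clip(scale * tau + shift, 0.0, 1.0) * (T - 1)
    lo = np.floor(src).astype(int)                 # lower frame index
    hi = np.minimum(lo + 1, T - 1)                 # upper frame index
    w = (src - lo)[:, None]                        # interpolation weight
    return (1 - w) * features[lo] + w * features[hi]
```

With scale < 1 the warp zooms into the start of the clip; a suitable (scale, shift) crops the sequence to the action duration, which is the effect the first stage aims for.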

Tasks: Few-Shot Action Recognition · +2

Uncertainty Guided Collaborative Training for Weakly Supervised Temporal Action Detection

no code implementations · CVPR 2021 · Wenfei Yang, Tianzhu Zhang, Xiaoyuan Yu, Qi Tian, Yongdong Zhang, Feng Wu

To alleviate this problem, we propose a novel Uncertainty Guided Collaborative Training (UGCT) strategy, which mainly includes two key designs: (1) The first design is an online pseudo label generation module, in which the RGB and FLOW streams work collaboratively to learn from each other.
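The collaborative pseudo-label generation described above can be sketched as follows: for each snippet, the stream that is more confident provides the pseudo label that supervises the other. This is a simplified illustration, assuming predictive entropy as the uncertainty measure; UGCT learns uncertainty rather than using entropy, and the function names are hypothetical.

```python
import numpy as np

def entropy(p):
    """Predictive entropy over class probabilities, used here as a
    simple uncertainty proxy (an illustrative stand-in for the
    learned uncertainty in UGCT)."""
    p = np.clip(p, 1e-8, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def mutual_pseudo_labels(rgb_probs, flow_probs):
    """For each snippet, let the more confident (lower-entropy) stream
    generate the pseudo label that supervises the other stream."""
    u_rgb = entropy(rgb_probs)
    u_flow = entropy(flow_probs)
    rgb_leads = u_rgb < u_flow
    labels = np.where(rgb_leads,
                      np.argmax(rgb_probs, axis=-1),
                      np.argmax(flow_probs, axis=-1))
    return labels, rgb_leads
```

Gating the label exchange by uncertainty is what keeps noisy predictions from one stream from corrupting the other during the online collaborative training.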

Tasks: Action Detection · Pseudo Label
