Search Results for author: Zhong-Yu Li

Found 5 papers, 4 papers with code

Enhancing Representations through Heterogeneous Self-Supervised Learning

no code implementations • 8 Oct 2023 • Zhong-Yu Li, Bo-Wen Yin, Yongxiang Liu, Li Liu, Ming-Ming Cheng

Thus, we propose Heterogeneous Self-Supervised Learning (HSSL), which forces a base model to learn from an auxiliary head whose architecture differs from that of the base model (sketched after the task tags below).

Image Classification • Instance Segmentation • +5
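A minimal sketch of the HSSL idea, assuming a PyTorch setup: the base model is a toy CNN, the auxiliary head a single transformer encoder layer, and the stop-gradient cosine objective is an illustrative choice, not the paper's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN base producing a grid of token features (B, N, C).
class BaseEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        f = self.conv(x)                     # (B, C, H, W)
        return f.flatten(2).transpose(1, 2)  # (B, H*W, C) tokens

# Auxiliary head that is architecturally heterogeneous from the CNN
# base (here: one transformer layer; a hypothetical stand-in).
aux_head = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)

base = BaseEncoder()
x = torch.randn(8, 3, 64, 64)
tokens = base(x)                    # base-model features
target = aux_head(tokens).detach()  # auxiliary-head features (stop-grad)

# Pull base features toward the auxiliary head's output so the base
# absorbs what the heterogeneous head computes (a simplified objective).
loss = 1 - F.cosine_similarity(tokens, target, dim=-1).mean()
loss.backward()
```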

RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks

2 code implementations • 14 Jun 2022 • ShangHua Gao, Zhong-Yu Li, Qi Han, Ming-Ming Cheng, Liang Wang

Our search scheme combines a global search that finds coarse receptive field combinations with a local search that further refines them (see the sketch after the task tags below).

Action Segmentation • Instance Segmentation • +5
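The global-to-local scheme can be illustrated with a toy objective. Here the search space is a tuple of per-layer dilation rates and `evaluate` stands in for validation accuracy; both are hypothetical simplifications of the paper's search over real models.

```python
import itertools

# Toy stand-in for the search objective: the "score" of a model built
# with the given dilation rates (hypothetical; peaks at (3, 5, 9)).
def evaluate(rates):
    target = (3, 5, 9)
    return -sum((r - t) ** 2 for r, t in zip(rates, target))

# Global search: coarse grid over dilation-rate combinations.
coarse = [1, 4, 8, 16]
best = max(itertools.product(coarse, repeat=3), key=evaluate)

# Local search: refine each rate within a small neighborhood of the
# current winner, repeated for a few rounds.
for _ in range(3):
    neighbors = itertools.product(*[[max(1, r - 1), r, r + 1] for r in best])
    best = max(neighbors, key=evaluate)

print("found receptive field combination:", best)
```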

SERE: Exploring Feature Self-relation for Self-supervised Transformer

1 code implementation • 10 Jun 2022 • Zhong-Yu Li, ShangHua Gao, Ming-Ming Cheng

Specifically, instead of conducting self-supervised learning solely on feature embeddings from multiple views, we utilize feature self-relations, i.e., spatial and channel self-relations, for self-supervised learning (see the sketch after the task tags below).

Relation • Self-Supervised Learning • +1
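A minimal sketch of a spatial self-relation, assuming ViT-style token features of shape (B, N, C); the temperature and the KL alignment loss between views are assumptions for illustration, and SERE also defines a channel-wise analogue.

```python
import torch
import torch.nn.functional as F

def spatial_self_relation(feat, tau=0.1):
    # feat: (B, N, C) token features; relate every token to every other.
    feat = F.normalize(feat, dim=-1)
    rel = feat @ feat.transpose(1, 2) / tau  # (B, N, N) similarities
    return rel.softmax(dim=-1)               # row-normalized relation

# Two augmented views of the same image should produce similar
# self-relations; align them with a KL divergence (one possible loss).
f1, f2 = torch.randn(2, 196, 384), torch.randn(2, 196, 384)
r1, r2 = spatial_self_relation(f1), spatial_self_relation(f2)
loss = F.kl_div(r1.log(), r2, reduction="batchmean")
```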

Large-scale Unsupervised Semantic Segmentation

3 code implementations • 6 Jun 2021 • ShangHua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr

In this work, we propose the new problem of large-scale unsupervised semantic segmentation (LUSS), together with a newly created benchmark dataset, to advance research on this problem.

Representation Learning • Segmentation • +1

Global2Local: Efficient Structure Search for Video Action Segmentation

2 code implementations • CVPR 2021 • Shang-Hua Gao, Qi Han, Zhong-Yu Li, Pai Peng, Liang Wang, Ming-Ming Cheng

Our search scheme combines a global search that finds coarse combinations with a local search that further refines the receptive field combination patterns.

Action Segmentation • Segmentation
