Search Results for author: Shipeng Yan

Found 12 papers, 7 papers with code

MILD: Modeling the Instance Learning Dynamics for Learning with Noisy Labels

1 code implementation · 20 Jun 2023 · Chuanyang Hu, Shipeng Yan, Zhitong Gao, Xuming He

Although deep learning has achieved great success, it often relies on large amounts of training data with accurate labels, which are expensive and time-consuming to collect.

Learning with Noisy Labels · Memorization

General Incremental Learning with Domain-aware Categorical Representations

no code implementations · CVPR 2022 · Jiangwei Xie, Shipeng Yan, Xuming He

Continual learning is an important problem for achieving human-level intelligence in real-world applications, as an agent must continuously accumulate knowledge in response to streaming data/tasks.

Class Incremental Learning · Incremental Learning

Budget-aware Few-shot Learning via Graph Convolutional Network

no code implementations · 7 Jan 2022 · Shipeng Yan, Songyang Zhang, Xuming He

In this work, we introduce a new budget-aware few-shot learning problem that not only aims to learn novel object categories, but also needs to select informative examples to annotate in order to achieve data efficiency.

Few-Shot Learning · Informativeness

How Well Does Self-Supervised Pre-Training Perform with Streaming ImageNet?

no code implementations · NeurIPS Workshop ImageNet_PPF 2021 · Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained.

Self-Supervised Learning

An EM Framework for Online Incremental Learning of Semantic Segmentation

1 code implementation · 8 Aug 2021 · Shipeng Yan, Jiale Zhou, Jiangwei Xie, Songyang Zhang, Xuming He

Incremental learning of semantic segmentation has emerged as a promising strategy for visual scene interpretation in the open-world setting.

Incremental Learning · Missing Labels · +2

How Well Does Self-Supervised Pre-Training Perform with Streaming Data?

no code implementations · ICLR 2022 · Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained.

Representation Learning · Self-Supervised Learning

Dynamic Context Correspondence Network for Semantic Alignment

1 code implementation · ICCV 2019 · Shuaiyi Huang, Qiuyue Wang, Songyang Zhang, Shipeng Yan, Xuming He

We instantiate our strategy by designing an end-to-end learnable deep network, named the Dynamic Context Correspondence Network (DCCNet).

Semantic Correspondence · Weakly-Supervised Learning

LatentGNN: Learning Efficient Non-local Relations for Visual Recognition

1 code implementation · 28 May 2019 · Songyang Zhang, Shipeng Yan, Xuming He

A promising strategy is to model the feature context by a fully-connected graph neural network (GNN), which augments traditional convolutional features with an estimated non-local context representation.
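The core idea of routing non-local context through a compact intermediate representation, rather than a dense fully-connected graph over all positions, can be illustrated with a minimal sketch. This is not the paper's exact formulation; the function names, projection matrices (`w_down`, `w_up`), and softmax normalizations here are illustrative assumptions. The sketch aggregates per-position features into a few latent nodes and broadcasts the resulting context back, avoiding the quadratic cost of all-pairs affinities:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def latent_nonlocal(features, w_down, w_up):
    """Sketch of latent-node non-local context aggregation (assumed form).

    features: (N, C) array of N spatial positions with C channels.
    w_down:   (C, K) projection scoring affinity of positions to K latent nodes.
    w_up:     (C, K) projection for distributing latent context back.
    Returns features augmented with a non-local context term, shape (N, C).
    """
    # Affinity of each position to each latent node, normalized over positions,
    # then pool features into K latent node states: O(N*K) instead of O(N^2).
    a = softmax(features @ w_down, axis=0)   # (N, K)
    latent = a.T @ features                  # (K, C)

    # Each position attends over the K latent nodes to gather global context.
    b = softmax(features @ w_up, axis=1)     # (N, K)
    context = b @ latent                     # (N, C)

    # Residual connection: augment local convolutional features with context.
    return features + context
```

With K latent nodes fixed and small, the cost scales linearly in the number of positions N, which is the motivation for replacing a fully-connected position graph with a low-rank latent structure.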
