Search Results for author: Yiping Deng

Found 8 papers, 6 papers with code

CLOSE: Curriculum Learning On the Sharing Extent Towards Better One-shot NAS

1 code implementation 16 Jul 2022 Zixuan Zhou, Xuefei Ning, Yi Cai, Jiashu Han, Yiping Deng, Yuhan Dong, Huazhong Yang, Yu Wang

Specifically, we train the supernet with a large sharing extent (an easier curriculum) at the beginning and gradually decrease the sharing extent of the supernet (a harder curriculum).

Neural Architecture Search
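As a rough illustration of the curriculum described in the CLOSE entry above, a training loop might decay the sharing extent linearly over epochs. Here `set_sharing_extent` and `sample_and_forward` are hypothetical hooks; the paper's actual schedule and supernet API may differ.

    # Toy curriculum schedule for one-shot NAS supernet training (illustrative only).
    def sharing_extent(epoch, total_epochs, start=1.0, end=0.1):
        """Linearly decay the parameter-sharing extent from an easy to a hard curriculum."""
        frac = epoch / max(total_epochs - 1, 1)
        return start + (end - start) * frac

    def train_supernet(supernet, loader, optimizer, total_epochs=100):
        for epoch in range(total_epochs):
            # Hypothetical hook: large sharing extent early (easier), smaller later (harder).
            supernet.set_sharing_extent(sharing_extent(epoch, total_epochs))
            for x, y in loader:
                loss = supernet.sample_and_forward(x, y)  # train one sampled sub-network
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()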

Positive and Unlabeled Federated Learning

no code implementations 29 Sep 2021 Lin Xinyang, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang

We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset owing to limited resources and time.

Federated Learning
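As a toy illustration of this PU data regime (not the papers' algorithm), each client could hold only a small labeled-positive subset alongside a larger unlabeled pool; `make_client_pu_split` is an illustrative helper, not code from the work.

    import random

    def make_client_pu_split(samples, positive_label, label_budget=50, seed=0):
        """Simulate the PU setting: a client labels only a few positives; the rest stays unlabeled."""
        rng = random.Random(seed)
        pos_idx = [i for i, (_, y) in enumerate(samples) if y == positive_label]
        labeled = set(rng.sample(pos_idx, min(label_budget, len(pos_idx))))
        labeled_pos = [samples[i][0] for i in labeled]
        unlabeled = [x for i, (x, _) in enumerate(samples) if i not in labeled]
        return labeled_pos, unlabeled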

Greedy Network Enlarging

1 code implementation 31 Jul 2021 Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang

Recent studies on deep convolutional neural networks present a simple paradigm of architecture design, i.e., models with more MACs typically achieve better accuracy, as exemplified by EfficientNet and RegNet.
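For context, MACs (multiply-accumulate operations) for a 2-D convolution can be estimated with the standard back-of-the-envelope formula below; this is generic accounting, not code from the paper.

    def conv2d_macs(in_ch, out_ch, kernel, out_h, out_w, groups=1):
        """MACs of a 2-D convolution: one multiply-accumulate per kernel tap per output element."""
        return (in_ch // groups) * out_ch * kernel * kernel * out_h * out_w

    # Example: a 3x3 conv, 64 -> 128 channels, on a 56x56 output feature map.
    print(conv2d_macs(64, 128, 3, 56, 56))  # 231,211,008 MACs (~0.23 GMACs)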

Augmented Shortcuts for Vision Transformers

4 code implementations NeurIPS 2021 Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang

Transformer models have achieved great progress on computer vision tasks recently.

Federated Learning with Positive and Unlabeled Data

1 code implementation 21 Jun 2021 Xinyang Lin, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang

We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset owing to limited resources and time.

Federated Learning

Data-Free Knowledge Distillation for Image Super-Resolution

no code implementations CVPR 2021 Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang

Experiments on various datasets and architectures demonstrate that the proposed method can effectively learn portable student networks without the original data, e.g., with only a 0.16 dB PSNR drop on Set5 for x2 super-resolution.

Data-free Knowledge Distillation, Image Super-Resolution +1
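The 0.16 dB figure above refers to PSNR; for images scaled to [0, 1] it can be computed roughly as follows (a generic metric sketch, not the paper's evaluation code).

    import numpy as np

    def psnr(pred, target, max_val=1.0):
        """Peak signal-to-noise ratio in dB between two images with values in [0, max_val]."""
        mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)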

Manifold Regularized Dynamic Network Pruning

7 code implementations CVPR 2021 Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, Dacheng Tao, Chang Xu

Then, the manifold relationship between instances and the pruned sub-networks is aligned during the training procedure.

Network Pruning
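One way to read the alignment step above, sketched under the assumption that the "manifold relationship" is captured by pairwise similarities within a batch (an interpretation for illustration, not the authors' exact loss):

    import torch
    import torch.nn.functional as F

    def manifold_alignment_loss(features, channel_masks):
        """Align pairwise instance similarities (from features) with pairwise sub-network
        similarities (from per-instance channel masks), so that similar inputs are routed
        to similar pruned sub-networks."""
        f = F.normalize(features.flatten(1), dim=1)        # (B, D) instance embeddings
        m = F.normalize(channel_masks.flatten(1), dim=1)   # (B, C) per-instance channel masks
        sim_f = f @ f.t()                                  # instance-instance similarity
        sim_m = m @ m.t()                                  # sub-network-sub-network similarity
        return F.mse_loss(sim_m, sim_f)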

Pre-Trained Image Processing Transformer

6 code implementations CVPR 2021 Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao

To maximally exploit the capability of the transformer, we propose to utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs.

Ranked #1 on Single Image Deraining on Rain100L (using extra training data)

Color Image Denoising, Contrastive Learning +2
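A hedged sketch of how clean/corrupted training pairs could be synthesized from ImageNet images for several low-level tasks; the degradations, parameters, and helper below are illustrative assumptions, and IPT's actual pipeline may differ.

    import numpy as np

    def make_corrupted_pair(img, task, scale=2, sigma=30, rain_strength=0.2, rng=None):
        """Given a clean HxWx3 float image in [0, 1], return a (corrupted, clean) pair for one task."""
        rng = rng or np.random.default_rng()
        if task == "denoise":                      # additive Gaussian noise
            noisy = img + rng.normal(0.0, sigma / 255.0, img.shape)
            return np.clip(noisy, 0.0, 1.0), img
        if task == "sr":                           # naive strided downsampling (bicubic in practice)
            return img[::scale, ::scale], img
        if task == "derain":                       # crude synthetic rain streaks on ~1% of pixels
            streaks = (rng.random(img.shape[:2]) > 0.99)[..., None] * rain_strength
            return np.clip(img + streaks, 0.0, 1.0), img
        raise ValueError(f"unknown task: {task}")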
