Search Results for author: Dongshuo Yin

Found 4 papers, 1 paper with code

Adapter is All You Need for Tuning Visual Tasks

1 code implementation • 25 Nov 2023 • Dongshuo Yin, Leiyi Hu, Bin Li, Youqun Zhang

To fully demonstrate the practicality and generality of Mona, we conduct experiments on multiple representative visual tasks, including instance segmentation on COCO, semantic segmentation on ADE20K, object detection on Pascal VOC, and image classification on several common datasets.

Image Classification • Instance Segmentation • +4

Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions

no code implementations • 16 Jun 2023 • Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, Jing Bai

We provide a gradient backpropagation highway for low-rank adapters which eliminates the need for expensive backpropagation through the frozen pre-trained model, resulting in substantial savings of training memory and training time.
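The idea above can be illustrated with a minimal numpy sketch: the frozen backbone is treated as a constant during training, and a low-rank adapter rides on its output, so adapter gradients depend only on the input and the upstream gradient. All names, shapes, and the additive adapter placement here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (illustrative, not from the paper).
d, r, n = 8, 2, 4                        # feature dim, adapter rank, batch size
W_frozen = rng.standard_normal((d, d))   # frozen pre-trained weight (never updated)
A = rng.standard_normal((d, r)) * 0.1    # low-rank adapter: down-projection
B = np.zeros((r, d))                     # up-projection, zero-initialized

x = rng.standard_normal((n, d))

# Forward: frozen path plus a low-rank side path added on top.
h_frozen = x @ W_frozen                  # constant w.r.t. training
h = h_frozen + (x @ A) @ B               # adapter output rides on frozen features

# Backward "highway": given dL/dh, the adapter gradients need only x and dL/dh.
# No gradient flows through W_frozen, so its intermediate activations need not
# be retained for backpropagation -- the source of the memory/time savings.
dL_dh = np.ones_like(h)                  # stand-in for the upstream gradient
dB = (x @ A).T @ dL_dh                   # gradient for the up-projection
dA = x.T @ (dL_dh @ B.T)                 # gradient for the down-projection
```

Because B starts at zero, the adapter initially leaves the frozen features unchanged, a common trick for stable adapter training.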

Transfer Learning

1% VS 100%: Parameter-Efficient Low Rank Adapter for Dense Predictions

no code implementations • CVPR 2023 • Dongshuo Yin, Yiran Yang, Zhechao Wang, Hongfeng Yu, Kaiwen Wei, Xian Sun

Fine-tuning large-scale pre-trained vision models to downstream tasks is a standard technique for achieving state-of-the-art performance on computer vision benchmarks.

Instance Segmentation • Object Detection • +3

Beyond the Limitation of Monocular 3D Detector via Knowledge Distillation

no code implementations • ICCV 2023 • Yiran Yang, Dongshuo Yin, Xuee Rong, Xian Sun, Wenhui Diao, Xinming Li

Moreover, we construct a depth-guided matrix from the predicted depth gap between the teacher and the student, which helps the model learn more about farther objects during prediction-level distillation.
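A minimal sketch of what a depth-guided weighting might look like: predictions with a larger teacher-student depth gap (often farther objects) receive a larger distillation weight. The function name, the weighting scheme, and the squared-error distillation term are all assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def depth_guided_kd_loss(student_pred, teacher_pred, student_depth, teacher_depth):
    """Illustrative prediction-level distillation loss weighted by depth gap.

    A larger |teacher_depth - student_depth| for an object scales up that
    object's distillation term, pushing the student to match the teacher
    more closely on harder (typically farther) objects.
    """
    gap = np.abs(teacher_depth - student_depth)          # per-object depth gap
    w = 1.0 + gap / (gap.max() + 1e-8)                   # weights in [1, 2]
    per_obj = ((student_pred - teacher_pred) ** 2).mean(axis=-1)
    return float((w * per_obj).mean())
```

When the student already matches the teacher, every per-object term is zero regardless of the weights, so the loss vanishes as expected.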

Knowledge Distillation
