Search Results for author: Daize Dong

Found 5 papers, 3 papers with code

iDAT: inverse Distillation Adapter-Tuning

1 code implementation • 23 Mar 2024 • Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Daize Dong, Suncheng Xiang, Ting Liu, Yuzhuo Fu

The Adapter-Tuning (AT) method freezes a pre-trained model and introduces trainable adapter modules to acquire downstream knowledge, thereby calibrating the model for better adaptation to downstream tasks.

Image Classification • Knowledge Distillation
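The adapter-tuning setup described above (a frozen backbone plus small trainable modules) is commonly realized as a bottleneck layer with a residual connection. A minimal numpy sketch, with all names and initialization choices hypothetical rather than taken from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class Adapter:
    """Bottleneck adapter sketch: down-project, nonlinearity, up-project,
    with a residual connection around the whole block."""
    def __init__(self, d_model, d_bottleneck, rng):
        self.W_down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
        # Zero-init the up-projection so the adapter starts as an identity map,
        # leaving the frozen pre-trained model's behavior unchanged at step 0.
        self.W_up = np.zeros((d_bottleneck, d_model))

    def __call__(self, h):
        return h + relu(h @ self.W_down) @ self.W_up

rng = np.random.default_rng(0)
adapter = Adapter(16, 4, rng)     # only these small matrices are trained
h = rng.normal(size=(2, 16))      # hidden states from the frozen backbone
out = adapter(h)
print(out.shape)  # (2, 16)
```

Because the up-projection starts at zero, the adapter initially passes hidden states through unchanged; training then moves only the adapter weights while the backbone stays frozen.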

A Graph is Worth $K$ Words: Euclideanizing Graph using Pure Transformer

no code implementations • 4 Feb 2024 • Zhangyang Gao, Daize Dong, Cheng Tan, Jun Xia, Bozhen Hu, Stan Z. Li

Despite recent GNN and Graphformer efforts to encode graphs as Euclidean vectors, recovering the original graph from those vectors remains a challenge.

Graph Classification • Graph Generation • +1

PAD-Net: An Efficient Framework for Dynamic Networks

1 code implementation • 10 Nov 2022 • Shwai He, Liang Ding, Daize Dong, Boan Liu, Fuqiang Yu, DaCheng Tao

The main contributions of our work are challenging the common wisdom in dynamic networks and proposing a partially dynamic network, PAD-Net, which transforms redundant dynamic parameters into static ones.

Image Classification

SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters

1 code implementation • 9 Oct 2022 • Shwai He, Liang Ding, Daize Dong, Miao Zhang, DaCheng Tao

Adapter Tuning, which freezes the pretrained language models (PLMs) and fine-tunes only a few extra modules, has become an appealing, efficient alternative to full-model fine-tuning.

Network Pruning
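Per its title and the Network Pruning tag, SparseAdapter improves adapters' parameter-efficiency via pruning. A generic magnitude-pruning sketch (the function name and the uniform layer-wise criterion are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of entries in a weight
    matrix (generic magnitude-based pruning sketch)."""
    k = int(weights.size * sparsity)  # number of entries to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 8))          # e.g. an adapter projection matrix
W_sparse = magnitude_prune(W, 0.5)   # keep only the larger half of the weights
print((W_sparse == 0).mean())  # 0.5
```

Pruning adapters this way shrinks their trainable-parameter count, which is the kind of efficiency gain the paper targets.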

SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution

no code implementations • 5 Apr 2022 • Shwai He, Chenbo Jiang, Daize Dong, Liang Ding

Dynamic convolution achieves better performance for efficient CNNs at the cost of only a negligible increase in FLOPs.
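Dynamic convolution, as referenced above, replaces a single fixed kernel with an input-dependent mixture of several candidate kernels. A toy 1-D numpy sketch under simplifying assumptions (global-average-pooled attention, a single channel; all names hypothetical):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv1d(x, kernels, attn_W):
    """Dynamic convolution sketch: compute attention weights from the
    input, mix K candidate kernels, then convolve with the mixed kernel.
    kernels: (K, k) candidate kernels; attn_W: (K,) attention parameters."""
    pooled = x.mean()                                   # global average pooling
    weights = softmax(attn_W * pooled)                  # input-dependent attention over K kernels
    mixed = (weights[:, None] * kernels).sum(axis=0)    # aggregate into one (k,) kernel
    # Only one convolution is run, so the FLOPs overhead vs. a static
    # kernel is just the tiny attention-and-mixing step.
    return np.convolve(x, mixed, mode="valid"), weights

rng = np.random.default_rng(1)
x = rng.normal(size=32)
kernels = rng.normal(size=(4, 3))   # K=4 candidate 3-tap kernels
attn_W = rng.normal(size=4)
y, w = dynamic_conv1d(x, kernels, attn_W)
print(y.shape)  # (30,)
```

The attention weights `w` sum to 1, so the mixed kernel is a convex combination of the candidates; the parameter cost is K kernels instead of one, which is the redundancy SD-Conv targets.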
