Search Results for author: Songming Zhang

Found 6 papers, 4 papers with code

Mixture Data for Training Cannot Ensure Out-of-distribution Generalization

no code implementations • 25 Dec 2023 • Songming Zhang, Yuxiao Luo, Qizhou Wang, Haoang Chi, Xiaofeng Chen, Bo Han, Jinyan Li

Deep neural networks often struggle to generalize to out-of-distribution (OOD) data, and a notable theoretical gap remains between the contributing factors and their respective impacts.

Data Augmentation • Out-of-Distribution Generalization

Revisiting Knowledge Distillation under Distribution Shift

1 code implementation • 25 Dec 2023 • Songming Zhang, Ziyu Lyu, Xiaofeng Chen

Knowledge distillation transfers knowledge from large models into small models and has recently achieved remarkable results (a minimal loss sketch follows this entry).

Data Augmentation • Knowledge Distillation
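The entry above concerns vanilla knowledge distillation. For reference, a minimal sketch of the standard temperature-scaled distillation objective (the classic Hinton-style formulation, a common baseline rather than anything specific to this benchmark paper) could look like:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation objective: a weighted sum of the
    hard-label cross-entropy and the temperature-scaled KL divergence
    to the teacher's soft targets."""
    # Soft targets from the (frozen) large teacher model.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The T^2 factor keeps soft-target gradients on the same scale
    # as the hard-label term as the temperature varies.
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```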

A Quality-based Syntactic Template Retriever for Syntactically-controlled Paraphrase Generation

1 code implementation • 20 Oct 2023 • Xue Zhang, Songming Zhang, Yunlong Liang, Yufeng Chen, Jian Liu, Wenjuan Han, Jinan Xu

Furthermore, for situations requiring multiple paraphrases for each source sentence, we design a Diverse Templates Search (DTS) algorithm, which enhances diversity among paraphrases without sacrificing quality (see the sketch after this entry).

Data Augmentation • Paraphrase Generation +2
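The DTS algorithm itself is only named in the snippet above, so the following is a purely hypothetical illustration of quality-constrained diverse selection: filter candidate templates by a quality score, then greedily pick the candidate farthest from those already chosen. The `quality` and `distance` functions are assumed placeholders, not the paper's actual components.

```python
def diverse_template_search(templates, quality, distance, k=3, min_quality=0.5):
    """Hypothetical greedy selection in the spirit of a diverse
    templates search: keep templates whose quality clears a threshold,
    then repeatedly add the candidate with the largest minimum
    distance to the already-selected set."""
    pool = [t for t in templates if quality(t) >= min_quality]
    pool.sort(key=quality, reverse=True)
    if not pool:
        return []
    selected = [pool.pop(0)]  # start from the highest-quality template
    while pool and len(selected) < k:
        # Maximize the minimum distance to the selected set (max-min).
        best = max(pool, key=lambda t: min(distance(t, s) for s in selected))
        selected.append(best)
        pool.remove(best)
    return selected
```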

Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation

1 code implementation • 14 May 2023 • Songming Zhang, Yunlong Liang, Shuaibo Wang, Wenjuan Han, Jian Liu, Jinan Xu, Yufeng Chen

In this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the teacher's top-1 predictions, which also helps us build a potential connection between word- and sequence-level KD (a simplified sketch follows this entry).

Knowledge Distillation • Machine Translation +2
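To make the top-1 observation concrete, here is a simplified sketch of word-level KD in which the student is supervised only by the teacher's argmax token at each target position. It illustrates the finding quoted above under assumed shapes; it is not the paper's proposed objective.

```python
import torch
import torch.nn.functional as F

def top1_word_kd_loss(student_logits, teacher_logits):
    """Word-level KD restricted to the teacher's top-1 prediction at
    each target position. Both logit tensors are assumed to have
    shape (batch, tgt_len, vocab)."""
    top1 = teacher_logits.argmax(dim=-1)               # (batch, tgt_len)
    log_probs = F.log_softmax(student_logits, dim=-1)  # (batch, tgt_len, vocab)
    # Negative log-likelihood of the teacher's argmax token.
    nll = -log_probs.gather(-1, top1.unsqueeze(-1)).squeeze(-1)
    return nll.mean()
```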

Conditional Bilingual Mutual Information Based Adaptive Training for Neural Machine Translation

1 code implementation • ACL 2022 • Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, Jie Zhou

Token-level adaptive training approaches can alleviate the token imbalance problem and thus improve neural machine translation by re-weighting the losses of different target tokens based on specific statistical metrics (e.g., token frequency or mutual information); a minimal re-weighting sketch follows this entry.

Language Modelling • Machine Translation +2
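The snippet above describes the general recipe of token-level adaptive training: compute a per-token cross-entropy and scale it by a statistical weight. A minimal sketch, with a generic `token_weights` tensor standing in for the paper's conditional bilingual mutual information weights:

```python
import torch
import torch.nn.functional as F

def adaptive_token_loss(logits, targets, token_weights, pad_id=0):
    """Token-level adaptive training: re-weight each target token's
    cross-entropy by a per-token statistical metric (e.g., frequency-
    or mutual-information-based). `logits` is (batch, tgt_len, vocab);
    `targets` and `token_weights` are (batch, tgt_len)."""
    vocab = logits.size(-1)
    ce = F.cross_entropy(logits.view(-1, vocab), targets.view(-1),
                         reduction="none").view_as(targets)
    mask = (targets != pad_id).float()  # ignore padding positions
    weighted = ce * token_weights * mask
    return weighted.sum() / mask.sum().clamp(min=1.0)
```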
