Search Results for author: Qingxiu Dong

Found 23 papers, 12 papers with code

Towards Optimal Learning of Language Models

no code implementations • 27 Feb 2024 • Yuxian Gu, Li Dong, Yaru Hao, Qingxiu Dong, Minlie Huang, Furu Wei

This work studies the general principles of improving the learning of language models (LMs), aiming to reduce the number of training steps needed to achieve superior performance.

Data Compression • Language Modelling

PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization

no code implementations • 25 Feb 2024 • Xiangdi Meng, Damai Dai, Weiyao Luo, Zhe Yang, Shaoxiang Wu, Xiaochen Wang, Peiyi Wang, Qingxiu Dong, Liang Chen, Zhifang Sui

Although LoRA fine-tuning is effective, there is still a performance gap compared to full fine-tuning, since its weight update is limited to low-rank matrices.
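
For context on that bottleneck, here is a minimal sketch of the rank-r update that standard LoRA trains on top of a frozen weight (PyTorch assumed; the class and hyperparameter names are illustrative, not from the paper):

```python
# Minimal LoRA sketch: h = W x + (alpha / r) * B (A x), with W frozen
# and only the rank-r factors A, B trainable.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The trainable update is confined to the rank-r product B @ A.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(768, 768)
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

Because B starts at zero, the adapted layer initially matches the pretrained one, and every subsequent update lives in the rank-r subspace spanned by B @ A; this is the constraint the paper sets out to break.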

Can Large Multimodal Models Uncover Deep Semantics Behind Images?

no code implementations • 17 Feb 2024 • Yixin Yang, Zheng Li, Qingxiu Dong, Heming Xia, Zhifang Sui

Understanding the deep semantics of images is essential in the era dominated by social media.

Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

1 code implementation • 15 Jan 2024 • Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference.

Language Modelling • Large Language Model
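
To make the paradigm concrete, here is a toy greedy draft-then-verify loop (an assumption-laden simplification: real systems score a whole draft in a single target-model forward pass and accept tokens via rejection sampling rather than exact match):

```python
# Toy draft-then-verify: a cheap model proposes k tokens, the target model
# checks them, and generation repairs itself at the first disagreement.
from typing import Callable, List

def speculative_decode(
    draft_next: Callable[[List[int]], int],   # cheap draft model: context -> next token
    target_next: Callable[[List[int]], int],  # large target model: context -> next token
    prompt: List[int],
    k: int = 4,
    max_new: int = 16,
) -> List[int]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) draft k tokens autoregressively with the cheap model
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # 2) verify against the target model, keeping the longest agreeing prefix
        for i, tok in enumerate(draft):
            expected = target_next(out + draft[:i])
            if tok == expected:
                out.append(tok)
            else:
                out.append(expected)  # repair with the target model's token, then redraft
                break
        else:
            out.append(target_next(out))  # all drafts accepted: take one bonus token
    return out[:len(prompt) + max_new]

# Tiny demo with stand-in "models" that simply count upward.
print(speculative_decode(lambda c: c[-1] + 1, lambda c: c[-1] + 1, [0], max_new=8))
```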

Large Language Model for Science: A Study on P vs. NP

1 code implementation • 11 Sep 2023 • Qingxiu Dong, Li Dong, Ke Xu, Guangyan Zhou, Yaru Hao, Zhifang Sui, Furu Wei

In this work, we use large language models (LLMs) to augment and accelerate research on the P versus NP problem, one of the most important open problems in theoretical computer science and mathematics.

Language Modelling • Large Language Model

Extrapolating Large Language Models to Non-English by Aligning Languages

2 code implementations • 9 Aug 2023 • Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, Lei Li

We start from targeting individual languages by performing cross-lingual instruction-tuning (CoIT) on LLaMA, i.e., tuning it with translation-task data and cross-lingual general-task data to obtain cross-lingual models (x-LLaMAs), and we formulate underlying scaling laws to investigate the advantages of using scalable translation data.

Translation

ImageNetVC: Zero- and Few-Shot Visual Commonsense Evaluation on 1000 ImageNet Categories

1 code implementation • 24 May 2023 • Heming Xia, Qingxiu Dong, Lei Li, Jingjing Xu, Tianyu Liu, Ziwei Qin, Zhifang Sui

Recently, Large Language Models (LLMs) have been serving as general-purpose interfaces, posing a significant demand for comprehensive visual knowledge.

Common Sense Reasoning

Can Language Models Understand Physical Concepts?

1 code implementation • 23 May 2023 • Lei Li, Jingjing Xu, Qingxiu Dong, Ce Zheng, Qi Liu, Lingpeng Kong, Xu Sun

Language models (LMs) are gradually becoming general-purpose interfaces in the interactive and embodied world, where understanding physical concepts is an essential prerequisite.

Can We Edit Factual Knowledge by In-Context Learning?

2 code implementations • 22 May 2023 • Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang

Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge.

In-Context Learning • knowledge editing
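
A minimal sketch of the idea, assuming a simple demonstration template (the facts and format below are invented for illustration and are not the paper's exact prompt):

```python
# In-context knowledge editing: the edited fact is injected through prompt
# demonstrations, so the LM's parameters are never updated.
from typing import List, Tuple

def build_edit_prompt(new_fact: str,
                      demos: List[Tuple[str, str, str]],
                      query: str) -> str:
    """Each demo is (edited fact, question, answer consistent with the edit)."""
    blocks = [f"New fact: {fact}\nQ: {question}\nA: {answer}\n"
              for fact, question, answer in demos]
    blocks.append(f"New fact: {new_fact}\nQ: {query}\nA:")
    return "\n".join(blocks)

prompt = build_edit_prompt(
    new_fact="The CEO of ExampleCorp is Alice.",  # the edit we want the LM to adopt
    demos=[("The capital of Atlantis is Poseidonia.",
            "What is the capital of Atlantis?",
            "Poseidonia.")],
    query="Who is the CEO of ExampleCorp?",
)
print(prompt)  # feed to a frozen LLM; no parameters are updated
```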

A Challenging Benchmark for Low-Resource Learning

1 code implementation • 7 Mar 2023 • Yudong Wang, Chang Ma, Qingxiu Dong, Lingpeng Kong, Jingjing Xu

Experiments on a wide range of models show that neural networks, even pre-trained language models, suffer sharp performance drops on our benchmark, demonstrating its effectiveness in evaluating the weaknesses of neural networks.

A Survey on In-context Learning

1 code implementation • 31 Dec 2022 • Qingxiu Dong, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, Zhifang Sui

With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few examples.

In-Context Learning
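
For readers new to the paradigm, here is a minimal sketch of the few-shot prompt format that ICL relies on (the task and template are illustrative choices, not from the survey):

```python
# Few-shot in-context learning: a frozen LLM sees input-label demonstrations
# plus a new input, and predicts from context alone, with no gradient updates.
from typing import List, Tuple

def icl_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    shots = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in demos)
    return f"{shots}\nReview: {query}\nSentiment:"

print(icl_prompt(
    demos=[("A wonderful, moving film.", "positive"),
           ("Two hours I will never get back.", "negative")],
    query="The plot was thin but the acting saved it.",
))
```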

Go-tuning: Improving Zero-shot Learning Abilities of Smaller Language Models

no code implementations • 20 Dec 2022 • Jingjing Xu, Qingxiu Dong, Hongyi Liu, Lei Li

With increasing scale, large language models such as GPT-3 demonstrate both quantitative improvements and new qualitative capabilities, especially as zero-shot learners.

Language Modelling • Masked Language Modeling +2

Statistical Dataset Evaluation: Reliability, Difficulty, and Validity

no code implementations • 19 Dec 2022 • Chengwen Wang, Qingxiu Dong, Xiaochen Wang, Haitao Wang, Zhifang Sui

Taking Named Entity Recognition (NER) datasets as a case study, we introduce 9 statistical metrics for a statistical dataset evaluation framework.

Named Entity Recognition +1

Neural Knowledge Bank for Pretrained Transformers

no code implementations • 31 Jul 2022 • Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, Zhifang Sui

The ability to remember factual knowledge is essential for pretrained Transformers but remains limited in existing models.

Language Modelling • Machine Translation +2

Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances

1 code implementation • 2 May 2022 • Shoujie Tong, Qingxiu Dong, Damai Dai, YiFan Song, Tianyu Liu, Baobao Chang, Zhifang Sui

For each instance in a batch, we let the other instances in the same batch interact with it.
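
One generic way such in-batch interaction can look is mixup-style interpolation of instance representations; the sketch below is an assumption-based illustration, not the paper's exact procedure:

```python
# Mixup-style in-batch interpolation: each instance's representation is
# blended with a randomly paired partner from the same batch.
import torch

def in_batch_interpolate(hidden: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    """hidden: (batch, dim) representations of the instances in one batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    partner = hidden[torch.randperm(hidden.size(0))]  # pair each row with another
    return lam * hidden + (1 - lam) * partner

batch = torch.randn(8, 768)
print(in_batch_interpolate(batch).shape)  # torch.Size([8, 768])
```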

Premise-based Multimodal Reasoning: Conditional Inference on Joint Textual and Visual Clues

no code implementations • ACL 2022 • Qingxiu Dong, Ziwei Qin, Heming Xia, Tian Feng, Shoujie Tong, Haoran Meng, Lin Xu, Weidong Zhan, Sujian Li, Zhongyu Wei, Tianyu Liu, Zhifang Sui

It is a common practice for recent works in vision-language cross-modal reasoning to adopt a binary or multi-choice classification formulation, taking as input a set of source image(s) and a textual query.

Multimodal Reasoning • Natural Language Inference +1

Problems and Countermeasures in Natural Language Processing Evaluation

no code implementations • 20 Apr 2021 • Qingxiu Dong, Zhifang Sui, Weidong Zhan, Baobao Chang

Starting from the concept, composition, development and meaning of natural language evaluation, this article classifies and summarizes the tasks and characteristics of mainstream natural language evaluation, and then summarizes the problems and their causes in natural language processing evaluation.

Position

ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation

1 code implementation • EACL 2021 • Qingxiu Dong, Xiaojun Wan, Yue Cao

We propose ParaSCI, the first large-scale paraphrase dataset in the scientific field, including 33,981 paraphrase pairs from ACL (ParaSCI-ACL) and 316,063 pairs from arXiv (ParaSCI-arXiv).

Paraphrase Generation
