no code implementations • 17 Mar 2024 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Longtao Huang, Hui Xue, Wei Zhang
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models that incorporate external knowledge to enhance language understanding.
no code implementations • 19 Feb 2024 • Junbing Yan, Chengyu Wang, Jun Huang, Wei Zhang
Over the past few years, large language models (LLMs) have received extensive attention for their abilities, performing exceptionally well in complicated scenarios such as logical reasoning and symbolic inference.
no code implementations • 22 Nov 2023 • Chengyu Wang, Junbing Yan, Wei Zhang, Jun Huang
This paper addresses the pressing need for Parameter-Efficient Fine-Tuning (PEFT) of Large Language Models (LLMs).
no code implementations • 12 Nov 2023 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang
Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps.
no code implementations • 20 Sep 2023 • Yukang Xie, Chengyu Wang, Junbing Yan, Jiyong Zhou, Feiqi Deng, Jun Huang
Recently, Large Language Models (LLMs) have achieved amazing zero-shot learning performance over a variety of Natural Language Processing (NLP) tasks, especially for text generative tasks.