Search Results for author: Junbing Yan

Found 5 papers, 0 papers with code

Do Large Language Models Understand Logic or Just Mimick Context?

no code implementations • 19 Feb 2024 • Junbing Yan, Chengyu Wang, Jun Huang, Wei Zhang

Over the past few years, large language models (LLMs) have received extensive attention for their abilities, performing exceptionally well in complicated scenarios such as logical reasoning and symbolic inference.

Counterfactual, In-Context Learning +1
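
To make the counterfactual theme of this entry concrete, here is an illustrative sketch (not the paper's actual protocol) of probing whether a model follows the logic of in-context demonstrations or falls back on memorized priors, by flipping demonstration content into counterfactuals; `query_llm` is a hypothetical stand-in for any completion API.

```python
# Illustrative sketch: compare answers under factual vs. counterfactual
# in-context demonstrations. `query_llm` is a hypothetical callable
# that takes a prompt string and returns the model's answer string.

FACTUAL_DEMOS = [
    ("All birds can fly. A sparrow is a bird. Can a sparrow fly?", "yes"),
    ("All fish live in water. A trout is a fish. Does a trout live in water?", "yes"),
]

# Counterfactual demos: the stated rule contradicts world knowledge, so a
# model that truly applies the in-context logic must answer against its priors.
COUNTERFACTUAL_DEMOS = [
    ("All birds live underwater. A sparrow is a bird. Does a sparrow live underwater?", "yes"),
    ("All fish can fly. A trout is a fish. Can a trout fly?", "yes"),
]

def build_prompt(demos, question):
    """Format demonstrations plus the test question as a few-shot prompt."""
    lines = [f"Q: {q}\nA: {a}" for q, a in demos]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

def probe(query_llm, question):
    """Return (factual_answer, counterfactual_answer) for one test question."""
    factual = query_llm(build_prompt(FACTUAL_DEMOS, question))
    counterfactual = query_llm(build_prompt(COUNTERFACTUAL_DEMOS, question))
    return factual, counterfactual

# If the two answers diverge in the direction the counterfactual rule dictates,
# the model is following the demonstrated logic; if they agree with world
# knowledge regardless, it is likely mimicking memorized context.
```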

Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper

no code implementations • 22 Nov 2023 • Chengyu Wang, Junbing Yan, Wei Zhang, Jun Huang

This paper examines the pressing need for Parameter-Efficient Fine-Tuning (PEFT) methods for Large Language Models (LLMs).

Model Compression, Position
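
Since this entry centers on PEFT, a minimal LoRA-style adapter is sketched below purely for illustration; it is one common PEFT technique, not the specific approach discussed in this position paper.

```python
# A minimal LoRA-style adapter in PyTorch: the pretrained weights are frozen
# and only a low-rank residual update W + (alpha/r) * B @ A is trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)   # down-projection
        self.lora_b = nn.Linear(r, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_b.weight)   # start as an identity update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Usage: only the low-rank factors are trained, a tiny fraction of the
# parameters of the wrapped layer.
layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 4096 * 8 = 65,536 vs. ~16.8M frozen parameters
```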

From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models

no code implementations • 12 Nov 2023 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang

Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps.

Language Modelling, Logical Reasoning
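
The abstract above frames reasoning as decomposition into manageable steps. The sketch below shows a generic decompose-then-solve recursion in that spirit; it is not the paper's Cognitive Tree algorithm, and `ask_model` is a hypothetical single-prompt callable.

```python
# Generic sketch: recursively split a question into sub-questions, answer the
# leaves directly, then combine answers bottom-up. Assumes `ask_model` takes a
# prompt string and returns the model's text response.
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    children: list = field(default_factory=list)
    answer: str | None = None

def solve(ask_model, question: str, depth: int = 0, max_depth: int = 2) -> Node:
    """Build a tree of sub-questions and fill in answers from the leaves up."""
    node = Node(question)
    if depth >= max_depth:
        node.answer = ask_model(f"Answer directly: {question}")
        return node
    subs = ask_model(
        "Split into at most 3 simpler sub-questions, one per line, "
        f"or reply DIRECT if none are needed:\n{question}"
    )
    if subs.strip() == "DIRECT":
        node.answer = ask_model(f"Answer directly: {question}")
        return node
    node.children = [solve(ask_model, s, depth + 1, max_depth)
                     for s in subs.splitlines() if s.strip()]
    context = "\n".join(f"- {c.question} -> {c.answer}" for c in node.children)
    node.answer = ask_model(f"Given these answered sub-questions:\n{context}\n"
                            f"Answer the original question: {question}")
    return node
```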

Making Small Language Models Better Multi-task Learners with Mixture-of-Task-Adapters

no code implementations • 20 Sep 2023 • Yukang Xie, Chengyu Wang, Junbing Yan, Jiyong Zhou, Feiqi Deng, Jun Huang

Recently, Large Language Models (LLMs) have achieved impressive zero-shot learning performance across a variety of Natural Language Processing (NLP) tasks, especially generative text tasks.

Zero-Shot Learning
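
Going by the title alone, a mixture-of-task-adapters layer can be sketched as a set of bottleneck adapters combined by a per-task router; the toy PyTorch module below is an assumption-laden illustration, not the paper's actual architecture.

```python
# Toy Mixture-of-Task-Adapters layer: several bottleneck adapters share the
# backbone, and a learned per-task embedding mixes their outputs.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual adapter

class MixtureOfTaskAdapters(nn.Module):
    """Routes hidden states through a task-conditioned mix of adapters."""
    def __init__(self, dim: int, num_tasks: int, num_adapters: int = 4):
        super().__init__()
        self.adapters = nn.ModuleList(BottleneckAdapter(dim) for _ in range(num_adapters))
        self.router = nn.Embedding(num_tasks, num_adapters)  # per-task mixing logits

    def forward(self, hidden, task_id):
        weights = torch.softmax(self.router(task_id), dim=-1)           # (batch, A)
        outs = torch.stack([a(hidden) for a in self.adapters], dim=-1)  # (batch, seq, dim, A)
        return (outs * weights[:, None, None, :]).sum(-1)

# Usage on a batch of hidden states from a small backbone LM:
moa = MixtureOfTaskAdapters(dim=768, num_tasks=5)
h = torch.randn(2, 16, 768)            # (batch, seq_len, dim)
out = moa(h, torch.tensor([0, 3]))     # one task id per example
print(out.shape)                       # torch.Size([2, 16, 768])
```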
