Search Results for author: Xuansheng Wu

Found 10 papers, 6 papers with code

Retrieval-Enhanced Knowledge Editing for Multi-Hop Question Answering in Language Models

no code implementations • 28 Mar 2024 • Yucheng Shi, Qiaoyu Tan, Xuansheng Wu, Shaochen Zhong, Kaixiong Zhou, Ninghao Liu

Large Language Models (LLMs) have shown proficiency in question-answering tasks but often struggle to integrate real-time knowledge updates, leading to potentially outdated or inaccurate responses.

Hallucination • In-Context Learning • +5

Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era

1 code implementation • 13 Mar 2024 • Xuansheng Wu, Haiyan Zhao, Yaochen Zhu, Yucheng Shi, Fan Yang, Tianming Liu, Xiaoming Zhai, Wenlin Yao, Jundong Li, Mengnan Du, Ninghao Liu

In this paper, we introduce Usable XAI in the context of LLMs by analyzing (1) how XAI can benefit LLMs and AI systems, and (2) how LLMs can contribute to the advancement of XAI.

InFoBench: Evaluating Instruction Following Ability in Large Language Models

1 code implementation • 7 Jan 2024 • Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, PengFei Liu, Dong Yu

This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating the ability of Large Language Models (LLMs) to follow instructions (a minimal sketch of the metric follows this entry).

Instruction Following
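
As a rough illustration of how a decomposed-requirements metric could be computed, here is a minimal sketch that assumes DRFR is simply the fraction of decomposed yes/no requirements a response satisfies; the function name and the hard-coded judgments are hypothetical, and the paper's actual decomposition and judging protocol may differ:

```python
# Minimal sketch: DRFR as the fraction of decomposed yes/no requirements
# that a response satisfies. Assumes the decomposition and per-requirement
# judgments (here hard-coded booleans) are produced upstream, e.g. by a
# human or LLM judge; that protocol is an assumption, not the paper's.

def drfr(judgments: list[bool]) -> float:
    """Fraction of decomposed requirements satisfied by a response."""
    if not judgments:
        raise ValueError("need at least one decomposed requirement")
    return sum(judgments) / len(judgments)

# Example: an instruction decomposed into four requirements,
# of which the response satisfies three.
print(drfr([True, True, False, True]))  # 0.75
```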

Applying Large Language Models and Chain-of-Thought for Automatic Scoring

no code implementations • 30 Nov 2023 • Gyeong-Geon Lee, Ehsan Latif, Xuansheng Wu, Ninghao Liu, Xiaoming Zhai

We found more balanced accuracy across different proficiency categories when CoT was used with a scoring rubric, highlighting the importance of domain-specific reasoning in enhancing the effectiveness of LLMs in scoring tasks (a hypothetical prompt sketch follows this entry).

Few-Shot Learning • Prompt Engineering • +1
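
To make the setup concrete, here is a hypothetical sketch of a chain-of-thought scoring prompt that embeds a rubric; the rubric text, question, and output format are invented for this sketch and are not taken from the paper:

```python
# Hypothetical prompt template combining chain-of-thought reasoning with
# a scoring rubric. All rubric wording and field names below are
# illustrative assumptions, not the paper's actual prompt.

RUBRIC = """\
Score 3: states the claim, supporting evidence, and the reasoning linking them.
Score 2: states the claim and supporting evidence only.
Score 1: states the claim only.
Score 0: off-topic or blank."""

def build_scoring_prompt(question: str, response: str) -> str:
    return (
        f"Rubric:\n{RUBRIC}\n\n"
        f"Question: {question}\n"
        f"Student response: {response}\n\n"
        "Reason step by step against each rubric level, "
        "then end with a line of the form 'Score: <0-3>'."
    )

print(build_scoring_prompt(
    "Why do ice cubes float in water?",
    "Ice is less dense than liquid water, so it floats.",
))
```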

Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendations

1 code implementation • 29 Jun 2023 • Xuansheng Wu, Huachi Zhou, Yucheng Shi, Wenlin Yao, Xiao Huang, Ninghao Liu

To evaluate our approach, we introduce a cold-start recommendation benchmark, and the results demonstrate that the enhanced small language models can achieve cold-start recommendation performance comparable to that of large models with only 17% of the inference time.

In-Context Learning • Language Modelling • +2

AGI: Artificial General Intelligence for Education

no code implementations • 24 Apr 2023 • Ehsan Latif, Gengchen Mai, Matthew Nyaaba, Xuansheng Wu, Ninghao Liu, Guoyu Lu, Sheng Li, Tianming Liu, Xiaoming Zhai

AGI, driven by the recent large pre-trained models, represents a significant leap in the capability of machines to perform tasks that require human-level intelligence, such as reasoning, problem-solving, decision-making, and even understanding human emotions and social interactions.

Decision Making • Fairness

A Survey of Graph Prompting Methods: Techniques, Applications, and Challenges

no code implementations • 13 Mar 2023 • Xuansheng Wu, Kaixiong Zhou, Mingchen Sun, Xin Wang, Ninghao Liu

In particular, we introduce the basic concepts of graph prompt learning, organize the existing work on designing graph prompting functions, and describe their applications and future challenges.

NoPPA: Non-Parametric Pairwise Attention Random Walk Model for Sentence Representation

1 code implementation • 24 Feb 2023 • Xuansheng Wu, Zhiyi Zhao, Ninghao Liu

We propose a novel non-parametric (untrainable) language model, the Non-Parametric Pairwise Attention Random Walk Model (NoPPA), which generates sentence embeddings using only pre-trained word embeddings and pre-counted word frequencies (an illustrative stand-in sketch follows this entry).

Language Modelling • Sentence • +2
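
The snippet does not detail the paper's pairwise-attention random-walk construction, so as a stand-in for the general "embeddings plus frequencies, no training" idea, the following sketch uses a simple SIF-style frequency-weighted average of word embeddings. This is explicitly not NoPPA itself; the toy embedding table, counts, and smoothing constant `a` are assumptions:

```python
import numpy as np

# A stand-in for the "no training" idea only: a SIF-style
# frequency-weighted average of pre-trained word embeddings.
# This is NOT NoPPA's pairwise-attention random-walk construction;
# the toy embedding table, counts, and constant `a` are assumptions.

def sentence_embedding(tokens, emb, freq, a=1e-3):
    """Down-weight frequent words, then average their embeddings."""
    total = sum(freq.values())
    vecs = [(a / (a + freq[t] / total)) * emb[t]
            for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

# Toy inputs: random 8-d "pre-trained" embeddings and corpus counts.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["the", "cat", "sat"]}
freq = {"the": 1000, "cat": 12, "sat": 7}

print(sentence_embedding(["the", "cat", "sat"], emb, freq).shape)  # (8,)
```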

Matching Exemplar as Next Sentence Prediction (MeNSP): Zero-shot Prompt Learning for Automatic Scoring in Science Education

1 code implementation • 20 Jan 2023 • Xuansheng Wu, Xinyu He, Tianming Liu, Ninghao Liu, Xiaoming Zhai

Developing models to automatically score students' written responses to science problems is critical for science education (an illustrative sketch of the matching-as-next-sentence-prediction idea follows this entry).

Sentence
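
Going only by the title's matching-exemplar-as-next-sentence-prediction idea, one could imagine scoring by pairing a graded exemplar with a student response and reading BERT's next-sentence probability as a match score. The sketch below does exactly that; the exemplar text and the zero-shot protocol are assumptions rather than the paper's actual method:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Illustrative sketch of scoring-by-NSP: treat a graded exemplar and a
# student response as a sentence pair and read BERT's next-sentence
# probability as a match score. This follows the title's idea only;
# the example texts and thresholding are assumptions.

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

exemplar = "Plants make food through photosynthesis using sunlight."
response = "The plant uses light energy to produce sugar."

inputs = tok(exemplar, response, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # index 0 = "is next", 1 = "not next"
match_prob = torch.softmax(logits, dim=-1)[0, 0].item()
print(f"match probability: {match_prob:.3f}")
```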
