no code implementations • 30 Mar 2024 • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng
The rapid development of large language models (LLMs) enables them to convey factual knowledge in a more human-like fashion.
no code implementations • 11 Feb 2024 • Yuyao Ge, Shenghua Liu, Wenjie Feng, Lingrui Mei, Lizhe Chen, Xueqi Cheng
In this work, we reveal that the order in which a graph is described significantly affects LLMs' graph reasoning performance.
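To make the idea of "graph description order" concrete, the sketch below serializes the same edge list into natural-language prompt text under two different orderings. The function name, the BFS-based ordering, and the sentence template are illustrative assumptions, not the paper's actual serialization scheme.

```python
from collections import deque

def describe_graph(edges, order="given"):
    """Serialize an edge list into natural-language prompt text.

    `order` picks the edge ordering; "given", "reversed", and "bfs"
    are illustrative choices, not the paper's actual orderings.
    """
    if order == "bfs":
        # Re-order edges by a BFS traversal from the smallest node.
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        edge_set = {frozenset(e) for e in edges}
        start = min(adj)
        seen, queue, ordered, emitted = {start}, deque([start]), [], set()
        while queue:
            u = queue.popleft()
            for v in sorted(adj[u]):
                e = frozenset((u, v))
                if e in edge_set and e not in emitted:
                    ordered.append((u, v))
                    emitted.add(e)
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        edges = ordered
    elif order == "reversed":
        edges = list(reversed(edges))
    return " ".join(f"Node {u} is connected to node {v}." for u, v in edges)

edges = [(2, 3), (1, 2), (3, 4)]
print(describe_graph(edges, "given"))  # edges in input order
print(describe_graph(edges, "bfs"))    # same graph, traversal order
```

Both prompts describe an identical graph; only the sentence order differs, which is exactly the variable whose effect on reasoning the paper studies.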
no code implementations • 24 Jan 2024 • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng
This work focuses on the link prediction task and introduces $\textbf{LPNL}$ (Link Prediction via Natural Language), a framework based on large language models designed for scalable link prediction on large-scale heterogeneous graphs.
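As a rough illustration of phrasing link prediction as a natural-language query for an LLM, the helper below turns a source node, candidate targets, and neighborhood context into a prompt string. The template and function name are hypothetical and are not LPNL's actual prompts.

```python
def build_link_prediction_prompt(source, candidates, neighbors):
    """Phrase a link-prediction query in natural language.

    A minimal, hypothetical prompt template for illustration only;
    it is not the prompt format used by LPNL.
    """
    context = "; ".join(
        f"{node} is linked to {', '.join(nbrs)}"
        for node, nbrs in neighbors.items()
    )
    options = ", ".join(candidates)
    return (
        f"Graph context: {context}. "
        f"Question: which of the candidate nodes [{options}] is most "
        f"likely to be linked to {source}? Answer with one node."
    )

prompt = build_link_prediction_prompt(
    "paper_A",
    ["author_1", "author_2"],
    {"paper_A": ["venue_KDD"], "author_1": ["paper_B"]},
)
print(prompt)
```

Scalability on large heterogeneous graphs then hinges on keeping this context short, e.g. by sampling only a few relevant neighbors per query.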
1 code implementation • 23 Jan 2024 • Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Xueqi Cheng
The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of large language models (LLMs).
no code implementations • 16 Jul 2020 • Xuming Ran, Mingkun Xu, Lingrui Mei, Qi Xu, Quanying Liu
To address this problem, reliable uncertainty estimation is considered critical for an in-depth understanding of OOD inputs.
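One common uncertainty score for flagging OOD inputs is the entropy of the model's predictive distribution, sketched below. This is a generic illustration of the idea, assuming a simple entropy threshold; it is not the estimator proposed in the paper.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predictive distribution, in nats.

    High entropy means the model is uncertain about its prediction.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_ood(probs, threshold):
    # Treat inputs whose predictive entropy exceeds the threshold
    # as possibly out-of-distribution (illustrative rule only).
    return predictive_entropy(probs) > threshold

confident = [0.97, 0.01, 0.01, 0.01]   # sharp, in-distribution-like
uncertain = [0.25, 0.25, 0.25, 0.25]   # maximally uncertain over 4 classes
print(flag_ood(confident, threshold=1.0))  # → False
print(flag_ood(uncertain, threshold=1.0))  # → True (entropy = ln 4 ≈ 1.386)
```

The threshold would in practice be calibrated on held-out in-distribution data rather than fixed by hand.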