Search Results for author: Boxi Cao

Found 14 papers, 9 papers with code

URL: Universal Referential Knowledge Linking via Task-instructed Representation Compression

no code implementations 24 Apr 2024 Zhuoqun Li, Hongyu Lin, Tianshu Wang, Boxi Cao, Yaojie Lu, Weixiang Zhou, Hao Wang, Zhenyu Zeng, Le Sun, Xianpei Han

Linking a claim to grounded references is a critical ability to fulfill human demands for authentic and reliable information.

Towards Universal Dense Blocking for Entity Resolution

2 code implementations 23 Apr 2024 Tianshu Wang, Hongyu Lin, Xianpei Han, Xiaoyang Chen, Boxi Cao, Le Sun

Blocking is a critical step in entity resolution, and the emergence of neural network-based representation models has led to the development of dense blocking as a promising approach for exploring deep semantics in blocking.

Blocking Contrastive Learning
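In rough terms, dense blocking encodes each record with a neural encoder and takes nearest neighbors in the embedding space as candidate pairs, instead of relying on hand-crafted blocking keys. A minimal sketch of that idea follows, assuming a sentence-transformers encoder and scikit-learn k-NN; the model name, k, and example records are illustrative choices, not the paper's actual setup.

```python
# Dense blocking sketch (illustrative, not the paper's system):
# embed records, then take nearest neighbors as candidate pairs for matching.
from sentence_transformers import SentenceTransformer  # assumed encoder library
from sklearn.neighbors import NearestNeighbors

records = [
    "iPhone 13 Pro 128GB graphite",
    "Apple iPhone 13 Pro (128 GB) - Graphite",
    "Samsung Galaxy S22 Ultra 256GB",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = encoder.encode(records, normalize_embeddings=True)

# For each record, retrieve its closest neighbor (k=2 includes the record itself).
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(embeddings)
_, indices = knn.kneighbors(embeddings)

candidate_pairs = {
    tuple(sorted((i, j))) for i, row in enumerate(indices) for j in row if i != j
}
print(candidate_pairs)  # pairs handed to a downstream matcher, e.g. {(0, 1), ...}
```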

Spiral of Silence: How is Large Language Model Killing Information Retrieval? -- A Case Study on Open Domain Question Answering

1 code implementation 16 Apr 2024 Xiaoyang Chen, Ben He, Hongyu Lin, Xianpei Han, Tianshu Wang, Boxi Cao, Le Sun, Yingfei Sun

The practice of Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs) with retrieval systems, has become increasingly prevalent.

Information Retrieval Language Modelling +3
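In its simplest form, RAG retrieves passages relevant to the query and conditions the LLM's answer on them. The sketch below shows that retrieve-then-generate shape with a toy lexical retriever; the scoring function, prompt wording, and the commented-out LLM call are illustrative assumptions, not the paper's pipeline.

```python
# Toy retrieve-then-generate sketch (illustrative, not the paper's system).
corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
question = "When was the Eiffel Tower completed?"

def lexical_score(query: str, passage: str) -> float:
    """Fraction of query tokens that appear in the passage (toy retriever)."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q)

top_passage = max(corpus, key=lambda p: lexical_score(question, p))

# Condition the generator on the retrieved evidence.
prompt = (
    "Answer the question using the context.\n"
    f"Context: {top_passage}\n"
    f"Question: {question}\nAnswer:"
)
# answer = llm.generate(prompt)  # hypothetical LLM call
print(prompt)
```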

Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation

2 code implementations 10 Apr 2024 Ruotong Pan, Boxi Cao, Hongyu Lin, Xianpei Han, Jia Zheng, Sirui Wang, Xunliang Cai, Le Sun

In this paper, we propose Credibility-aware Generation (CAG), a universally applicable framework designed to mitigate the impact of flawed information in RAG.

Retrieval
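One natural way to make generation credibility-aware is to annotate each retrieved passage with a credibility label and instruct the model to weight passages accordingly; the prompt layout and labels below are an illustrative assumption, not necessarily the paper's exact CAG format.

```python
# Credibility-annotated prompt construction (assumed layout, for illustration only).
retrieved = [
    {"text": "The capital of Australia is Canberra.", "credibility": "high"},
    {"text": "The capital of Australia is Sydney.",   "credibility": "low"},
]
question = "What is the capital of Australia?"

context = "\n".join(
    f"[credibility: {doc['credibility']}] {doc['text']}" for doc in retrieved
)
prompt = (
    "Answer the question. Rely on high-credibility passages and treat "
    "low-credibility ones with caution.\n"
    f"{context}\nQuestion: {question}\nAnswer:"
)
print(prompt)
```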

ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases

1 code implementation 8 Jun 2023 Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun

Existing approaches to tool learning have either relied primarily on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or used supervised learning to train compact models on a limited scope of tools.

Learning In-context Learning for Named Entity Recognition

2 code implementations 18 May 2023 Jiawei Chen, Yaojie Lu, Hongyu Lin, Jie Lou, Wei Jia, Dai Dai, Hua Wu, Boxi Cao, Xianpei Han, Le Sun

Specifically, we model PLMs as a meta-function $\lambda_{\text{instruction, demonstrations, text}}.\,\mathcal{M}$, and a new entity extractor can be implicitly constructed by applying a new instruction and demonstrations to the PLM, i.e., $(\lambda.\,\mathcal{M})(\text{instruction, demonstrations}) \to \mathcal{F}$, where $\mathcal{F}$ becomes a new entity extractor $\mathcal{F}: \text{text} \to \text{entities}$.

few-shot-ner Few-shot NER +4
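Read as a recipe, the meta-function view says a new extractor is instantiated at inference time simply by applying an instruction and a few demonstrations to the PLM. The sketch below assembles such an in-context NER prompt; the instruction wording, demonstrations, and the commented-out model call are illustrative assumptions, not the paper's templates.

```python
# In-context NER prompt: applying (instruction, demonstrations) to a PLM
# implicitly yields a new extractor F: text -> entities. Wording is assumed,
# not the paper's exact template.
instruction = "Extract all DISEASE entities from the text."
demonstrations = [
    ("Patients with diabetes often develop neuropathy.", "diabetes, neuropathy"),
    ("The trial enrolled subjects diagnosed with asthma.", "asthma"),
]
new_text = "Early detection of melanoma improves outcomes."

parts = [instruction]
for text, entities in demonstrations:
    parts.append(f"Text: {text}\nEntities: {entities}")
parts.append(f"Text: {new_text}\nEntities:")
prompt = "\n\n".join(parts)

# entities = plm.generate(prompt)  # hypothetical call to the implicitly built extractor
print(prompt)
```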

Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models

no code implementations 16 May 2023 Boxi Cao, Qiaoyu Tang, Hongyu Lin, Shanshan Jiang, Bin Dong, Xianpei Han, Jiawei Chen, Tianshu Wang, Le Sun

Memory is one of the most essential cognitive functions, serving as a repository of world knowledge and episodes of activities.

World Knowledge

The Life Cycle of Knowledge in Big Language Models: A Survey

1 code implementation 14 Mar 2023 Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun

Knowledge plays a critical role in artificial intelligence.

Pre-training to Match for Unified Low-shot Relation Extraction

1 code implementation ACL 2022 Fangchao Liu, Hongyu Lin, Xianpei Han, Boxi Cao, Le Sun

Low-shot relation extraction (RE) aims to recognize novel relations with very few or even no samples, which is critical in real-world applications.

Meta-Learning Relation +1
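The title's "pre-training to match" points to a matching paradigm: instead of training a classifier per relation, an instance is scored against textual descriptions of candidate relations, which works with few or even zero labeled examples. The toy sketch below illustrates only that paradigm, using word overlap as a stand-in scorer; the relation descriptions and scoring function are placeholders, not the paper's pre-trained matching model.

```python
# Toy matching-based relation classifier (paradigm illustration only,
# not the paper's pre-trained matching model).
relation_descriptions = {
    "founded_by": "an organization was founded by a person",
    "born_in": "a person was born in a location",
}
instance = "Apple was founded by Steve Jobs in 1976."

def overlap(a: str, b: str) -> float:
    """Jaccard word overlap as a stand-in for a learned matching score."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

predicted = max(
    relation_descriptions,
    key=lambda rel: overlap(instance, relation_descriptions[rel]),
)
print(predicted)  # -> "founded_by"
```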

Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View

1 code implementation ACL 2022 Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, Le Sun

Prompt-based probing has been widely used in evaluating the abilities of pretrained language models (PLMs).

Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases

1 code implementation ACL 2021 Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu

Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially serve as a reliable knowledge source.
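The setup being revisited is the cloze-style probe: a fact is phrased as a fill-in-the-blank statement and the MLM's prediction for the masked token is read off as its stored knowledge. A minimal sketch with the Hugging Face fill-mask pipeline is below; the model choice and prompt are illustrative, not the paper's benchmark.

```python
# Cloze-style factual probe of a masked language model (illustrative setup).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A relational fact posed as a cloze prompt; the top predictions for [MASK]
# are treated as the model's "knowledge" about the missing object.
prompt = "The capital of France is [MASK]."
for candidate in fill_mask(prompt, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```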
