Search Results for author: Keping Bi

Found 24 papers, 10 papers with code

MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning

no code implementations • 21 Feb 2024 • Wanqing Cui, Keping Bi, Jiafeng Guo, Xueqi Cheng

Since commonsense information is recorded far less often than it actually occurs, language models pre-trained with text generation objectives have difficulty learning sufficient commonsense knowledge.

Retrieval · Text Generation +1

When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation

1 code implementation • 18 Feb 2024 • Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng

This motivates us to enhance the LLMs' ability to perceive their knowledge boundaries to help RA.

Retrieval

Reproducibility Analysis and Enhancements for Multi-Aspect Dense Retriever with Aspect Learning

1 code implementation • 8 Jan 2024 • Keping Bi, Xiaojie Sun, Jiafeng Guo, Xueqi Cheng

MADRAL was evaluated on proprietary data and its code was not released, making it challenging to validate its effectiveness on other datasets.

Retrieval

A Multi-Granularity-Aware Aspect Learning Model for Multi-Aspect Dense Retrieval

1 code implementation • 5 Dec 2023 • Xiaojie Sun, Keping Bi, Jiafeng Guo, Sihui Yang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Xueqi Cheng

Dense retrieval methods have mostly focused on unstructured text, and less attention has been paid to structured data with various aspects, e.g., products with aspects such as category and brand.

Language Modelling · Retrieval +1

CAME: Competitively Learning a Mixture-of-Experts Model for First-stage Retrieval

no code implementations • 6 Nov 2023 • Yinqiong Cai, Yixing Fan, Keping Bi, Jiafeng Guo, Wei Chen, Ruqing Zhang, Xueqi Cheng

The first-stage retrieval aims to retrieve a subset of candidate documents from a huge collection both effectively and efficiently.

Retrieval

CIR at the NTCIR-17 ULTRE-2 Task

no code implementations • 18 Oct 2023 • Lulu Yu, Keping Bi, Jiafeng Guo, Xueqi Cheng

The Chinese Academy of Sciences Information Retrieval team (CIR) has participated in the NTCIR-17 ULTRE-2 task.

Information Retrieval · Position +1

A Comparative Study of Training Objectives for Clarification Facet Generation

1 code implementation • 1 Oct 2023 • Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng

In this paper, we conduct a systematic comparative study of various types of training objectives that differ in several properties: whether the objective is permutation-invariant, whether it performs sequential prediction, and whether it can control the number of output facets.

Text Generation

L^2R: Lifelong Learning for First-stage Retrieval with Backward-Compatible Representations

1 code implementation • 22 Aug 2023 • Yinqiong Cai, Keping Bi, Yixing Fan, Jiafeng Guo, Wei Chen, Xueqi Cheng

First-stage retrieval is a critical task that aims to retrieve relevant document candidates from a large-scale collection.

Retrieval

Pre-training with Aspect-Content Text Mutual Prediction for Multi-Aspect Dense Retrieval

1 code implementation • 22 Aug 2023 • Xiaojie Sun, Keping Bi, Jiafeng Guo, Xinyu Ma, Yixing Fan, Hongyu Shan, Qishen Zhang, Zhongyi Liu

Extensive experiments on two real-world datasets (product and mini-program search) show that our approach can outperform competitive baselines that either treat aspect values as classes or conduct the same MLM on both aspect and content strings.

Language Modelling · Masked Language Modeling +1

Ensemble Ranking Model with Multiple Pretraining Strategies for Web Search

no code implementations • 18 Feb 2023 • Xiaojie Sun, Lulu Yu, Yiting Wang, Keping Bi, Jiafeng Guo

Then, in the fine-tuning stage, we fine-tune several pre-trained models with human-annotated data and train an ensemble model to aggregate the predictions of the various pre-trained models.

Learning-To-Rank
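
As an illustration of the score-level aggregation described in the entry above, the sketch below fits a simple linear aggregator over the predictions of several fine-tuned rankers using human-annotated labels. The three-model score matrix, the labels, and the least-squares aggregator are illustrative assumptions; the paper's actual ensemble model and learning-to-rank objective may differ.

```python
# A minimal sketch of score-level ensembling for web search ranking (illustrative only).
import numpy as np

# Rows: query-document pairs; columns: relevance scores from three fine-tuned rankers.
model_scores = np.array([
    [0.9, 0.8, 0.7],
    [0.2, 0.4, 0.1],
    [0.6, 0.5, 0.9],
    [0.1, 0.2, 0.3],
])
labels = np.array([2, 0, 1, 0])  # human relevance annotations

# Least-squares fit of aggregation weights, with a bias column appended.
X = np.hstack([model_scores, np.ones((len(model_scores), 1))])
weights, *_ = np.linalg.lstsq(X, labels, rcond=None)

ensemble_scores = X @ weights
print(np.argsort(-ensemble_scores))  # ranking induced by the aggregated scores
```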

Feature-Enhanced Network with Hybrid Debiasing Strategies for Unbiased Learning to Rank

no code implementations • 15 Feb 2023 • Lulu Yu, Yiting Wang, Xiaojie Sun, Keping Bi, Jiafeng Guo

Unbiased learning to rank (ULTR) aims to mitigate the various biases in user clicks, such as position bias, trust bias, and presentation bias, and to learn an effective ranker.

Learning-To-Rank

Asking Clarifying Questions Based on Negative Feedback in Conversational Search

no code implementations • 12 Jul 2021 • Keping Bi, Qingyao Ai, W. Bruce Croft

To quickly identify user intent and reduce effort during interactions, we propose an intent clarification task based on yes/no questions where the system needs to ask the correct question about intents within the fewest conversation turns.

Conversational Search · Question Selection +1

Leveraging User Behavior History for Personalized Email Search

no code implementations • 15 Feb 2021 • Keping Bi, Pavel Metrikov, Chunyuan Li, Byungki Byun

Given these observations, we propose to leverage user search history as query context to characterize users and build a context-aware ranking model for email search.

Learning-To-Rank

A Transformer-based Embedding Model for Personalized Product Search

no code implementations • 18 May 2020 • Keping Bi, Qingyao Ai, W. Bruce Croft

Aware of these limitations, we propose a transformer-based embedding model (TEM) for personalized product search, which can dynamically control the influence of personalization by encoding the sequence of the query and the user's purchase history with a transformer architecture.

Retrieval
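
A minimal sketch of the encoding idea described in the entry above: the query embedding and the user's purchase-history embeddings are fed to a transformer encoder as a single sequence, so attention can modulate how strongly the history influences the query representation. The embedding size, the single-layer encoder, and the dot-product scoring are illustrative assumptions, not TEM's actual configuration.

```python
# Illustrative sketch: transformer encoding of [query; purchase history] for product search.
import torch
import torch.nn as nn

embed_dim = 32
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=1,
)

query_emb = torch.randn(1, 1, embed_dim)     # the current query
history_embs = torch.randn(1, 5, embed_dim)  # five previously purchased items
sequence = torch.cat([query_emb, history_embs], dim=1)

encoded = encoder(sequence)
personalized_query = encoded[:, 0]           # representation at the query position
candidate_item = torch.randn(1, embed_dim)   # a candidate product embedding
score = (personalized_query * candidate_item).sum(-1)  # dot-product retrieval score
print(score)
```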

Learning a Fine-Grained Review-based Transformer Model for Personalized Product Search

1 code implementation • 20 Apr 2020 • Keping Bi, Qingyao Ai, W. Bruce Croft

RTM conducts review-level matching between the user and item, where each review has a dynamic effect according to the context in the sequence.

AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization

3 code implementations • EACL 2021 • Keping Bi, Rahul Jha, W. Bruce Croft, Asli Celikyilmaz

Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step.

Document Summarization · Extractive Document Summarization +3
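
As a rough illustration of the "separate redundancy scoring step" variant described in the entry above, the sketch below greedily selects sentences while discounting each candidate by its similarity to the summary built so far. The salience scores, the Jaccard similarity, and the trade-off constant are stand-ins for AREDSUM's learned components.

```python
# Illustrative redundancy-aware iterative sentence selection (not AREDSUM's learned scorer).

def select_summary(sentences, salience, similarity, budget=2, trade_off=0.5):
    """Greedily pick sentences, penalizing each candidate's redundancy with the summary so far."""
    selected, remaining = [], list(range(len(sentences)))
    while remaining and len(selected) < budget:
        def adjusted(i):
            redundancy = max((similarity(sentences[i], sentences[j]) for j in selected), default=0.0)
            return trade_off * salience[i] - (1 - trade_off) * redundancy
        best = max(remaining, key=adjusted)
        selected.append(best)
        remaining.remove(best)
    return [sentences[i] for i in selected]

def jaccard(a, b):  # crude word-overlap similarity, used only for this sketch
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

docs = ["the cat sat on the mat", "a cat was sitting on the mat", "stocks rallied on friday"]
print(select_summary(docs, salience=[0.9, 0.85, 0.6], similarity=jaccard))
```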

Explainable Product Search with a Dynamic Relation Embedding Model

no code implementations • 16 Sep 2019 • Qingyao Ai, Yongfeng Zhang, Keping Bi, W. Bruce Croft

Specifically, we propose to model the "search and purchase" behavior as a dynamic relation between users and items, and create a dynamic knowledge graph based on both the multi-relational product data and the context of the search session.

Relation · Retrieval

A Study of Context Dependencies in Multi-page Product Search

no code implementations • 9 Sep 2019 • Keping Bi, Choon Hui Teo, Yesh Dattatreya, Vijai Mohan, W. Bruce Croft

In this paper, we study RF techniques based on both long-term and short-term context dependencies in multi-page product search.

Retrieval

Conversational Product Search Based on Negative Feedback

no code implementations • 4 Sep 2019 • Keping Bi, Qingyao Ai, Yongfeng Zhang, W. Bruce Croft

In this paper, we propose a conversational paradigm for product search driven by non-relevant items, based on which fine-grained feedback is collected and used to show better results in the next iteration.

Conversational Search

Leverage Implicit Feedback for Context-aware Product Search

no code implementations • 4 Sep 2019 • Keping Bi, Choon Hui Teo, Yesh Dattatreya, Vijai Mohan, W. Bruce Croft

However, customers with little or no purchase history do not benefit from personalized product search.

Re-Ranking

Revisiting Iterative Relevance Feedback for Document and Passage Retrieval

no code implementations • 13 Dec 2018 • Keping Bi, Qingyao Ai, W. Bruce Croft

We conduct extensive experiments to analyze and compare IRF with the standard top-k RF framework on document and passage retrieval.

Passage Retrieval · Retrieval
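
A minimal sketch of the iterative relevance feedback loop studied in the entry above: one document is judged per round and the query is updated before the next retrieval, whereas standard top-k RF would apply a single update after k judgments. The Rocchio-style update, the toy term-overlap retriever, and all weights are illustrative assumptions.

```python
# Illustrative iterative relevance feedback (IRF) loop over bag-of-words vectors.
from collections import Counter

def retrieve(query_vec, collection, k=1, exclude=()):
    """Return the k unseen documents with the highest term-overlap score against the query."""
    scored = [(sum(query_vec[t] * c for t, c in Counter(d.split()).items()), d)
              for d in collection if d not in exclude]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

def rocchio_update(query_vec, feedback_doc, alpha=1.0, beta=0.5):
    """Fold a positively judged document into the query vector (Rocchio-style)."""
    updated = Counter({t: alpha * w for t, w in query_vec.items()})
    for t, c in Counter(feedback_doc.split()).items():
        updated[t] += beta * c
    return updated

collection = ["deep neural ranking models", "neural passage retrieval", "gardening tips"]
query = Counter("neural retrieval".split())
judged = []
for _ in range(2):                       # two feedback iterations
    doc = retrieve(query, collection, exclude=judged)[0]
    judged.append(doc)
    query = rocchio_update(query, doc)   # top-k RF would instead update once after k judgments
print(judged)
```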

Learning a Deep Listwise Context Model for Ranking Refinement

1 code implementation • 16 Apr 2018 • Qingyao Ai, Keping Bi, Jiafeng Guo, W. Bruce Croft

Specifically, we employ a recurrent neural network to sequentially encode the top results using their feature vectors, learn a local context model and use it to re-rank the top results.

Information Retrieval · Learning-To-Rank +1
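
A minimal PyTorch sketch of the listwise re-ranking idea described in the entry above: a recurrent network encodes the feature vectors of the top-k results, and a small head produces refined scores from the resulting local context. The GRU, the linear scorer, and all dimensions are illustrative assumptions rather than the paper's exact architecture.

```python
# Illustrative listwise context model: encode top-k results with a GRU, then re-score them.
import torch
import torch.nn as nn

class ListwiseContextReRanker(nn.Module):
    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)  # sequential encoding of top-k
        self.scorer = nn.Linear(hidden, 1)                         # refined score per result

    def forward(self, topk_feats):               # topk_feats: (batch, k, feat_dim)
        context, _ = self.encoder(topk_feats)    # local context representation at each position
        return self.scorer(context).squeeze(-1)  # (batch, k) refined relevance scores

reranker = ListwiseContextReRanker()
refined = reranker(torch.randn(2, 10, 16))  # two queries, top-10 results each
print(refined.shape)                        # torch.Size([2, 10])
```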

Unbiased Learning to Rank with Unbiased Propensity Estimation

1 code implementation • 16 Apr 2018 • Qingyao Ai, Keping Bi, Cheng Luo, Jiafeng Guo, W. Bruce Croft

We find that the problem of estimating a propensity model from click data is a dual problem of unbiased learning to rank.

Learning-To-Rank
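
As a rough illustration of the inverse propensity weighting principle behind this line of work, the sketch below up-weights each clicked document by the inverse of its estimated examination probability inside a pairwise loss. The propensity values and the toy click log are illustrative assumptions, and the snippet is not the paper's dual learning algorithm.

```python
# Illustrative inverse-propensity-weighted pairwise loss for unbiased learning to rank.
import math

def ipw_rank_loss(clicks, scores, propensities):
    """Pairwise logistic loss where each click is weighted by 1 / P(position examined)."""
    loss = 0.0
    for i, (c_i, s_i, p_i) in enumerate(zip(clicks, scores, propensities)):
        if not c_i:
            continue
        weight = 1.0 / p_i  # inverse propensity weight removes position bias in expectation
        for j, s_j in enumerate(scores):
            if j != i:
                loss += weight * math.log(1.0 + math.exp(-(s_i - s_j)))
    return loss

# Toy example: one query with four ranked documents.
clicks = [1, 0, 1, 0]                  # observed clicks
scores = [2.1, 1.7, 0.4, 0.2]          # current ranker scores
propensities = [1.0, 0.6, 0.35, 0.2]   # estimated examination probabilities by position
print(ipw_rank_loss(clicks, scores, propensities))
```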
