Search Results for author: Ji-Rong Wen

Found 205 papers, 125 papers with code

There Are a Thousand Hamlets in a Thousand People’s Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory

no code implementations ACL 2022 Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan

Knowledge-grounded conversation (KGC) shows great potential in building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient of it.

Chatbot

Finding the Dominant Winning Ticket in Pre-Trained Language Models

no code implementations Findings (ACL) 2022 Zhuocheng Gong, Di He, Yelong Shen, Tie-Yan Liu, Weizhu Chen, Dongyan Zhao, Ji-Rong Wen, Rui Yan

Empirically, we show that (a) the dominant winning ticket can achieve performance that is comparable with that of the full-parameter model, (b) the dominant winning ticket is transferable across different tasks, and (c) the dominant winning ticket has a natural structure within each parameter matrix.
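
A minimal sketch of the general "ticket" extraction idea (our own illustration under simplifying assumptions, not the paper's procedure): a sparse subnetwork can be obtained by keeping only the largest-magnitude entries of each parameter matrix.

```python
# Magnitude-pruning sketch: keep the top fraction of weights per matrix and
# zero out the rest, yielding a sparse "ticket" mask.
import numpy as np

def winning_ticket_mask(W, keep=0.1):
    """Return a 0/1 mask keeping the `keep` fraction of largest-|W| entries."""
    k = max(1, int(keep * W.size))
    threshold = np.partition(np.abs(W).ravel(), -k)[-k]  # k-th largest |W|
    return (np.abs(W) >= threshold).astype(W.dtype)

W = np.random.randn(4, 4)
print(W * winning_ticket_mask(W, keep=0.25))  # pruned parameter matrix
```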

Semantic Sentence Matching via Interacting Syntax Graphs

no code implementations COLING 2022 Chen Xu, Jun Xu, Zhenhua Dong, Ji-Rong Wen

In this paper, we formalize the task of semantic sentence matching as a problem of graph matching in which each sentence is represented as a directed graph according to its syntactic structures.

Graph Matching Sentence

Optimal Partial Transport Based Sentence Selection for Long-form Document Matching

1 code implementation COLING 2022 Weijie Yu, Liang Pang, Jun Xu, Bing Su, Zhenhua Dong, Ji-Rong Wen

Owing to the partial transport properties of OPT, the selected key sentences not only effectively enhance the matching accuracy, but can also be explained as the rationales for the matching results.

Sentence
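
For reference, a common formulation of partial optimal transport (our paraphrase; the paper's exact objective may differ) relaxes the usual marginal constraints so that only a fraction s of the total mass is transported:

```latex
\min_{T \ge 0} \; \langle T, C \rangle
\quad \text{s.t.} \quad
T\mathbf{1} \le \mathbf{a}, \qquad
T^{\top}\mathbf{1} \le \mathbf{b}, \qquad
\mathbf{1}^{\top} T \mathbf{1} = s ,
```

where C is the cost matrix between sentence pairs; the sentences that actually receive transported mass act as the selected key sentences, which is what makes the selection interpretable.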

EulerFormer: Sequential User Behavior Modeling with Complex Vector Attention

1 code implementation26 Mar 2024 Zhen Tian, Wayne Xin Zhao, Changwang Zhang, Xin Zhao, Zhongrui Ma, Ji-Rong Wen

The core of transformer architecture lies in the self-attention mechanism, which computes the pairwise attention scores in a sequence.

Contrastive Learning
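
For context, the pairwise attention scores referred to above come from standard scaled dot-product self-attention; here is a minimal NumPy sketch of the vanilla mechanism (not EulerFormer's complex-vector variant):

```python
# Vanilla scaled dot-product self-attention: every position attends to every
# other position via pairwise Q·K scores, normalized with a row-wise softmax.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise attention scores
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # a toy 5-token sequence
out = self_attention(X, *(rng.normal(size=(8, 8)) for _ in range(3)))
```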

ChainLM: Empowering Large Language Models with Improved Chain-of-Thought Prompting

1 code implementation21 Mar 2024 Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

In response to this challenge, we present an empirical investigation of CoT prompting and introduce CoTGenius, a novel framework designed for the automatic generation of superior CoT prompts.

An Analysis on Matching Mechanisms and Token Pruning for Late-interaction Models

no code implementations20 Mar 2024 Qi Liu, Gang Guo, Jiaxin Mao, Zhicheng Dou, Ji-Rong Wen, Hao Jiang, Xinyu Zhang, Zhao Cao

Based on these findings, we then propose several simple document pruning methods to reduce the storage overhead and compare the effectiveness of different pruning methods on different late-interaction models.

Retrieval

A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation

no code implementations20 Mar 2024 Bowen Zheng, Zihan Lin, Enze Liu, Chen Yang, Enyang Bai, Cheng Ling, Wayne Xin Zhao, Ji-Rong Wen

Meanwhile, we leverage the LLM recommender as a supplemental component (discarded in deployment) to better capture underlying user preferences from heterogeneous interaction behaviors.

Language Modelling Large Language Model +1

Less is More: Data Value Estimation for Visual Instruction Tuning

no code implementations14 Mar 2024 Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, Ji-Rong Wen

To investigate this issue, we conduct a series of empirical studies, which reveal a significant redundancy within the visual instruction datasets, and show that greatly reducing the size of several instruction datasets does not even affect the performance.

StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses

no code implementations13 Mar 2024 Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, Ji-Rong Wen, Rui Yan

Accordingly, we introduce StreamingDialogue, which compresses long dialogue history into conv-attn sinks with minimal losses, and thus reduces computational complexity quadratically with the number of sinks (i.e., the number of utterances).

Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models

1 code implementation4 Mar 2024 Changyu Chen, Xiting Wang, Ting-En Lin, Ang Lv, Yuchuan Wu, Xin Gao, Ji-Rong Wen, Rui Yan, Yongbin Li

In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains.

Data Augmentation GSM8K +2
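
The core recipe is simple enough to sketch: randomly replace a fraction of tokens in a reasoning trace with a mask token before fine-tuning on it. The mask token and ratio below are illustrative assumptions, not the paper's exact settings:

```python
# Randomly mask a fraction of tokens in a chain-of-thought training example.
import random

MASK = "<mask>"  # hypothetical mask token

def mask_reasoning(tokens, ratio=0.2, seed=0):
    rng = random.Random(seed)
    return [MASK if rng.random() < ratio else tok for tok in tokens]

steps = "Janet has 16 eggs and sells 7 so 16 - 7 = 9 remain".split()
print(" ".join(mask_reasoning(steps)))
```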

Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers

1 code implementation27 Feb 2024 Xinyu Tang, Xiaolei Wang, Wayne Xin Zhao, Siyuan Lu, Yaliang Li, Ji-Rong Wen

Focusing on these two aspects, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies for LLM-based prompt optimizers.

REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering

1 code implementation27 Feb 2024 Yuhao Wang, Ruiyang Ren, Junyi Li, Wayne Xin Zhao, Jing Liu, Ji-Rong Wen

By combining the improvements in both architecture and training, our proposed REAR can better utilize external knowledge by effectively perceiving the relevance of retrieved documents.

Open-Domain Question Answering Retrieval

BASES: Large-scale Web Search User Simulation with Large Language Model based Agents

no code implementations27 Feb 2024 Ruiyang Ren, Peng Qiu, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Hua Wu, Ji-Rong Wen, Haifeng Wang

Due to the excellent capabilities of large language models (LLMs), it becomes feasible to develop LLM-based agents for reliable user simulation.

Information Retrieval Language Modelling +3

Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models

no code implementations26 Feb 2024 Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, Ji-Rong Wen

Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.

UFO: a Unified and Flexible Framework for Evaluating Factuality of Large Language Models

1 code implementation22 Feb 2024 Zhaoheng Huang, Zhicheng Dou, Yutao Zhu, Ji-Rong Wen

To address these challenges, we categorize four available fact sources: human-written evidence, reference documents, search engine results, and LLM knowledge, along with five text generation tasks containing six representative datasets.

Hallucination Retrieval +1

Large Language Model-based Human-Agent Collaboration for Complex Task Solving

1 code implementation20 Feb 2024 Xueyang Feng, Zhi-Yuan Chen, Yujia Qin, Yankai Lin, Xu Chen, Zhiyuan Liu, Ji-Rong Wen

We construct a human-agent collaboration dataset to train this policy model in an offline reinforcement learning environment.

Language Modelling Large Language Model +1

Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs

1 code implementation19 Feb 2024 Jiejun Tan, Zhicheng Dou, Yutao Zhu, Peidong Guo, Kun Fang, Ji-Rong Wen

The integration of large language models (LLMs) and search engines represents a significant evolution in knowledge acquisition methodologies.

Question Answering

KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning over Knowledge Graph

no code implementations17 Feb 2024 Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yang song, Chen Zhu, HengShu Zhu, Ji-Rong Wen

To guarantee the effectiveness, we leverage a programming language to formulate the multi-hop reasoning process over the KG, and synthesize a code-based instruction dataset to fine-tune the base LLM.

Knowledge Graphs

INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning

1 code implementation12 Jan 2024 Yutao Zhu, Peitian Zhang, Chenghao Zhang, Yifei Chen, Binyu Xie, Zhicheng Dou, Zheng Liu, Ji-Rong Wen

Despite this, their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language.

document understanding Information Retrieval +2

Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint

1 code implementation11 Jan 2024 Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Junchen Wan, Fuzheng Zhang, Di Zhang, Ji-Rong Wen

To address it, we propose a new RL method named RLMEC that incorporates a generative model as the reward model, which is trained by the erroneous solution rewriting task under the minimum editing constraint, and can produce token-level rewards for RL training.

Question Answering Reinforcement Learning (RL)
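
One way to picture token-level rewards derived from a minimally edited correction (an illustrative sketch using an alignment heuristic of our own, not the paper's trained reward model): tokens that survive the rewrite are rewarded, while edited tokens are not.

```python
# Align a model solution with its minimally edited rewrite and reward the
# tokens the rewriter kept; changed tokens receive zero reward.
import difflib

def token_rewards(solution_tokens, rewritten_tokens):
    sm = difflib.SequenceMatcher(a=solution_tokens, b=rewritten_tokens)
    rewards = [0.0] * len(solution_tokens)
    for block in sm.get_matching_blocks():           # unchanged spans
        for i in range(block.a, block.a + block.size):
            rewards[i] = 1.0
    return rewards

print(token_rewards("2 + 3 = 6".split(), "2 + 3 = 5".split()))
# -> [1.0, 1.0, 1.0, 1.0, 0.0]: only the wrong answer token goes unrewarded
```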

Prompting Large Language Models for Recommender Systems: A Comprehensive Framework and Empirical Analysis

no code implementations10 Jan 2024 Lanling Xu, Junjie Zhang, Bingqian Li, Jinpeng Wang, Mingchen Cai, Wayne Xin Zhao, Ji-Rong Wen

As for the use of LLMs as recommenders, we analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results based on the classification of LLMs.

Prompt Engineering Recommendation Systems

The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models

1 code implementation6 Jan 2024 Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

To tackle LLM hallucination, three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation).

Hallucination

ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph

no code implementations30 Dec 2023 Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen

To better perform reasoning on KG, recent work typically adopts a pre-trained language model (PLM) to model the question, and a graph neural network (GNN) based module to perform multi-hop reasoning on the KG.

Language Modelling Question Answering

Scaling Law of Large Sequential Recommendation Models

no code implementations19 Nov 2023 Gaowei Zhang, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ji-Rong Wen

We find that scaling up the model size can greatly boost the performance on these challenging tasks, which again verifies the benefits of large recommendation models.

Sequential Recommendation

Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation

1 code implementation15 Nov 2023 Bowen Zheng, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ming Chen, Ji-Rong Wen

To address this challenge, in this paper, we propose a new LLM-based recommendation model called LC-Rec, which can better integrate language and collaborative semantics for recommender systems.

Quantization Recommendation Systems

Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse

1 code implementation13 Nov 2023 Ang Lv, Kaiyi Zhang, Shufang Xie, Quan Tu, Yuhan Chen, Ji-Rong Wen, Rui Yan

Recent studies have highlighted a phenomenon in large language models (LLMs) known as "the reversal curse," in which the order of knowledge entities in the training data biases the models' comprehension.

Denoising Language Modelling

AI-accelerated Discovery of Altermagnetic Materials

1 code implementation8 Nov 2023 Ze-Feng Gao, Shuai Qu, Bocheng Zeng, Yang Liu, Ji-Rong Wen, Hao Sun, Peng-Jie Guo, Zhong-Yi Lu

Altermagnetism, a new magnetic phase, has been theoretically proposed and experimentally verified to be distinct from ferromagnetism and antiferromagnetism.

Don't Make Your LLM an Evaluation Benchmark Cheater

no code implementations3 Nov 2023 Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han

Large language models (LLMs) have greatly advanced the frontiers of artificial intelligence, attaining remarkable improvement in model capacity.

What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning

1 code implementation2 Nov 2023 Yifan Du, Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Jinpeng Wang, Chuyuan Wang, Mingchen Cai, Ruihua Song, Ji-Rong Wen

By conducting a comprehensive empirical study, we find that instructions focused on complex visual reasoning tasks are particularly effective in improving the performance of MLLMs on evaluation benchmarks.

Visual Reasoning Zero-shot Generalization

From Indeterminacy to Determinacy: Augmenting Logical Reasoning Capabilities with Large Language Models

1 code implementation28 Oct 2023 Hongda Sun, Weikai Xu, Wei Liu, Jian Luan, Bin Wang, Shuo Shang, Ji-Rong Wen, Rui Yan

To address these challenges, we propose DetermLR, a novel reasoning framework that formulates the reasoning process as a transformational journey from indeterminate premises to determinate ones.

Logical Reasoning

AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems

no code implementations13 Oct 2023 Junjie Zhang, Yupeng Hou, Ruobing Xie, Wenqi Sun, Julian McAuley, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

The optimized agents can also propagate their preferences to other agents in subsequent interactions, implicitly capturing the collaborative filtering idea.

Collaborative Filtering Decision Making +2

BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models

1 code implementation23 Sep 2023 Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Recently, multiple studies have committed to extending the context length and enhancing the long text modeling capabilities of LLMs.

Code Completion Hallucination +2

Optimizing Factual Accuracy in Text Generation through Dynamic Knowledge Selection

no code implementations30 Aug 2023 Hongjin Qian, Zhicheng Dou, Jiejun Tan, Haonan Chen, Haoqi Gu, Ruofei Lai, Xinyu Zhang, Zhao Cao, Ji-Rong Wen

Previous methods use external knowledge as references for text generation to enhance factuality but often struggle with the knowledge mix-up (e.g., entity mismatch) of irrelevant references.

Text Generation

A Survey on Large Language Model based Autonomous Agents

2 code implementations22 Aug 2023 Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, ZhiYuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen

In this paper, we present a comprehensive survey of these studies, delivering a systematic review of the field of LLM-based autonomous agents from a holistic perspective.

Language Modelling Large Language Model

Uncovering User Interest from Biased and Noised Watch Time in Video Recommendation

1 code implementation16 Aug 2023 Haiyuan Zhao, Lei Zhang, Jun Xu, Guohao Cai, Zhenhua Dong, Ji-Rong Wen

In video recommendation, watch time is commonly adopted as an indicator of user interest.

Large Language Models for Information Retrieval: A Survey

1 code implementation14 Aug 2023 Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen

This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity).

Information Retrieval Question Answering +2

LTP-MMF: Towards Long-term Provider Max-min Fairness Under Recommendation Feedback Loops

1 code implementation11 Aug 2023 Chen Xu, Xiaopeng Ye, Jun Xu, Xiao Zhang, Weiran Shen, Ji-Rong Wen

RFL means that the recommender system can only receive feedback on exposed items from users and update recommender models incrementally based on this feedback.

Fairness Recommendation Systems

Counterfactual Cross-modality Reasoning for Weakly Supervised Video Moment Localization

1 code implementation10 Aug 2023 Zezhong Lv, Bing Su, Ji-Rong Wen

Finally, by suppressing the unimodal effect of masked query, we can rectify the reconstructions of video proposals to perform reasonable contrastive learning.

Contrastive Learning counterfactual

Synthesizing Long-Term Human Motions with Diffusion Models via Coherent Sampling

1 code implementation3 Aug 2023 Zhao Yang, Bing Su, Ji-Rong Wen

Firstly, they cannot directly generate coherent motions and require additional operations such as interpolation to process the generated actions.

Sentence

Spatio-Temporal Branching for Motion Prediction using Motion Increments

1 code implementation2 Aug 2023 Jiexin Wang, Yujie Zhou, Wenwen Qiang, Ying Ba, Bing Su, Ji-Rong Wen

Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications, but it remains a challenging task due to the stochastic and aperiodic nature of future poses.

Human motion prediction Knowledge Distillation +1

Alleviating the Long-Tail Problem in Conversational Recommender Systems

1 code implementation21 Jul 2023 Zhipeng Zhao, Kun Zhou, Xiaolei Wang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen

Conversational recommender systems (CRS) aim to provide the recommendation service via natural language conversations.

Recommendation Systems Retrieval

Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation

1 code implementation20 Jul 2023 Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang

In this study, we present an initial analysis of the factual knowledge boundaries of LLMs and how retrieval augmentation affects LLMs on open-domain QA.

Open-Domain Question Answering Retrieval +1

Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study

1 code implementation16 Jul 2023 Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models.

In-Context Learning Instruction Following +1

RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit

1 code implementation8 Jun 2023 Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen

To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a RETrieval-Augmented LLM toolkit.

Answer Generation Fact Checking +5

Improving Conversational Recommendation Systems via Counterfactual Data Simulation

1 code implementation5 Jun 2023 Xiaolei Wang, Kun Zhou, Xinyu Tang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen

To develop our approach, we characterize user preference and organize the conversation flow by the entities involved in the dialogue, and design a multi-stage recommendation dialogue simulator based on a conversation flow language model.

counterfactual Data Augmentation +2

User Behavior Simulation with Large Language Model based Agents

1 code implementation5 Jun 2023 Lei Wang, Jingsen Zhang, Hao Yang, ZhiYuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Jun Xu, Zhicheng Dou, Jun Wang, Ji-Rong Wen

Simulating high-quality user behavior data has always been a fundamental problem in human-centered applications, where the major difficulty originates from the intricate mechanism of the human decision process.

Language Modelling Large Language Model +2

Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning

1 code implementation NeurIPS 2023 Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, Ji-Rong Wen

Based on this finding, we propose a new approach that can deliberate the reasoning steps with tool interfaces, namely DELI.

Math

Zero-shot Visual Question Answering with Language Model Feedback

1 code implementation26 May 2023 Yifan Du, Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA).

Language Modelling Question Answering +1

ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models

1 code implementation23 May 2023 Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen

Although large language models (LLMs) have achieved excellent performance in a variety of evaluation benchmarks, they still struggle in complex reasoning tasks which require specific knowledge and multi-hop reasoning.

Math

Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models

1 code implementation22 May 2023 Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen

The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs), which rely on natural language conversations to satisfy user needs.

Explanation Generation Recommendation Systems

HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models

2 code implementations19 May 2023 Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

Large language models (LLMs), such as ChatGPT, are prone to generate hallucinations, i.e., content that conflicts with the source or cannot be verified by the factual knowledge.

Hallucination Hallucination Evaluation

When Search Meets Recommendation: Learning Disentangled Search Representation for Recommendation

1 code implementation18 May 2023 Zihua Si, Zhongxiang Sun, Xiao Zhang, Jun Xu, Xiaoxue Zang, Yang song, Kun Gai, Ji-Rong Wen

In our paper, we propose a Search-Enhanced framework for the Sequential Recommendation (SESRec) that leverages users' search interests for recommendation, by disentangling similar and dissimilar representations within S&R behaviors.

Contrastive Learning Disentanglement +1

TOME: A Two-stage Approach for Model-based Retrieval

no code implementations18 May 2023 Ruiyang Ren, Wayne Xin Zhao, Jing Liu, Hua Wu, Ji-Rong Wen, Haifeng Wang

Recently, model-based retrieval has emerged as a new paradigm in text retrieval that discards the index in the traditional retrieval model and instead memorizes the candidate corpora using model parameters.

Natural Questions Retrieval +1

The Web Can Be Your Oyster for Improving Large Language Models

1 code implementation18 May 2023 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen

In order to further improve the capacity of LLMs for knowledge-intensive tasks, we consider augmenting LLMs with the large-scale web using a search engine.

Retrieval World Knowledge

Evaluating Object Hallucination in Large Vision-Language Models

2 code implementations17 May 2023 YiFan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, Ji-Rong Wen

Despite the promising progress on LVLMs, we find that LVLMs suffer from the hallucination problem, i.e., they tend to generate objects that are inconsistent with the target images in the descriptions.

Hallucination Object

StructGPT: A General Framework for Large Language Model to Reason over Structured Data

1 code implementation16 May 2023 Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, Ji-Rong Wen

Specifically, we propose an invoking-linearization-generation procedure to support LLMs in reasoning on the structured data with the help of the external interfaces.

Language Modelling Large Language Model +1

Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach

no code implementations11 May 2023 Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs.

Instruction Following Language Modelling +2

Diffusion-NAT: Self-Prompting Discrete Diffusion for Non-Autoregressive Text Generation

no code implementations6 May 2023 Kun Zhou, YiFan Li, Wayne Xin Zhao, Ji-Rong Wen

To solve it, we propose Diffusion-NAT, which introduces discrete diffusion models (DDMs) into NAR text-to-text generation and integrates BART to improve the performance.

Denoising Text Generation

GlyphDiffusion: Text Generation as Image Generation

no code implementations25 Apr 2023 Junyi Li, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

In this way, conditional text generation can be cast as a glyph image generation task, and it is then natural to apply continuous diffusion models to discrete texts.

Conditional Text Generation Glyph Image Generation +2

EulerNet: Adaptive Feature Interaction Learning via Euler's Formula for CTR Prediction

1 code implementation21 Apr 2023 Zhen Tian, Ting Bai, Wayne Xin Zhao, Ji-Rong Wen, Zhao Cao

EulerNet converts the exponential powers of feature interactions into simple linear combinations of the modulus and phase of the complex features, making it possible to adaptively learn the high-order feature interactions in an efficient way.

Click-Through Rate Prediction
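
The identity behind this is Euler's formula: writing each feature embedding as a complex number turns a multiplicative interaction into linear operations on log-modulus and phase (a condensed restatement in our own notation):

```latex
x_j = r_j e^{i\theta_j}
\;\Longrightarrow\;
\prod_j x_j^{\alpha_j}
= \Big(\textstyle\prod_j r_j^{\alpha_j}\Big)\, e^{\, i \sum_j \alpha_j \theta_j},
\qquad
\log \prod_j r_j^{\alpha_j} = \sum_j \alpha_j \log r_j ,
```

so the interaction orders α_j act linearly on log r_j and θ_j, which is what allows high-order feature interactions to be learned adaptively.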

WebBrain: Learning to Generate Factually Correct Articles for Queries by Grounding on Large Web Corpus

1 code implementation10 Apr 2023 Hongjing Qian, Yutao Zhu, Zhicheng Dou, Haoqi Gu, Xinyu Zhang, Zheng Liu, Ruofei Lai, Zhao Cao, Jian-Yun Nie, Ji-Rong Wen

In this paper, we introduce a new NLP task -- generating short factual articles with references for queries by mining supporting evidence from the Web.

Retrieval Text Generation

A Survey of Large Language Models

4 code implementations31 Mar 2023 Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, YiFan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen

To discriminate the difference in parameter scale, the research community has coined the term large language models (LLMs) for the PLMs of significant size.

Language Modelling

Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture

no code implementations27 Mar 2023 Peiyu Liu, Ze-Feng Gao, Yushuo Chen, Wayne Xin Zhao, Ji-Rong Wen

Based on such a decomposition, our architecture shares the central tensor across all layers for reducing the model size and meanwhile keeps layer-specific auxiliary tensors (also using adapters) for enhancing the adaptation flexibility.

Dually Enhanced Propensity Score Estimation in Sequential Recommendation

1 code implementation15 Mar 2023 Chen Xu, Jun Xu, Xu Chen, Zhenghua Dong, Ji-Rong Wen

According to the graph, two complementary propensity scores are estimated from the views of item and user, respectively, based on the same set of user feedback data.

Sequential Recommendation

Diffusion Models for Non-autoregressive Text Generation: A Survey

1 code implementation12 Mar 2023 YiFan Li, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen

In this survey, we review the recent progress in diffusion models for NAR text generation.

Text Generation

TextBox 2.0: A Text Generation Library with Pre-trained Language Models

1 code implementation26 Dec 2022 Tianyi Tang, Junyi Li, Zhipeng Chen, Yiwen Hu, Zhuohao Yu, Wenxun Dai, Zican Dong, Xiaoxue Cheng, Yuhao Wang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs).

Abstractive Text Summarization Data-to-Text Generation +7

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

1 code implementation15 Dec 2022 Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen

Pre-trained Transformers (e.g., BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors.

Passage Retrieval Retrieval

Visually-augmented pretrained language models for NLP tasks without images

1 code implementation15 Dec 2022 Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Qinyu Zhang, Ji-Rong Wen

Although pre-trained language models (PLMs) have shown impressive performance with text-only self-supervised training, they are found to lack visual semantics or commonsense.

Retrieval

UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph

1 code implementation2 Dec 2022 Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen

Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG).

Language Modelling Multi-hop Question Answering +2

CDSM: Cascaded Deep Semantic Matching on Textual Graphs Leveraging Ad-hoc Neighbor Selection

1 code implementation30 Nov 2022 Jing Yao, Zheng Liu, Junhan Yang, Zhicheng Dou, Xing Xie, Ji-Rong Wen

In the first stage, a lightweight CNN-based ad-hoc neighbor selector is deployed to filter useful neighbors for the matching task at a small computation cost.

Recent Advances in RecBole: Extensions with more Practical Considerations

1 code implementation28 Nov 2022 Lanling Xu, Zhen Tian, Gaowei Zhang, Lei Wang, Junjie Zhang, Bowen Zheng, YiFan Li, Yupeng Hou, Xingyu Pan, Yushuo Chen, Wayne Xin Zhao, Xu Chen, Ji-Rong Wen

In order to show the recent updates in RecBole, we write this technical report to introduce our latest improvements.

Dense Text Retrieval based on Pretrained Language Models: A Survey

2 code implementations27 Nov 2022 Wayne Xin Zhao, Jing Liu, Ruiyang Ren, Ji-Rong Wen

With powerful PLMs, we can effectively learn the representations of queries and texts in the latent representation space, and further construct the semantic matching function between the dense vectors for relevance modeling.

Retrieval Text Retrieval

Directed Acyclic Graph Factorization Machines for CTR Prediction via Knowledge Distillation

1 code implementation21 Nov 2022 Zhen Tian, Ting Bai, Zibin Zhang, Zhiyuan Xu, Kangyi Lin, Ji-Rong Wen, Wayne Xin Zhao

Some recent knowledge distillation based methods transfer knowledge from complex teacher models to shallow student models for accelerating the online model inference.

Click-Through Rate Prediction Knowledge Distillation +1

SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval

1 code implementation21 Oct 2022 Kun Zhou, Yeyun Gong, Xiao Liu, Wayne Xin Zhao, Yelong Shen, Anlei Dong, Jingwen Lu, Rangan Majumder, Ji-Rong Wen, Nan Duan, Weizhu Chen

Thus, we propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.

Retrieval Text Retrieval
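
A sketch of the sampling idea: weight each candidate negative by how close its retrieval score is to the positive's score, so that ambiguous (hard but likely true) negatives are drawn most often. The Gaussian-shaped density below is an assumption for illustration; see the paper for the tuned form.

```python
# Sample k negatives with probability peaked near the positive's score.
import numpy as np

def sample_negatives(neg_scores, pos_score, k, a=1.0, b=0.0, seed=0):
    s = np.asarray(neg_scores, dtype=float)
    p = np.exp(-a * (s - pos_score + b) ** 2)   # highest near pos_score - b
    p /= p.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(s), size=k, replace=False, p=p)

print(sample_negatives([3.2, 1.0, 2.9, 0.4], pos_score=3.0, k=2))
```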

Privacy-Preserved Neural Graph Similarity Learning

1 code implementation21 Oct 2022 Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen

To develop effective and efficient graph similarity learning (GSL) models, a series of data-driven neural algorithms have been proposed in recent years.

Graph Matching Graph Similarity +1

Law Article-Enhanced Legal Case Matching: a Causal Learning Approach

1 code implementation20 Oct 2022 Zhongxiang Sun, Jun Xu, Xiao Zhang, Zhenhua Dong, Ji-Rong Wen

We show that the framework is model-agnostic, and a number of legal case matching models can be applied as the underlying models.

Semantic Text Matching Text Matching

Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity

2 code implementations16 Sep 2022 Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Farid Razzak, Ji-Rong Wen, Hui Xiong

To this end, we propose a methodology, specifically the consistency and complementarity network (CoCoNet), which leverages strict global inter-view consistency and local cross-view complementarity-preserving regularization to comprehensively learn representations from multiple views.

Representation Learning Self-Supervised Learning

A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language

4 code implementations12 Sep 2022 Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao Sun, Zhiwu Lu, Ji-Rong Wen

Although artificial intelligence (AI) has made significant progress in understanding molecules in a wide range of fields, existing models generally acquire the single cognitive ability from the single molecular modality.

Contrastive Learning Cross-Modal Retrieval +4

Enhancing User Behavior Sequence Modeling by Generative Tasks for Session Search

1 code implementation23 Aug 2022 Haonan Chen, Zhicheng Dou, Yutao Zhu, Zhao Cao, Xiaohua Cheng, Ji-Rong Wen

To help encode the current user behavior sequence, we propose to use a decoder together with the information of future sequences and a supplemental query.

Session Search

Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer

no code implementations19 Aug 2022 Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitian Zhang, Ji-Rong Wen

In order to unify these two stages, we explore a model-based indexer for document retrieval.

Retrieval

Modeling Two-Way Selection Preference for Person-Job Fit

1 code implementation18 Aug 2022 Chen Yang, Yupeng Hou, Yang song, Tao Zhang, Ji-Rong Wen, Wayne Xin Zhao

To model the two-way selection preference from the dual-perspective of job seekers and employers, we incorporate two different nodes for each candidate (or job) and characterize both successful matching and failed matching via a unified dual-perspective interaction graph.

Contrastive Learning Graph Representation Learning +1

Multimodal foundation models are better simulators of the human brain

1 code implementation17 Aug 2022 Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, Jingyuan Wen, Changde Du, Xin Zhao, Hao Sun, Huiguang He, Ji-Rong Wen

Further, from the perspective of neural encoding (based on our foundation model), we find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.

STAR-GNN: Spatial-Temporal Video Representation for Content-based Retrieval

no code implementations15 Aug 2022 Guoping Zhao, Bingqing Zhang, Mingyu Zhang, Yaxian Li, Jiajun Liu, Ji-Rong Wen

It models a video with a lattice feature graph in which the nodes represent regions of different granularity, with weighted edges that represent the spatial and temporal links.

Representation Learning Retrieval +1

MVP: Multi-task Supervised Pre-training for Natural Language Generation

2 code implementations24 Jun 2022 Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation.

Text Generation

Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning

1 code implementation19 Jun 2022 Xiaolei Wang, Kun Zhou, Ji-Rong Wen, Wayne Xin Zhao

Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach.

Language Modelling Recommendation Systems +1

RecBole 2.0: Towards a More Up-to-Date Recommendation Library

2 code implementations15 Jun 2022 Wayne Xin Zhao, Yupeng Hou, Xingyu Pan, Chen Yang, Zeyu Zhang, Zihan Lin, Jingsen Zhang, Shuqing Bian, Jiakai Tang, Wenqi Sun, Yushuo Chen, Lanling Xu, Gaowei Zhang, Zhen Tian, Changxin Tian, Shanlei Mu, Xinyan Fan, Xu Chen, Ji-Rong Wen

In order to support the study of recent advances in recommender systems, this paper presents an extended recommendation library consisting of eight packages for up-to-date topics and architectures.

Benchmarking Data Augmentation +3

JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding

1 code implementation13 Jun 2022 Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen

Considering the complex nature of mathematical texts, we design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.

Language Modelling Math

Towards Universal Sequence Representation Learning for Recommender Systems

1 code implementation13 Jun 2022 Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

In order to develop effective sequential recommenders, a series of sequence representation learning (SRL) methods are proposed to model historical user behaviors.

Recommendation Systems Representation Learning

Feature-aware Diversified Re-ranking with Disentangled Representations for Relevant Recommendation

no code implementations10 Jun 2022 Zihan Lin, Hui Wang, Jingshu Mao, Wayne Xin Zhao, Cheng Wang, Peng Jiang, Ji-Rong Wen

Relevant recommendation is a special recommendation scenario which provides relevant items when users express interest in one target item (e.g., click, like, and purchase).

Disentanglement Re-Ranking

Negative Sampling for Contrastive Representation Learning: A Review

no code implementations1 Jun 2022 Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen

The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.

Graph Learning Information Retrieval +2

Learning to Transfer Prompts for Text Generation

1 code implementation NAACL 2022 Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, Wayne Xin Zhao

First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.

Text Generation

Debiased Contrastive Learning of Unsupervised Sentence Representations

1 code implementation ACL 2022 Kun Zhou, Beichen Zhang, Wayne Xin Zhao, Ji-Rong Wen

In DCLR, we design an instance weighting method to punish false negatives and generate noise-based negatives to guarantee the uniformity of the representation space.

Contrastive Learning Semantic Textual Similarity +1
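
A toy version of the instance-weighting step (the similarity threshold is our own stand-in; the paper scores candidates with a complementary model): suspected false negatives receive zero weight in the contrastive denominator.

```python
# Weighted InfoNCE-style loss where near-duplicate negatives are zeroed out.
import numpy as np

def weighted_contrastive_loss(pos_sim, neg_sims, tau=0.05, threshold=0.8):
    neg_sims = np.asarray(neg_sims, dtype=float)
    w = (neg_sims < threshold).astype(float)    # 0 = punished false negative
    denom = np.exp(pos_sim / tau) + np.sum(w * np.exp(neg_sims / tau))
    return -np.log(np.exp(pos_sim / tau) / denom)

print(weighted_contrastive_loss(0.9, [0.85, 0.3, 0.1]))  # 0.85 is dropped
```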

A Thorough Examination on Zero-shot Dense Retrieval

no code implementations27 Apr 2022 Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qifei Wu, Yuchen Ding, Hua Wu, Haifeng Wang, Ji-Rong Wen

Recent years have witnessed significant advances in dense retrieval (DR) based on powerful pre-trained language models (PLMs).

Retrieval

COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval

no code implementations CVPR 2022 Haoyu Lu, Nanyi Fei, Yuqi Huo, Yizhao Gao, Zhiwu Lu, Ji-Rong Wen

Under a fair comparison setting, our COTS achieves the highest performance among all two-stream methods and comparable performance (but 10,800X faster in inference) w.r.t.

Contrastive Learning Cross-Modal Retrieval +5

Leveraging Search History for Improving Person-Job Fit

no code implementations27 Mar 2022 Yupeng Hou, Xingyu Pan, Wayne Xin Zhao, Shuqing Bian, Yang song, Tao Zhang, Ji-Rong Wen

As the core technique of online recruitment platforms, person-job fit can improve hiring efficiency by accurately matching job positions with qualified candidates.

Text Matching

Learning to Answer Questions in Dynamic Audio-Visual Scenarios

1 code implementation CVPR 2022 Guangyao Li, Yake Wei, Yapeng Tian, Chenliang Xu, Ji-Rong Wen, Di Hu

In this paper, we focus on the Audio-Visual Question Answering (AVQA) task, which aims to answer questions regarding different visual objects, sounds, and their associations in videos.

audio-visual learning Audio-visual Question Answering +4

MISC: A MIxed Strategy-Aware Model Integrating COMET for Emotional Support Conversation

1 code implementation ACL 2022 Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, Rui Yan

Applying existing methods to emotional support conversation -- which provides valuable assistance to people who are in need -- has two major limitations: (a) they generally employ a conversation-level emotion label, which is too coarse-grained to capture user's instant mental state; (b) most of them focus on expressing empathy in the response(s) rather than gradually reducing user's distress.

Neural Graph Matching for Pre-training Graph Neural Networks

1 code implementation3 Mar 2022 Yupeng Hou, Binbin Hu, Wayne Xin Zhao, Zhiqiang Zhang, Jun Zhou, Ji-Rong Wen

In this way, we can learn adaptive representations for a given graph when paired with different graphs, and both node- and graph-level characteristics are naturally considered in a single pre-training task.

Graph Matching

Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

2 code implementations COLING 2022 Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, Zhong-Yi Lu, Ji-Rong Wen

Recently, the Mixture-of-Experts (MoE) architecture has achieved remarkable success in increasing the model capacity of large-scale language models.

Language Modelling Multi-Task Learning +2

Filter-enhanced MLP is All You Need for Sequential Recommendation

2 code implementations28 Feb 2022 Kun Zhou, Hui Yu, Wayne Xin Zhao, Ji-Rong Wen

Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation.

Sequential Recommendation

Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation

no code implementations14 Feb 2022 Xu Chen, Yongfeng Zhang, Ji-Rong Wen

Beyond summarizing the previous work, we also analyze the (dis)advantages of existing evaluation methods and provide a series of guidelines on how to select them.

Explainable Recommendation Persuasiveness +1

A Model-Agnostic Causal Learning Framework for Recommendation using Search Data

1 code implementation9 Feb 2022 Zihua Si, Xueran Han, Xiao Zhang, Jun Xu, Yue Yin, Yang song, Ji-Rong Wen

In this paper, we propose a model-agnostic framework named IV4Rec that can effectively decompose the embedding vectors into these two parts, hence enhancing recommendation results.

Recommendation Systems

Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited

1 code implementation4 Feb 2022 Mingguo He, Zhewei Wei, Ji-Rong Wen

GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions.

GPR Graph Learning +1
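
For reference, a K-order polynomial spectral filter takes the form below; the Chebyshev, Monomial, and Bernstein variants differ only in the choice of basis polynomials applied to the rescaled Laplacian:

```latex
g_{\theta}(\hat{L})\, x \;\approx\; \sum_{k=0}^{K} \theta_k \, T_k(\hat{L})\, x,
\qquad
T_0(x) = 1, \quad T_1(x) = x, \quad
T_k(x) = 2x\, T_{k-1}(x) - T_{k-2}(x),
```

where \hat{L} is a rescaled graph Laplacian (e.g., 2L/\lambda_{\max} - I), and the Monomial basis simply replaces T_k(\hat{L}) with \hat{L}^k.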

Context-Tuning: Learning Contextualized Prompts for Natural Language Generation

1 code implementation COLING 2022 Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Secondly, we use continuous inverse prompting to improve the process of natural language generation by modeling an inverse generation process from output to input, making the generated text more relevant to the inputs.

Text Generation

Pretrained Language Models for Text Generation: A Survey

no code implementations14 Jan 2022 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

We begin by introducing three key aspects of applying PLMs to text generation: 1) how to encode the input into representations preserving input semantics which can be fused into PLMs; 2) how to design an effective PLM to serve as the generation model; and 3) how to effectively optimize PLMs given the reference text and to ensure that the generated texts satisfy special text properties.

Text Generation

Class-aware Sounding Objects Localization via Audiovisual Correspondence

1 code implementation22 Dec 2021 Di Hu, Yake Wei, Rui Qian, Weiyao Lin, Ruihua Song, Ji-Rong Wen

To address this problem, we propose a two-stage step-by-step learning framework to localize and recognize sounding objects in complex audiovisual scenarios using only the correspondence between audio and vision.

Object object-detection +3

Compressed Video Contrastive Learning

no code implementations NeurIPS 2021 Yuqi Huo, Mingyu Ding, Haoyu Lu, Nanyi Fei, Zhiwu Lu, Ji-Rong Wen, Ping Luo

To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa.

Contrastive Learning Representation Learning
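
For reference, the single-instance InfoNCE loss that the multi-instance variant builds on contrasts one positive key against N negatives:

```latex
\mathcal{L}_{\mathrm{InfoNCE}}
= -\log
\frac{\exp(q \cdot k^{+} / \tau)}
     {\exp(q \cdot k^{+} / \tau) + \sum_{i=1}^{N} \exp(q \cdot k_i^{-} / \tau)} ,
```

where, in the cross-guidance setup described above, the query q would be a motion-vector embedding and the keys would come from RGB frames (and vice versa).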

PSSL: Self-supervised Learning for Personalized Search with Contrastive Sampling

1 code implementation24 Nov 2021 Yujia Zhou, Zhicheng Dou, Yutao Zhu, Ji-Rong Wen

Personalized search plays a crucial role in improving user search experience owing to its ability to build user profiles based on historical behaviors.

Self-Supervised Learning Sentence

Towards artificial general intelligence via a multimodal foundation model

1 code implementation27 Oct 2021 Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen

To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained with huge multimodal data, which can be quickly adapted for various downstream cognitive tasks.

Image Classification Reading Comprehension +2

Log-Polar Space Convolution

no code implementations29 Sep 2021 Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.

Image Dataset Compression Based on Matrix Product States

no code implementations29 Sep 2021 Ze-Feng Gao, Peiyu Liu, Xiao-Hui Zhang, Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen

Based on the MPS structure, we propose a new dataset compression method that compresses datasets by filtering long-range correlation information in task-agnostic scenarios and uses dataset distillation to supplement the information in task-specific scenarios.

One Chatbot Per Person: Creating Personalized Chatbots based on Implicit User Profiles

1 code implementation20 Aug 2021 Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, Ji-Rong Wen

Specifically, leveraging the benefits of Transformer on language understanding, we train a personalized language model to construct a general user profile from the user's historical responses.

Chatbot Language Modelling

Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need

1 code implementation20 Aug 2021 Zhengyi Ma, Zhicheng Dou, Wei Xu, Xinyu Zhang, Hao Jiang, Zhao Cao, Ji-Rong Wen

In this paper, we propose to leverage the large-scale hyperlinks and anchor texts to pre-train the language model for ad-hoc retrieval.

Language Modelling Retrieval

Learning Implicit User Profiles for Personalized Retrieval-Based Chatbot

1 code implementation18 Aug 2021 Hongjin Qian, Zhicheng Dou, Yutao Zhu, Yueyuan Ma, Ji-Rong Wen

To learn a user's personalized language style, we elaborately build language models from shallow to deep using the user's historical responses; To model a user's personalized preferences, we explore the conditional relations underneath each post-response pair of the user.

Chatbot Retrieval

Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm

no code implementations12 Aug 2021 Lin Bo, Liang Pang, Gang Wang, Jun Xu, Xiuqiang He, Ji-Rong Wen

Experimental results based on three publicly available benchmarks showed that, in both implementations, Pre-Rank outperformed the underlying ranking models and achieved state-of-the-art performance.

Document Ranking Information Retrieval +3

Self-supervised Audiovisual Representation Learning for Remote Sensing Data

1 code implementation2 Aug 2021 Konrad Heidler, Lichao Mou, Di Hu, Pu Jin, Guangyao Li, Chuang Gan, Ji-Rong Wen, Xiao Xiang Zhu

By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pre-training strategies for remote sensing imagery.

Cross-Modal Retrieval Representation Learning +1

A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations

no code implementations ACL 2021 Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, Rui Yan

Recently, many studies are emerging towards building a retrieval-based dialogue system that is able to effectively leverage background knowledge (e.g., documents) when conversing with humans.

Language Modelling Retrieval +1

Log-Polar Space Convolution for Convolutional Neural Networks

1 code implementation26 Jul 2021 Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.

Curriculum Pre-Training Heterogeneous Subgraph Transformer for Top-N Recommendation

no code implementations12 Jun 2021 Hui Wang, Kun Zhou, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen

Due to the flexibility in modelling data heterogeneity, heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in top-N recommender systems, called HIN-based recommendation.

Recommendation Systems

A Joint Model for Dropped Pronoun Recovery and Conversational Discourse Parsing in Chinese Conversational Speech

1 code implementation ACL 2021 Jingxuan Yang, Kerui Xu, Jun Xu, Si Li, Sheng Gao, Jun Guo, Nianwen Xue, Ji-Rong Wen

A second (multi-relational) GCN is then applied to the utterance states to produce a discourse relation-augmented representation for the utterances, which are then fused together with token states in each utterance as input to a dropped pronoun recovery layer.

Discourse Parsing

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

1 code implementation ACL 2021 Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen

This paper presents a novel pre-trained language model (PLM) compression approach based on the matrix product operator (MPO) from quantum many-body physics.

Language Modelling Model Compression
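
For orientation, an MPO factorizes a large weight matrix into a chain of small local tensors after reshaping its row and column indices (the notation and boundary convention here are ours):

```latex
W_{(i_1 \cdots i_n),\,(j_1 \cdots j_n)}
= \operatorname{Tr}\!\left[
A^{(1)}[i_1, j_1]\, A^{(2)}[i_2, j_2] \cdots A^{(n)}[i_n, j_n]
\right],
```

where each A^{(k)}[i_k, j_k] is a small matrix whose dimensions (the bond dimensions) set the compression rate; keeping most of the chain frozen and tuning only part of it is what makes the fine-tuning lightweight.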

Pretrained Language Models for Text Generation: A Survey

no code implementations21 May 2021 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation.

Text Generation

Knowledge-based Review Generation by Coherence Enhanced Text Planning

no code implementations9 May 2021 Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen

For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.

Informativeness Knowledge Graphs +3

Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals

1 code implementation11 Jan 2021 Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen

In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network.

Knowledge Base Question Answering Semantic Parsing

TextBox: A Unified, Modularized, and Extensible Framework for Text Generation

1 code implementation ACL 2021 Junyi Li, Tianyi Tang, Gaole He, Jinhao Jiang, Xiaoxuan Hu, Puzhao Xie, Zhipeng Chen, Zhuohao Yu, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we release an open-source library, called TextBox, to provide a unified, modularized, and extensible text generation framework.

Text Generation

Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw

no code implementations1 Jan 2021 Yuqi Huo, Mingyu Ding, Haoyu Lu, Zhiwu Lu, Tao Xiang, Ji-Rong Wen, Ziyuan Huang, Jianwen Jiang, Shiwei Zhang, Mingqian Tang, Songfang Huang, Ping Luo

With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels.

Representation Learning

RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms

1 code implementation3 Nov 2020 Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, Ji-Rong Wen

In this library, we implement 73 recommendation models on 28 benchmark datasets, covering the categories of general recommendation, sequential recommendation, context-aware recommendation and knowledge-based recommendation.

Collaborative Filtering Sequential Recommendation

Scalable Graph Neural Networks via Bidirectional Propagation

1 code implementation NeurIPS 2020 Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, Ji-Rong Wen

Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.

Graph Sampling

Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields

1 code implementation Findings of the Association for Computational Linguistics 2020 Jingxuan Yang, Kerui Xu, Jun Xu, Si Li, Sheng Gao, Jun Guo, Ji-Rong Wen, Nianwen Xue

Exploratory analysis also demonstrates that the GCRF did help to capture the dependencies between pronouns in neighboring utterances, thus contributing to the performance improvements.

Machine Translation Translation

Pchatbot: A Large-Scale Dataset for Personalized Chatbot

2 code implementations28 Sep 2020 Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, Ji-Rong Wen

This enables the development of personalized dialogue models that directly learn implicit user personality from the user's dialogue history.

Chatbot

Learning to Match Jobs with Resumes from Sparse Interaction Data using Multi-View Co-Teaching Network

no code implementations25 Sep 2020 Shuqing Bian, Xu Chen, Wayne Xin Zhao, Kun Zhou, Yupeng Hou, Yang song, Tao Zhang, Ji-Rong Wen

Compared with pure text-based matching models, the proposed approach is able to learn better data representations from limited or even sparse interaction data, and is more resistant to noise in the training data.

Text Matching

Leveraging Historical Interaction Data for Improving Conversational Recommender System

no code implementations19 Aug 2020 Kun Zhou, Wayne Xin Zhao, Hui Wang, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

Most of the existing CRS methods focus on learning effective preference representations for users from conversation data alone.

Attribute Recommendation Systems

S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

2 code implementations18 Aug 2020 Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

To tackle this problem, we propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on the self-attentive neural architecture.

Attribute Self-Supervised Learning +1

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient

no code implementations25 Jul 2020 Haonan Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiaohui Yan, Ji-Rong Wen

Deep Q-learning algorithms often suffer from poor gradient estimations with an excessive variance, resulting in unstable training and poor sampling efficiency.

Q-Learning reinforcement-learning +1

Counterfactual VQA: A Cause-Effect Look at Language Bias

1 code implementation CVPR 2021 Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, Ji-Rong Wen

VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language.

counterfactual Counterfactual Inference +2

Domain-Adaptive Few-Shot Learning

1 code implementation19 Mar 2020 An Zhao, Mingyu Ding, Zhiwu Lu, Tao Xiang, Yulei Niu, Jiechao Guan, Ji-Rong Wen, Ping Luo

Existing few-shot learning (FSL) methods make the implicit assumption that the few target class samples are from the same domain as the source class samples.

Domain Adaptation Few-Shot Learning

AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning

no code implementations28 Feb 2020 Jianhong Zhang, Manli Zhang, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

To address this problem, we propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove irrelevant images; a simplified version is sketched below.

Denoising Few-Shot Learning +1
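One way to picture GCN-based label denoising: build a similarity graph over the candidate images, propagate once, and drop images that score low. The sketch below is an illustrative simplification; the graph construction, single layer, and threshold are all assumptions.

```python
# Simplified label denoising: kNN similarity graph over image features,
# one GCN-style propagation, then thresholding the predicted relevance.
import torch
import torch.nn.functional as F

def denoise(feats, weight, k=5, threshold=0.5):
    sims = F.normalize(feats, dim=-1) @ F.normalize(feats, dim=-1).t()
    topk = sims.topk(k, dim=-1)
    adj = torch.zeros_like(sims).scatter_(1, topk.indices, topk.values)
    adj = adj / adj.sum(dim=-1, keepdim=True)      # row-normalised adjacency
    scores = torch.sigmoid(adj @ feats @ weight)   # propagate, then score
    return scores.squeeze(-1) > threshold          # mask of images to keep
```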

Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting

no code implementations18 Feb 2020 Kun Zhou, Wayne Xin Zhao, Yutao Zhu, Ji-Rong Wen, Jingsong Yu

Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters.

Retrieval

Meta-Learning across Meta-Tasks for Few-Shot Learning

no code implementations11 Feb 2020 Nanyi Fei, Zhiwu Lu, Yizhao Gao, Jia Tian, Tao Xiang, Ji-Rong Wen

In this paper, we argue that the relationships between meta-tasks should be exploited and that those tasks should be sampled strategically to assist meta-learning.

Domain Adaptation Few-Shot Learning +1

Few-Shot Learning as Domain Adaptation: Algorithm and Analysis

no code implementations6 Feb 2020 Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

Specifically, armed with a set-transformer-based attention module, we construct each episode from two sub-episodes with no class overlap on the seen classes, simulating the domain shift between the seen and unseen classes.

Domain Adaptation Few-Shot Image Classification +1

SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

2 code implementations12 Dec 2019 Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen

In learning-to-rank for information retrieval, a ranking model is automatically learned from data and then used to rank the sets of retrieved documents; a minimal permutation-invariant ranker is sketched below.

Information Retrieval Learning-To-Rank +1
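Permutation invariance here comes from applying self-attention over the document set with no positional encoding; a tiny illustrative ranker (dimensions and names are assumptions, not the paper's architecture) might look like this:

```python
# Minimal permutation-invariant set ranker: every document attends to the
# whole retrieved set, then a linear head emits one relevance score each.
import torch
import torch.nn as nn

class TinySetRanker(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, docs):                    # docs: (batch, n_docs, dim)
        ctx, _ = self.attn(docs, docs, docs)    # set-wide self-attention
        return self.score(ctx).squeeze(-1)      # (batch, n_docs) scores

scores = TinySetRanker()(torch.randn(2, 10, 64))   # rank by descending score
```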

Domain Adaptation for Person-Job Fit with Transferable Deep Global Match Network

no code implementations IJCNLP 2019 Shuqing Bian, Wayne Xin Zhao, Yang Song, Tao Zhang, Ji-Rong Wen

Furthermore, we extend the match network and implement domain adaptation in three levels, sentence-level representation, sentence-level match, and global match.

Domain Adaptation Sentence

Mobile Video Action Recognition

no code implementations27 Aug 2019 Yuqi Huo, Xiaoli Xu, Yao Lu, Yulei Niu, Zhiwu Lu, Ji-Rong Wen

In addition to motion vectors, we also provide a temporal fusion method to explicitly induce the temporal context.

Action Recognition Temporal Action Localization

Personalizing Search Results Using Hierarchical RNN with Query-aware Attention

no code implementations20 Aug 2019 Songwei Ge, Zhicheng Dou, Zhengbao Jiang, Jian-Yun Nie, Ji-Rong Wen

Our analysis reveals that, after sufficient training, the attention model is able to assign higher weights to more relevant past sessions.

Attribute

Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN

no code implementations12 Jul 2019 Guoping Zhao, Mingyu Zhang, Jiajun Liu, Ji-Rong Wen

This tendency indicates that the model has indeed learned how to deceive both image retrieval systems and human eyes.

Image Classification Image Retrieval +1

Generating Long and Informative Reviews with Aspect-Aware Coarse-to-Fine Decoding

1 code implementation ACL 2019 Junyi Li, Wayne Xin Zhao, Ji-Rong Wen, Yang Song

In this paper, we propose a novel review generation model by characterizing an elaborately designed aspect-aware coarse-to-fine generation process.

Review Generation Sentence +1

A Long-Short Demands-Aware Model for Next-Item Recommendation

no code implementations12 Feb 2019 Ting Bai, Pan Du, Wayne Xin Zhao, Ji-Rong Wen, Jian-Yun Nie

Recommending the right products is the central problem in recommender systems, but those products should also be recommended at the right time to meet users' demands and maximize their value.

Recommendation Systems

Zero-Shot Learning with Sparse Attribute Propagation

no code implementations11 Dec 2018 Nanyi Fei, Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

The standard approach to ZSL requires a set of training images annotated with seen class labels, plus a semantic descriptor for the seen/unseen classes (attribute vectors being the most widely used).

Attribute Image Retrieval +1

Recursive Visual Attention in Visual Dialog

1 code implementation CVPR 2019 Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, Ji-Rong Wen

Visual dialog is a challenging vision-language task, which requires the agent to answer multi-round questions about an image.

Question Answering Visual Dialog +1

Zero and Few Shot Learning with Semantic Feature Synthesis and Competitive Learning

no code implementations19 Oct 2018 Zhiwu Lu, Jiechao Guan, Aoxue Li, Tao Xiang, An Zhao, Ji-Rong Wen

Specifically, we assume that each synthesised data point can belong to any unseen class, and the two most likely class candidates are exploited to learn a robust projection function in a competitive fashion; one hedged realisation is sketched below.

Attribute Few-Shot Learning +2
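Read literally, the competitive scheme scores each synthesised point against all unseen-class candidates and plays the top two against each other; the margin loss below is one possible way to realise that, and all tensors and the margin value are illustrative assumptions.

```python
# Competitive margin loss over the two most likely unseen-class candidates
# for each synthesised data point.
import torch
import torch.nn.functional as F

def competitive_loss(features, prototypes, margin=0.2):
    sims = F.normalize(features, dim=-1) @ F.normalize(prototypes, dim=-1).t()
    top2 = sims.topk(2, dim=-1).values     # best and runner-up per point
    gap = top2[:, 0] - top2[:, 1]          # competition between the two
    return F.relu(margin - gap).mean()     # penalise ambiguous points
```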

Transferrable Feature and Projection Learning with Class Hierarchy for Zero-Shot Learning

no code implementations19 Oct 2018 Aoxue Li, Zhiwu Lu, Jiechao Guan, Tao Xiang, Li-Wei Wang, Ji-Rong Wen

Inspired by the fact that an unseen class is not exactly 'unseen' if it belongs to the same superclass as a seen class, we propose a novel inductive ZSL model that leverages superclasses as the bridge between seen and unseen classes to narrow the domain gap.

Attribute Clustering +2

KB4Rec: A Dataset for Linking Knowledge Bases with Recommender Systems

1 code implementation30 Jul 2018 Wayne Xin Zhao, Gaole He, Hongjian Dou, Jin Huang, Siqi Ouyang, Ji-Rong Wen

Based on our linked dataset, we first perform some qualitative analysis experiments, discussing the effect of two important factors (i.e., popularity and recency) on whether an RS item can be linked to a KB entity.

Knowledge-Aware Recommendation

RUM: network Representation learning throUgh Multi-level structural information preservation

no code implementations8 Oct 2017 Yanlei Yu, Zhiwu Lu, Jiajun Liu, Guoping Zhao, Ji-Rong Wen, Kai Zheng

We propose a novel network representation learning framework called RUM (network Representation learning throUgh Multi-level structural information preservation).

Representation Learning

Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation

no code implementations5 Sep 2017 Yulei Niu, Zhiwu Lu, Ji-Rong Wen, Tao Xiang, Shih-Fu Chang

In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts, ranging from objects and scenes to abstract concepts; and 2) how to annotate an image with the optimal number of class labels.

Zero-Shot Fine-Grained Classification by Deep Feature Learning with Semantics

no code implementations4 Jul 2017 Aoxue Li, Zhiwu Lu, Li-Wei Wang, Tao Xiang, Xinqi Li, Ji-Rong Wen

In this paper, to address these two issues, we propose a two-phase framework for recognizing images from unseen fine-grained classes, i.e., zero-shot fine-grained classification.

Classification Domain Adaptation +3
