Search Results for author: Minghui Qiu

Found 59 papers, 34 papers with code

Wasserstein Selective Transfer Learning for Cross-domain Text Mining

no code implementations • EMNLP 2021 • Lingyun Feng, Minghui Qiu, Yaliang Li, Haitao Zheng, Ying Shen

However, the source and target domains usually have different data distributions, which may lead to negative transfer.

Transfer Learning

Meta Distant Transfer Learning for Pre-trained Language Models

no code implementations • EMNLP 2021 • Chengyu Wang, Haojie Pan, Minghui Qiu, Jun Huang, Fei Yang, Yin Zhang

For tasks related to distant domains with different class label sets, PLMs may memorize non-transferable knowledge for the target domain and suffer from negative transfer.

Implicit Relations Meta-Learning +2

UnClE: Explicitly Leveraging Semantic Similarity to Reduce the Parameters of Word Embeddings

no code implementations • Findings (EMNLP) 2021 • Zhi Li, Yuchen Zhai, Chengyu Wang, Minghui Qiu, Kailiang Li, Yin Zhang

Inspired by the fact that words with similar semantics can share part of their weights, we divide word embeddings into two parts: a unique embedding and a class embedding.

Language Modelling Semantic Similarity +2
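
The split described above lends itself to a compact implementation. Below is a minimal PyTorch sketch of the unique/class embedding idea, assuming a fixed word-to-class mapping; the module name, dimensions, and mapping are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SplitEmbedding(nn.Module):
    """Word embedding = small per-word unique part + shared class part."""
    def __init__(self, vocab_size, num_classes, unique_dim, class_dim, word2class):
        super().__init__()
        # Per-word parameters: only `unique_dim` floats per word.
        self.unique = nn.Embedding(vocab_size, unique_dim)
        # Shared parameters: one `class_dim` vector per semantic class.
        self.cls = nn.Embedding(num_classes, class_dim)
        # Fixed mapping from word id to its semantic class id (illustrative).
        self.register_buffer("word2class", word2class)

    def forward(self, token_ids):
        u = self.unique(token_ids)                # (..., unique_dim)
        c = self.cls(self.word2class[token_ids])  # (..., class_dim)
        return torch.cat([u, c], dim=-1)          # (..., unique_dim + class_dim)

# Usage: 50k words share 1k class vectors, shrinking per-word storage.
emb = SplitEmbedding(50_000, 1_000, unique_dim=32, class_dim=96,
                     word2class=torch.randint(0, 1_000, (50_000,)))
vectors = emb(torch.tensor([[1, 42, 7]]))
```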

CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR

no code implementations • 22 Mar 2024 • Minghui Qiu, Yandao Huang, Lin Chen, Lu Wang, Kaishun Wu

In recent years, emerging research on mobile sensing has led to novel scenarios that enhance daily life for humans, but dynamic usage conditions often result in performance degradation when systems are deployed in real-world settings.

Active Learning Domain Adaptation +2

Learning Knowledge-Enhanced Contextual Language Representations for Domain Natural Language Understanding

no code implementations • 12 Nov 2023 • Ruyao Xu, Taolin Zhang, Chengyu Wang, Zhongjie Duan, Cen Chen, Minghui Qiu, Dawei Cheng, Xiaofeng He, Weining Qian

In the experiments, we evaluate KANGAROO over various knowledge-aware and general NLP tasks in both full and few-shot learning settings, significantly outperforming various KEPLM training paradigms in closed domains.

Contrastive Learning Data Augmentation +4

FashionLOGO: Prompting Multimodal Large Language Models for Fashion Logo Embeddings

1 code implementation • 17 Aug 2023 • Yulin Su, Min Yang, Minghui Qiu, Jing Wang, Tao Wang

Logo embedding plays a crucial role in various e-commerce applications by facilitating image retrieval or recognition, such as intellectual property protection and product search.

Image Retrieval Optical Character Recognition (OCR)

Valley: Video Assistant with Large Language model Enhanced abilitY

1 code implementation • 12 Jun 2023 • Ruipu Luo, Ziwang Zhao, Min Yang, Junwei Dong, Da Li, Pengcheng Lu, Tao Wang, Linmei Hu, Minghui Qiu, Zhongyu Wei

Large language models (LLMs), with their remarkable conversational capabilities, have demonstrated impressive performance across various applications and have emerged as formidable AI assistants.

Action Recognition Instruction Following +4

Meta-Learning Siamese Network for Few-Shot Text Classification

1 code implementation • 5 Feb 2023 • Chengcheng Han, Yuhe Wang, Yingnan Fu, Xiang Li, Minghui Qiu, Ming Gao, Aoying Zhou

Few-shot learning has been used to tackle the problem of label scarcity in text classification, and meta-learning based methods such as prototypical networks (PROTO) have been shown to be effective.

Descriptive Few-Shot Learning +2
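
For reference, here is a minimal sketch of the prototypical-network classification step that the snippet mentions as a baseline; this is the generic PROTO computation, not the paper's own meta-learning Siamese model, and the shapes and names are illustrative.

```python
import torch

def proto_logits(support, support_labels, query, num_classes):
    """Classify queries by negative squared distance to class prototypes."""
    # support: (S, d) encoded support texts; query: (Q, d) encoded queries.
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0)  # prototype = mean embedding
        for c in range(num_classes)
    ])                                            # (C, d)
    return -torch.cdist(query, prototypes) ** 2   # (Q, C), higher = closer
```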

Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding

1 code implementation • 16 Oct 2022 • Jianing Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao

In this paper, to address these problems, we introduce a novel knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework, KP-PLM.

Language Modelling Natural Language Understanding

Towards Unified Prompt Tuning for Few-shot Text Classification

1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.

Few-Shot Learning Few-Shot Text Classification +4
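
As background, here is a minimal sketch of prompt-based fine-tuning in general, using Hugging Face Transformers; the template, verbalizer, and checkpoint are illustrative assumptions, and this is not the paper's unified prompt tuning method itself.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Illustrative template and verbalizer for binary sentiment.
text = "the film is a delight to watch"
prompt = f"{text} It was {tokenizer.mask_token}."
verbalizer = {"positive": "great", "negative": "terrible"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
logits = model(**inputs).logits[0, mask_pos]  # vocab logits at the [MASK] slot

scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))            # predicted label
```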

KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering

1 code implementation • 6 May 2022 • Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao

Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs).

Contrastive Learning Extractive Question-Answering +5
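
For context, here is a minimal sketch of the standard span-selection head that the snippet says EQA models fine-tune; this is the generic head, not KECP's contrastive prompting, and the names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    """Standard EQA head: start/end logits over contextual token states."""
    def __init__(self, hidden_size):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)  # one logit each: start, end

    def forward(self, hidden_states):                # (B, T, H) from a PLM encoder
        start_logits, end_logits = self.qa_outputs(hidden_states).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)  # (B, T) each

# The answer span is the (start, end) pair maximizing start_logit + end_logit
# with start <= end, typically searched within a small maximum answer length.
```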

Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System

1 code implementation • 19 Apr 2022 • Ding Zou, Wei Wei, Xian-Ling Mao, Ziyang Wang, Minghui Qiu, Feida Zhu, Xin Cao

Different from traditional contrastive learning methods, which generate two graph views by uniform data augmentation schemes such as corruption or dropping, we comprehensively consider three different graph views for KG-aware recommendation: a global-level structural view and local-level collaborative and semantic views.

Contrastive Learning Data Augmentation +2
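
Below is a minimal sketch of a cross-view contrastive objective of the kind used between such graph views, written as a generic InfoNCE loss; the paper's exact multi-level loss differs, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z1, z2, temperature=0.2):
    """InfoNCE between two views: the same node across views is the positive."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)  # (N, d) each
    sim = z1 @ z2.t() / temperature                            # (N, N) similarities
    targets = torch.arange(z1.size(0), device=z1.device)       # diagonal = positives
    return 0.5 * (F.cross_entropy(sim, targets) +
                  F.cross_entropy(sim.t(), targets))
```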

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

1 code implementation • 1 Apr 2022 • Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang

Pre-trained Language Models (PLMs) have achieved remarkable performance for various language understanding tasks in IR systems, which require a fine-tuning process based on labeled training data.

Contrastive Learning

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve their language understanding abilities.

Knowledge Graphs Knowledge Probing +3

HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression

1 code implementation • EMNLP 2021 • Chenhe Dong, Yaliang Li, Ying Shen, Minghui Qiu

In this paper, we aim to compress PLMs with knowledge distillation, and propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.

Few-Shot Learning Knowledge Distillation +2
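
For background, here is a minimal sketch of the plain knowledge distillation objective that HRKD builds on (soft teacher targets plus hard labels); the hierarchical relational terms of HRKD are not shown, and the hyperparameters are illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-label KD (Hinton-style) combined with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)  # rescale for temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```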

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

2 code implementations • ACL 2021 • Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He

Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their abilities of language understanding.

Language Modelling Natural Language Inference +1

Global Context Enhanced Graph Neural Networks for Session-based Recommendation

2 code implementations • 9 Jun 2021 • Ziyang Wang, Wei Wei, Gao Cong, Xiao-Li Li, Xian-Ling Mao, Minghui Qiu

In GCE-GNN, we propose a novel global-level item representation learning layer, which employs a session-aware attention mechanism to recursively incorporate the neighbors' embeddings of each node on the global graph.

Representation Learning Session-Based Recommendations
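
Below is a minimal sketch of a session-aware attention aggregator in the spirit of the layer described above, assuming pre-computed session and neighbor embeddings; the scoring function and shapes are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SessionAwareNeighborAgg(nn.Module):
    """Aggregate a node's global-graph neighbors, weighted by the session."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)  # attention score per neighbor

    def forward(self, session_emb, neighbor_embs):
        # session_emb: (B, d) current-session summary; neighbor_embs: (B, K, d)
        s = session_emb.unsqueeze(1).expand_as(neighbor_embs)           # (B, K, d)
        attn = torch.softmax(self.score(torch.cat([s, neighbor_embs], -1)), dim=1)
        return (attn * neighbor_embs).sum(dim=1)                        # (B, d)
```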

Kaleido-BERT: Vision-Language Pre-training on Fashion Domain

1 code implementation • CVPR 2021 • Mingchen Zhuge, Dehong Gao, Deng-Ping Fan, Linbo Jin, Ben Chen, Haoming Zhou, Minghui Qiu, Ling Shao

We present a new vision-language (VL) pre-training model dubbed Kaleido-BERT, which introduces a novel kaleido strategy for fashion cross-modality representations from transformers.

Image Retrieval Retrieval +1

Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation

no code implementations • 20 Jan 2021 • Lingyun Feng, Minghui Qiu, Yaliang Li, Hai-Tao Zheng, Ying Shen

Although pre-trained language models such as BERT have achieved appealing performance in a wide range of natural language processing tasks, they are computationally expensive to deploy in real-time applications.

Knowledge Distillation

Learning to Expand: Reinforced Pseudo-relevance Feedback Selection for Information-seeking Conversations

no code implementations • 25 Nov 2020 • Haojie Pan, Cen Chen, Chengyu Wang, Minghui Qiu, Liu Yang, Feng Ji, Jun Huang

More specifically, we propose a reinforced selector to extract useful PRF terms to enhance response candidates and a BERT-based response ranker to rank the PRF-enhanced responses.

Exploring Global Information for Session-based Recommendation

no code implementations • 20 Nov 2020 • Ziyang Wang, Wei Wei, Gao Cong, Xiao-Li Li, Xian-Ling Mao, Minghui Qiu, Shanshan Feng

Based on BGNN, we propose a novel approach, called Session-based Recommendation with Global Information (SRGI), which infers user preferences by fully exploring global item transitions over all sessions from two different perspectives: (i) a fusion-based model (SRGI-FM), which recursively incorporates the neighbor embeddings of each node on the global graph into the learning of session-level item representations; and (ii) a constraint-based model (SRGI-CM), which treats the global-level item-transition information as a constraint to ensure that the learned item embeddings are consistent with the global item transitions.

Session-Based Recommendations

EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform for NLP Applications

2 code implementations • 18 Nov 2020 • Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Ang Wang, Cen Chen, Xianyan Jia, Yaliang Li, Jun Huang, Deng Cai, Wei Lin

The literature has witnessed the success of applying Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet it is not easy to build an easy-to-use and scalable TL toolkit for this purpose.

Compiler Optimization Conversational Question Answering +1

One-shot Text Field Labeling using Attention and Belief Propagation for Structure Information Extraction

1 code implementation • 9 Sep 2020 • Mengli Cheng, Minghui Qiu, Xing Shi, Jun Huang, Wei Lin

Existing learning-based methods for the text labeling task usually require a large number of labeled examples to train a specific model for each type of document.

One-Shot Learning Text Detection

A Comprehensive Analysis of Information Leakage in Deep Transfer Learning

no code implementations • 4 Sep 2020 • Cen Chen, Bingzhe Wu, Minghui Qiu, Li Wang, Jun Zhou

To the best of our knowledge, our study is the first to provide a thorough analysis of the information leakage issues in deep transfer learning methods and provide potential solutions to the issue.

Transfer Learning

Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources

1 code implementation • Findings (ACL) 2021 • Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang

In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences from medical information sources simultaneously, in order to ensure the high reliability of medical knowledge services.

Machine Reading Comprehension Multi-Task Learning +1

Open-Retrieval Conversational Question Answering

1 code implementation • 22 May 2020 • Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, Mohit Iyyer

We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.

Conversational Question Answering Conversational Search +2
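
Here is a structural sketch of such a retriever/reranker/reader pipeline; the component interfaces (search, rescore, extract) are assumed for illustration and are not the released system's API.

```python
def orconvqa_pipeline(question, history, retriever, reranker, reader, k=100, n=5):
    """Structural sketch: retrieve from an open collection, rerank, then read."""
    query = " ".join(history + [question])            # concatenated history + question
    passages = retriever.search(query, top_k=k)       # dense retrieval over the corpus
    passages = reranker.rescore(query, passages)[:n]  # finer-grained rescoring
    answers = [reader.extract(query, p) for p in passages]  # span extraction
    return max(answers, key=lambda a: a.score)        # best-scored answer span
```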

SueNes: A Weakly Supervised Approach to Evaluating Single-Document Summarization via Negative Sampling

1 code implementation • NAACL 2022 • Forrest Sheng Bao, Hebi Li, Ge Luo, Minghui Qiu, Yinfei Yang, Youbiao He, Cen Chen

Canonical automatic summary evaluation metrics, such as ROUGE, focus on lexical similarity, which captures neither semantics nor linguistic quality well, and require a reference summary that is costly to obtain.

Abstractive Text Summarization Document Embedding +3
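
Below is a minimal sketch of the negative-sampling idea: corrupt a good summary to create a negative training pair for a reference-free scorer. The mutation shown (random word deletion) is one simple illustrative choice, not necessarily the paper's exact scheme.

```python
import random

def make_negative(summary, drop_rate=0.3, rng=random.Random(0)):
    """Corrupt a good summary into a negative sample by deleting words."""
    words = summary.split()
    kept = [w for w in words if rng.random() > drop_rate] or words[:1]
    return " ".join(kept)

# Training pairs for a reference-free scorer: (document, summary, quality label)
doc, ref = "full article text ...", "a faithful reference summary"
pairs = [(doc, ref, 1.0), (doc, make_negative(ref), 0.0)]
```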

Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining

2 code implementations • EMNLP 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.

Few-Shot Learning Language Modelling

KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation Classification

no code implementations • 25 Feb 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.

General Classification Meta-Learning +2

IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems

1 code implementation • 3 Feb 2020 • Liu Yang, Minghui Qiu, Chen Qu, Cen Chen, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, Haiqing Chen

We also perform case studies and analysis of learned user intent and its impact on response ranking in information-seeking conversations to provide interpretation of results.

Representation Learning

AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search

1 code implementation • 13 Jan 2020 • Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, Jingren Zhou

Motivated by the necessity and benefits of task-oriented BERT compression, we propose a novel compression method, AdaBERT, that leverages differentiable Neural Architecture Search to automatically compress BERT into task-adaptive small models for specific tasks.

Knowledge Distillation Neural Architecture Search
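
For background, here is a minimal sketch of the differentiable NAS relaxation that such task-adaptive compression relies on: a soft, learnable mixture over candidate operations. AdaBERT's actual search space and details differ; this is the generic mechanism with illustrative names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Differentiable NAS: a soft mixture over candidate operations."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture weights

    def forward(self, x, tau=1.0):
        w = F.gumbel_softmax(self.alpha, tau=tau)  # relaxed one-hot over ops
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

# After search, keep only argmax(alpha) per edge to obtain the small task model.
```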

Attentive History Selection for Conversational Question Answering

2 code implementations • 26 Aug 2019 • Chen Qu, Liu Yang, Minghui Qiu, Yongfeng Zhang, Cen Chen, W. Bruce Croft, Mohit Iyyer

First, we propose a positional history answer embedding method to encode conversation history with position information using BERT in a natural way.

Conversational Question Answering Conversational Search +2
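
Below is a minimal sketch of the positional history answer embedding idea: add a learned per-token embedding that marks whether a token appeared in a history answer and how many turns ago. The shapes and names are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class HistoryAnswerEmbedding(nn.Module):
    """Per-token embedding marking history-answer membership and its
    relative turn position (0 = not in any history answer)."""
    def __init__(self, max_turns, hidden_size):
        super().__init__()
        self.turn_emb = nn.Embedding(max_turns + 1, hidden_size)

    def forward(self, token_embeddings, turn_ids):
        # token_embeddings: (B, T, H) BERT input embeddings
        # turn_ids: (B, T) how many turns ago each token was an answer token
        return token_embeddings + self.turn_emb(turn_ids)
```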

A Minimax Game for Instance based Selective Transfer Learning

no code implementations • 1 Jul 2019 • Bo Wang, Minghui Qiu, Xisen Wang, Yaliang Li, Yu Gong, Xiaoyi Zeng, Jun Huang, Bo Zheng, Deng Cai, Jingren Zhou

To the best of our knowledge, this is the first work to build a minimax game based model for selective transfer learning.

Retrieval Text Retrieval +1

BERT with History Answer Embedding for Conversational Question Answering

1 code implementation • 14 May 2019 • Chen Qu, Liu Yang, Minghui Qiu, W. Bruce Croft, Yongfeng Zhang, Mohit Iyyer

One of the major challenges to multi-turn conversational search is to model the conversation history to answer the current question.

Conversational Question Answering Conversational Search +2

A Hybrid Retrieval-Generation Neural Conversation Model

1 code implementation • 19 Apr 2019 • Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, W. Bruce Croft, Xiaodong Liu, Yelong Shen, Jingjing Liu

In this paper, we propose a hybrid neural conversation model that combines the merits of both response retrieval and generation methods.

Retrieval Text Generation +1

User Intent Prediction in Information-seeking Conversations

1 code implementation • 11 Jan 2019 • Chen Qu, Liu Yang, Bruce Croft, Yongfeng Zhang, Johanne R. Trippas, Minghui Qiu

Due to the limited communication bandwidth in conversational search, it is important for conversational assistants to accurately detect and predict user intent in information-seeking conversations.

Conversational Search Feature Engineering +1

Learning to Selectively Transfer: Reinforced Transfer Learning for Deep Text Matching

no code implementations • 30 Dec 2018 • Chen Qu, Feng Ji, Minghui Qiu, Liu Yang, Zhiyu Min, Haiqing Chen, Jun Huang, W. Bruce Croft

Specifically, the data selector "acts" on the source domain data to find a subset for optimization of the TL model, and the performance of the TL model can provide "rewards" in turn to update the selector.

Information Retrieval Natural Language Inference +5
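
Here is a minimal sketch of the act/reward loop described above, in REINFORCE style; all interfaces (selector, tl_model.train_on, dev_metric) are illustrative assumptions, not the paper's implementation.

```python
import torch

def rl_data_selection_step(selector, tl_model, source_batch, dev_metric, optim):
    """One REINFORCE step: the selector 'acts' on source data, and the TL
    model's validation improvement provides the 'reward'."""
    probs = selector(source_batch)            # (N,) keep-probabilities in [0, 1]
    actions = torch.bernoulli(probs)          # sample a source-data subset
    subset = source_batch[actions.bool()]

    before = dev_metric(tl_model)
    tl_model.train_on(subset)                 # update the TL model on the subset
    reward = dev_metric(tl_model) - before    # performance change as reward

    log_prob = (actions * probs.clamp_min(1e-8).log()
                + (1 - actions) * (1 - probs).clamp_min(1e-8).log()).sum()
    loss = -reward * log_prob                 # policy-gradient objective
    optim.zero_grad()
    loss.backward()
    optim.step()
```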

Review Helpfulness Prediction with Embedding-Gated CNN

no code implementations • 29 Aug 2018 • Cen Chen, Minghui Qiu, Yinfei Yang, Jun Zhou, Jun Huang, Xiaolong Li, Forrest Bao

Product reviews, predominantly in the form of text, significantly help consumers finalize their purchasing decisions.

Sentence

Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems

1 code implementation • 1 May 2018 • Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, Jun Huang, Haiqing Chen

Our models and research findings provide new insights on how to utilize external knowledge with deep neural models for response selection and have implications for the design of the next generation of information-seeking conversation systems.

Knowledge Distillation Retrieval +1

Analyzing and Characterizing User Intent in Information-seeking Conversations

no code implementations • 23 Apr 2018 • Chen Qu, Liu Yang, W. Bruce Croft, Johanne R. Trippas, Yongfeng Zhang, Minghui Qiu

Understanding and characterizing how people interact in information-seeking conversations is crucial in developing conversational search systems.

Conversational Search Question Answering

Modelling Domain Relationships for Transfer Learning on Retrieval-based Question Answering Systems in E-commerce

1 code implementation • 23 Nov 2017 • Jianfei Yu, Minghui Qiu, Jing Jiang, Jun Huang, Shuangyong Song, Wei Chu, Haiqing Chen

In this paper, we study transfer learning for the PI and NLI problems, aiming to propose a general framework, which can effectively and efficiently adapt the shared knowledge learned from a resource-rich source domain to a resource-poor target domain.

Chatbot Natural Language Inference +5

AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine

no code implementations • ACL 2017 • Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, Wei Chu

We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models.

Chatbot Information Retrieval +1
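
Here is a structural sketch of such a hybrid engine: rerank the IR candidates with the generation model's score, and fall back to generation below a confidence threshold. The interfaces and the threshold value are illustrative assumptions, not the deployed system's API.

```python
def hybrid_answer(question, ir_index, seq2seq, threshold=0.2):
    """Hybrid IR + Seq2Seq sketch: confident retrieval wins, else generate."""
    candidates = ir_index.retrieve(question, top_k=10)         # IR stage
    scored = [(seq2seq.score(question, c), c) for c in candidates]
    best_score, best = max(scored, key=lambda t: t[0],
                           default=(float("-inf"), None))      # rerank stage
    if best_score >= threshold:
        return best                                            # confident IR answer
    return seq2seq.generate(question)                          # generative fallback
```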
