Search Results for author: Deming Ye

Found 10 papers, 7 papers with code

Plug-and-Play Knowledge Injection for Pre-trained Language Models

1 code implementation • 28 May 2023 • Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.
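The abstract above describes injecting knowledge into a frozen downstream model via map-tuning, i.e. training only a small mapping from external knowledge embeddings into the PLM's input space. A minimal sketch of that idea, with illustrative dimensions and names (the paper's actual architecture may differ):

```python
import numpy as np

# Hypothetical setup: frozen KG entity embeddings of size d_kg are mapped
# into the PLM's word-embedding space of size d_plm by a small trainable
# linear map -- the only component updated in "map-tuning".
rng = np.random.default_rng(0)
d_kg, d_plm = 100, 768

W = rng.normal(scale=0.02, size=(d_kg, d_plm))  # trainable mapping weights
b = np.zeros(d_plm)                             # trainable mapping bias

def map_entity_embedding(e_kg):
    """Project a frozen KG entity embedding into the PLM input space,
    so it can be plugged into the input sequence of a frozen model."""
    return e_kg @ W + b

e = rng.normal(size=d_kg)
mapped = map_entity_embedding(e)   # shape (d_plm,), ready to inject
```

The plug-and-play property comes from the fact that only `W` and `b` are trained per knowledge source, while the downstream PLM stays untouched.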

UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models

no code implementations • 2 May 2023 • Deming Ye, Yankai Lin, Zhengyan Zhang, Maosong Sun

In this paper, we propose UNTER, a UNified knowledge inTERface that provides a unified perspective for exploiting both structured and unstructured knowledge.

Entity Typing • Named Entity Recognition +2

A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models

1 code implementation • ACL 2022 • Deming Ye, Yankai Lin, Peng Li, Maosong Sun, Zhiyuan Liu

Pre-trained language models (PLMs) cannot reliably recall the rich factual knowledge about entities exhibited in large-scale corpora, especially rare entities.
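The pluggable lookup table described above builds entity representations that can be inserted into a PLM's input without retraining it. A hedged sketch of one plausible construction, assuming we already have the PLM's contextual output vectors for each mention of an entity (names, shapes, and the aggregation choice are illustrative):

```python
import numpy as np

def build_entity_embedding(occurrence_vectors, scale=1.0):
    """Aggregate contextual representations of an entity's corpus mentions
    into a single vector that can be plugged into the input embeddings.
    Mean-pooling plus normalization is one simple aggregation choice."""
    agg = np.mean(occurrence_vectors, axis=0)
    return scale * agg / np.linalg.norm(agg)

hidden = 768  # illustrative hidden size
occ = np.random.default_rng(1).normal(size=(5, hidden))  # 5 mentions
emb = build_entity_embedding(occ)  # one lookup-table entry
```

Because the table is built offline from the frozen model's own outputs, rare entities get usable embeddings without any further fine-tuning.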

Domain Adaptation

Packed Levitated Marker for Entity and Relation Extraction

2 code implementations • ACL 2022 • Deming Ye, Yankai Lin, Peng Li, Maosong Sun

In particular, we propose a neighborhood-oriented packing strategy, which considers the neighbor spans integrally to better model the entity boundary information.
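The levitated-marker idea behind this abstract appends a pair of marker tokens per candidate span after the text, with each marker sharing the position id of the span boundary it refers to, so that neighboring spans can be packed and scored in one forward pass. A minimal illustrative sketch (marker symbols and the packing interface are assumptions, not the paper's exact implementation):

```python
# Hedged sketch of packed levitated span markers.
def pack_levitated_markers(num_tokens, spans):
    """Return (marker slots, position ids) for a text of `num_tokens`
    tokens plus appended [S]/[E] marker pairs, one pair per span.
    Each marker reuses the position id of the boundary it marks, so
    the markers "levitate" over the text without shifting it."""
    position_ids = list(range(num_tokens))
    marker_slots = []
    for start, end in spans:  # neighboring candidate spans packed together
        marker_slots.append(("[S]", start))
        marker_slots.append(("[E]", end))
        position_ids += [start, end]  # markers share boundary positions
    return marker_slots, position_ids

# Two neighboring candidate spans over a 6-token sentence.
slots, pos = pack_levitated_markers(6, [(1, 2), (2, 4)])
```

Packing several neighboring spans into one sequence is what lets the model consider them integrally when deciding entity boundaries.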

Joint Entity and Relation Extraction • Relation

TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference

1 code implementation • NAACL 2021 • Deming Ye, Yankai Lin, Yufei Huang, Maosong Sun

To address this issue, we propose TR-BERT, a dynamic token reduction approach that accelerates PLM inference by flexibly adapting the number of layers each token passes through, avoiding redundant computation.
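The core mechanism sketched in this abstract is dropping unimportant tokens partway through the network so that deeper layers process a shorter sequence. A hedged illustration of one reduction step, where the importance scores and keep ratio stand in for TR-BERT's learned selection policy:

```python
import numpy as np

def reduce_tokens(hidden_states, importance, keep_ratio=0.5):
    """Keep only the top-k tokens by importance score, preserving their
    original order; subsequent layers then run on the shorter sequence."""
    seq_len = hidden_states.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])  # top-k, original order
    return hidden_states[keep], keep

rng = np.random.default_rng(2)
h = rng.normal(size=(10, 768))       # 10 tokens after an early layer
scores = rng.random(10)              # stand-in importance scores
h_small, kept = reduce_tokens(h, scores, keep_ratio=0.4)  # 4 tokens remain
```

Since self-attention cost grows quadratically with sequence length, halving the tokens at an early layer saves most of the remaining computation.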

Token Reduction

Coreferential Reasoning Learning for Language Representation

2 code implementations • EMNLP 2020 • Deming Ye, Yankai Lin, Jiaju Du, Zheng-Hao Liu, Peng Li, Maosong Sun, Zhiyuan Liu

Language representation models such as BERT can effectively capture contextual semantic information from plain text, and have been shown to achieve promising results on many downstream NLP tasks with appropriate fine-tuning.

Relation Extraction

Multi-Paragraph Reasoning with Knowledge-enhanced Graph Neural Network

no code implementations • 6 Nov 2019 • Deming Ye, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Maosong Sun

Multi-paragraph reasoning is indispensable for open-domain question answering (OpenQA), yet it receives little attention in current OpenQA systems.

Open-Domain Question Answering

DocRED: A Large-Scale Document-Level Relation Extraction Dataset

4 code implementations • ACL 2019 • Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun

Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs.

Document-level Relation Extraction • Relation +1

Rethinking the Form of Latent States in Image Captioning

no code implementations • ECCV 2018 • Bo Dai, Deming Ye, Dahua Lin

Taking advantage of this, we visually reveal the internal dynamics in the process of caption generation, as well as the connections between input visual domain and output linguistic domain.

Caption Generation • Image Captioning
