Search Results for author: Qianren Mao

Found 9 papers, 5 papers with code

Noise-injected Consistency Training and Entropy-constrained Pseudo Labeling for Semi-supervised Extractive Summarization

1 code implementation · COLING 2022 · Yiming Wang, Qianren Mao, Junnan Liu, Weifeng Jiang, Hongdong Zhu, JianXin Li

Labeling large amounts of extractive summarization data is often prohibitively expensive due to time, financial, and expertise constraints, which poses great challenges to incorporating summarization systems into practical applications.

Extractive Summarization

Variational Multi-Modal Hypergraph Attention Network for Multi-Modal Relation Extraction

no code implementations · 18 Apr 2024 · Qian Li, Cheng Ji, Shu Guo, Yong Zhao, Qianren Mao, Shangguang Wang, Yuntao Wei, JianXin Li

Existing methods are limited by their neglect of the multiple entity pairs in one sentence that share very similar contextual information (i.e., the same text and image), which increases the difficulty of the MMRE task.

Relation · Relation Extraction +1

FedCQA: Answering Complex Queries on Multi-Source Knowledge Graphs via Federated Learning

no code implementations · 22 Feb 2024 · Qi Hu, Weifeng Jiang, Haoran Li, ZiHao Wang, Jiaxin Bai, Qianren Mao, Yangqiu Song, Lixin Fan, JianXin Li

An entity can be involved in multiple knowledge graphs, so reasoning over multiple KGs and answering complex queries on multi-source KGs is important for discovering knowledge across graphs.

Complex Query Answering · Federated Learning +2

Bipartite Graph Pre-training for Unsupervised Extractive Summarization with Graph Convolutional Auto-Encoders

1 code implementation · 29 Oct 2023 · Qianren Mao, Shaobo Zhao, Jiarui Li, Xiaolei Gu, Shizhu He, Bo Li, JianXin Li

Pre-trained sentence representations are crucial for identifying significant sentences in unsupervised document extractive summarization.

Extractive Summarization · Sentence +2

Neural-Hidden-CRF: A Robust Weakly-Supervised Sequence Labeler

1 code implementation · 10 Sep 2023 · Zhijun Chen, Hailong Sun, Wanhao Zhang, Chunyi Xu, Qianren Mao, Pengpeng Chen

In Neural-Hidden-CRF, we capitalize on a powerful language model such as BERT (or another deep model) to provide rich contextual semantic knowledge to the latent ground-truth sequence, and use the hidden CRF layer to capture the internal label dependencies (see the sketch after this entry).

Language Modelling
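
A minimal sketch of the encoder-plus-CRF design the snippet describes, not the authors' released code: the toy encoder, class name, and hyperparameters below are illustrative stand-ins (in practice the encoder would be BERT or another deep model).

import torch
import torch.nn as nn

class EncoderCRF(nn.Module):
    """Toy stand-in for a deep encoder feeding a linear-chain CRF layer."""
    def __init__(self, vocab_size, num_labels, hidden=128):
        super().__init__()
        # Placeholder encoder; a real system would use BERT here.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.Linear(hidden, hidden)
        self.emissions = nn.Linear(hidden, num_labels)
        # Transition scores let the CRF capture label-to-label dependencies.
        self.transitions = nn.Parameter(torch.randn(num_labels, num_labels))

    def sequence_score(self, tokens, labels):
        # Unnormalized log-score of one label sequence: per-token emission
        # scores from the encoder plus transitions between adjacent labels.
        h = torch.relu(self.encoder(self.embed(tokens)))   # (T, hidden)
        emit = self.emissions(h)                           # (T, num_labels)
        score = emit[torch.arange(len(labels)), labels].sum()
        return score + self.transitions[labels[:-1], labels[1:]].sum()

model = EncoderCRF(vocab_size=30522, num_labels=5)
tokens = torch.randint(0, 30522, (12,))
labels = torch.randint(0, 5, (12,))
print(model.sequence_score(tokens, labels))

Training would maximize this score relative to the CRF partition function; that normalization (forward algorithm) is omitted here for brevity.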

Attend and select: A segment selective transformer for microblog hashtag generation

1 code implementation · 6 Jun 2021 · Qianren Mao, Xi Li, Bang Liu, Shu Guo, Peng Hao, JianXin Li, Lihong Wang

These tokens or phrases may originate from primary fragmentary textual pieces (e.g., segments) of the original text and may be scattered across different segments.

CNTLS: A Benchmark Dataset for Abstractive or Extractive Chinese Timeline Summarization

no code implementations · 29 May 2021 · Qianren Mao, Jiazheng Wang, Zheng Wang, Xi Li, Bo Li, JianXin Li

We meticulously analyze the corpus using well-known metrics, focusing on the style of the summaries and the complexity of the summarization task.

Information Retrieval · Retrieval +3

Noised Consistency Training for Text Summarization

no code implementations · 28 May 2021 · Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, JianXin Li

In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training, which leverages large amounts of unlabeled data to improve the performance of supervised learning over a small corpus (a minimal sketch follows this entry).

Abstractive Text Summarization
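
A rough illustration of the consistency-training objective the snippet describes, a sketch under assumptions rather than the authors' implementation; `model`, `noise_fn`, and `lam` are hypothetical names.

import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled_x, labels, unlabeled_x, noise_fn, lam=1.0):
    # Supervised cross-entropy on the small labeled corpus.
    sup_loss = F.cross_entropy(model(labeled_x), labels)

    # Consistency term on unlabeled data: predictions on a noised input
    # should match predictions on the clean input (soft targets, no grad).
    with torch.no_grad():
        clean_probs = F.softmax(model(unlabeled_x), dim=-1)
    noised_log_probs = F.log_softmax(model(noise_fn(unlabeled_x)), dim=-1)
    cons_loss = F.kl_div(noised_log_probs, clean_probs, reduction="batchmean")

    return sup_loss + lam * cons_loss

The stop-gradient on the clean predictions keeps them as fixed targets, so only the noised branch is pushed toward agreement; lam trades off the supervised and consistency terms.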
