Search Results for author: Taolin Zhang

Found 12 papers, 8 papers with code

CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem

1 code implementation • 13 Dec 2023 • Qian Chen, Taolin Zhang, Dongyang Li, Xiaofeng He

The minimal feature removal problem in the post-hoc explanation area aims to identify the minimal feature set (MFS).
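Loosely speaking, the MFS is the smallest subset of input features that, kept on its own, still yields the model's original prediction. As a point of reference only (this is not CIDR's cooperative integrated dynamic refining method), a naive greedy baseline for the problem could look like the following Python sketch; the predict callable, the baseline masking value, and the masking scheme are illustrative assumptions.

    import numpy as np

    def greedy_minimal_feature_set(predict, x, baseline=0.0):
        """predict: callable mapping a 1-D feature vector to a class label.
        Returns indices of a locally minimal feature set preserving the original label."""
        target = predict(x)
        keep = set(range(len(x)))
        for i in range(len(x)):
            trial = keep - {i}
            masked = np.full(len(x), baseline, dtype=float)
            idx = list(trial)
            masked[idx] = x[idx]              # keep only the remaining features
            if predict(masked) == target:     # prediction unchanged -> feature i is removable
                keep = trial
        return sorted(keep)

Such a greedy pass is only locally minimal and order-dependent, which is part of what makes the minimal feature removal problem non-trivial.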

Learning Knowledge-Enhanced Contextual Language Representations for Domain Natural Language Understanding

no code implementations • 12 Nov 2023 • Ruyao Xu, Taolin Zhang, Chengyu Wang, Zhongjie Duan, Cen Chen, Minghui Qiu, Dawei Cheng, Xiaofeng He, Weining Qian

In the experiments, we evaluate KANGAROO on various knowledge-aware and general NLP tasks in both full and few-shot learning settings, where it significantly outperforms various KEPLM training paradigms in closed domains.

Contrastive Learning • Data Augmentation • +4

From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models

no code implementations • 12 Nov 2023 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang

Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps.

Language Modelling • Logical Reasoning

Vision-Language Pre-training with Object Contrastive Learning for 3D Scene Understanding

no code implementations • 18 May 2023 • Taolin Zhang, Sunan He, Tao Dai, Bin Chen, Zhi Wang, Shu-Tao Xia

In recent years, vision-language pre-training frameworks have made significant progress in natural language processing and computer vision, achieving remarkable performance improvements on various downstream tasks.

Contrastive Learning • Object • +2

Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training

1 code implementation • 11 Oct 2022 • Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He

Recently, knowledge-enhanced pre-trained language models (KEPLMs) have improved context-aware representations by learning from structured relations in knowledge graphs and/or linguistic knowledge from syntactic or dependency analysis.

Knowledge Graphs • Language Modelling • +2

FedEgo: Privacy-preserving Personalized Federated Graph Learning with Ego-graphs

1 code implementation • 29 Aug 2022 • Taolin Zhang, Chuan Chen, Yaomin Chang, Lin Shu, Zibin Zheng

As special information carriers containing both structure and feature information, graphs are widely used in graph mining, e.g., with Graph Neural Networks (GNNs).
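For intuition, an ego-graph is the k-hop induced subgraph around a node, which a client can treat as a self-contained training sample. The sketch below (using networkx's built-in ego_graph) only illustrates this data unit; it does not reproduce FedEgo's federated training or privacy-preserving mechanism.

    import networkx as nx

    def extract_ego_graphs(G, radius=2):
        """Map each node to its `radius`-hop ego-graph (the induced subgraph around it)."""
        return {v: nx.ego_graph(G, v, radius=radius) for v in G.nodes}

    # Toy usage on a built-in graph
    G = nx.karate_club_graph()
    ego = extract_ego_graphs(G, radius=1)
    print(len(ego[0]))   # number of nodes in node 0's 1-hop ego-graph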

Federated Learning • Graph Learning • +2

HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction

1 code implementation • Findings (ACL) 2022 • Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He

In this paper, we propose a Hierarchical Contrastive Learning framework for Distantly Supervised Relation Extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interaction.
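As a rough illustration of the contrastive building block (HiCLRE's hierarchical levels, augmentation, and negative construction follow the paper, not this snippet), an InfoNCE-style loss with in-batch negatives can be sketched in PyTorch as follows.

    import torch
    import torch.nn.functional as F

    def info_nce(anchors, positives, temperature=0.1):
        """anchors, positives: (batch, dim) representations; other rows act as in-batch negatives."""
        a = F.normalize(anchors, dim=-1)
        p = F.normalize(positives, dim=-1)
        logits = a @ p.t() / temperature                     # (batch, batch) similarity matrix
        labels = torch.arange(a.size(0), device=a.device)    # diagonal entries are the positives
        return F.cross_entropy(logits, labels)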

Contrastive Learning • Data Augmentation • +3

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.
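To make "injecting relation triples" concrete, here is a deliberately simplified sketch that pools retrieved triple embeddings and adds them to the token states of the matching entity mention. DKPLM's actual decomposable design (e.g., targeting long-tail entities and sharing the PLM encoder) is more involved; the function and tensor names here are hypothetical.

    import torch

    def inject_triples(hidden, mention_spans, triple_embeds):
        """hidden: (seq_len, dim) token states; mention_spans: list of (start, end) index pairs;
        triple_embeds: list of (n_triples, dim) tensors, one per mention."""
        fused = hidden.clone()
        for (start, end), triples in zip(mention_spans, triple_embeds):
            knowledge = triples.mean(dim=0)                  # pool the retrieved triples
            fused[start:end] = fused[start:end] + knowledge  # enrich the mention's tokens
        return fused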

Knowledge Graphs • Knowledge Probing • +3

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

2 code implementations • ACL 2021 • Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He

Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their language understanding abilities.

Language Modelling • Natural Language Inference • +1

Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources

1 code implementation • Findings (ACL) 2021 • Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang

In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to simultaneously predict answers to medical questions and the corresponding support sentences from medical information sources, in order to ensure the high reliability of the medical knowledge being served.
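A hedged sketch of what such a multi-target objective could look like: answer-span extraction and supporting-sentence classification trained jointly. The heads, loss weighting, and names below are illustrative assumptions, not the paper's exact formulation.

    import torch.nn.functional as F

    def multi_target_loss(start_logits, end_logits, start_gold, end_gold,
                          sent_logits, sent_gold, alpha=0.5):
        """start/end_logits: (batch, seq_len); start/end_gold: (batch,) token positions;
        sent_logits, sent_gold: (batch, n_sentences) support-sentence scores and 0/1 labels."""
        span_loss = F.cross_entropy(start_logits, start_gold) + F.cross_entropy(end_logits, end_gold)
        support_loss = F.binary_cross_entropy_with_logits(sent_logits, sent_gold.float())
        return span_loss + alpha * support_loss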

Machine Reading Comprehension • Multi-Task Learning • +1
