Search Results for author: Xiaofeng He

Found 27 papers, 10 papers with code

PE: A Poincare Explanation Method for Fast Text Hierarchy Generation

1 code implementation • 25 Mar 2024 • Qian Chen, Xiaofeng He, Hongzhao Li, Hongyu Yi

The black-box nature of deep learning models in NLP hinders their widespread application.
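The method builds text hierarchies in hyperbolic (Poincaré) space. As background only, a minimal sketch of the standard Poincaré-ball distance metric (the well-known formula, not the PE method itself; function and argument names are illustrative) is:

```python
import math

def poincare_distance(u, v, eps=1e-9):
    """Standard distance between two points inside the unit Poincare ball.

    Both u and v must have Euclidean norm < 1. The distance grows rapidly
    near the boundary, which is what makes the space suited to hierarchies.
    """
    def sq_norm(x):
        return sum(t * t for t in x)

    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1 - sq_norm(u)) * (1 - sq_norm(v))
    return math.acosh(1 + 2 * diff / max(denom, eps))
```

Points near the ball's boundary act like deep leaves of a tree: moving the same Euclidean step near the boundary costs far more hyperbolic distance than near the origin.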

DK-SLAM: Monocular Visual SLAM with Deep Keypoints Adaptive Learning, Tracking and Loop-Closing

no code implementations • 17 Jan 2024 • Hao Qu, Lilian Zhang, Jun Mao, Junbo Tie, Xiaofeng He, Xiaoping Hu, Yifei Shi, Changhao Chen

Unreliable feature extraction and matching with handcrafted features undermine the performance of visual SLAM in complex real-world scenarios.

Pose Estimation

CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem

1 code implementation • 13 Dec 2023 • Qian Chen, Taolin Zhang, Dongyang Li, Xiaofeng He

The minimal feature removal problem in the post-hoc explanation area aims to identify the minimal feature set (MFS).

From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models

no code implementations • 12 Nov 2023 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang

Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps.

Language Modelling • Logical Reasoning

Learning Knowledge-Enhanced Contextual Language Representations for Domain Natural Language Understanding

no code implementations • 12 Nov 2023 • Ruyao Xu, Taolin Zhang, Chengyu Wang, Zhongjie Duan, Cen Chen, Minghui Qiu, Dawei Cheng, Xiaofeng He, Weining Qian

In the experiments, we evaluate KANGAROO over various knowledge-aware and general NLP tasks in both full and few-shot learning settings, significantly outperforming various KEPLM training paradigms in closed domains.

Contrastive Learning • Data Augmentation +4

GeoGLUE: A GeoGraphic Language Understanding Evaluation Benchmark

no code implementations • 11 May 2023 • Dongyang Li, Ruixue Ding, Qiang Zhang, Zheng Li, Boli Chen, Pengjun Xie, Yao Xu, Xin Li, Ning Guo, Fei Huang, Xiaofeng He

With the rapid development of geographic applications, automated and intelligent models are essential for handling the large volume of geographic information.

Entity Alignment • Natural Language Understanding

SelfOdom: Self-supervised Egomotion and Depth Learning via Bi-directional Coarse-to-Fine Scale Recovery

no code implementations • 16 Nov 2022 • Hao Qu, Lilian Zhang, Xiaoping Hu, Xiaofeng He, Xianfei Pan, Changhao Chen

To address this, we propose SelfOdom, a self-supervised dual-network framework that can robustly and consistently learn and generate pose and depth estimates at global scale from monocular images.

Autonomous Driving • Self-Learning +1

Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training

1 code implementation • 11 Oct 2022 • Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He

Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis.

Knowledge Graphs • Language Modelling +2

HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction

1 code implementation • Findings (ACL) 2022 • Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He

In this paper, we propose a Hierarchical Contrastive Learning framework for Distantly Supervised relation extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interactions.

Contrastive Learning • Data Augmentation +3
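HiCLRE's exact multi-level objective is not reproduced here; as a generic illustration of the contrastive-learning component such frameworks build on, a minimal InfoNCE-style loss over cosine similarities (all names and the temperature value are illustrative) is:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss: pull the positive pair together,
    push the negatives away, via a softmax over scaled cosine similarities."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    # Numerically stable -log softmax of the positive-pair logit.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]
```

The loss is near zero when the anchor is much closer to the positive than to any negative, and grows when a negative is more similar to the anchor than the positive is.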

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.

Knowledge Graphs • Knowledge Probing +3

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

2 code implementations • ACL 2021 • Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He

Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their abilities of language understanding.

Language Modelling • Natural Language Inference +1

TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning

2 code implementations • 17 May 2021 • Lu Wang, Xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei Zhang, Xiaofeng He, Le Song, Jingren Zhou, Hongxia Yang

Secondly, on top of the proposed graph transformer, we introduce a two-stream encoder that separately extracts representations from temporal neighborhoods associated with the two interaction nodes and then utilizes a co-attentional transformer to model inter-dependencies at a semantic level.

Contrastive Learning • Graph Learning +2

Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources

1 code implementation • Findings (ACL) 2021 • Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang

In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences from medical information sources simultaneously, in order to ensure the high reliability of medical knowledge serving.

Machine Reading Comprehension • Multi-Task Learning +1

Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining

2 code implementations • EMNLP 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.

Few-Shot Learning • Language Modelling

KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation Classification

no code implementations • 25 Feb 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.

General Classification • Meta-Learning +2

HMRL: Hyper-Meta Learning for Sparse Reward Reinforcement Learning Problem

no code implementations • 11 Feb 2020 • Yun Hua, Xiangfeng Wang, Bo Jin, Wenhao Li, Junchi Yan, Xiaofeng He, Hongyuan Zha

Despite the success of existing meta reinforcement learning methods, they still struggle to learn an effective meta policy for RL problems with sparse rewards.

Meta-Learning • Meta Reinforcement Learning +2

Learning Robust Representations with Graph Denoising Policy Network

no code implementations • 4 Oct 2019 • Lu Wang, Wenchao Yu, Wei Wang, Wei Cheng, Wei Zhang, Hongyuan Zha, Xiaofeng He, Haifeng Chen

Graph representation learning, aiming to learn low-dimensional representations which capture the geometric dependencies between nodes in the original graph, has gained increasing popularity in a variety of graph analysis tasks, including node classification and link prediction.

Denoising • Graph Representation Learning +2

Supervised Reinforcement Learning with Recurrent Neural Network for Dynamic Treatment Recommendation

no code implementations • 4 Jul 2018 • Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha

Prior relevant studies recommend treatments using either supervised learning (e.g., matching the indicator signal that denotes doctor prescriptions) or reinforcement learning (e.g., maximizing the evaluation signal that indicates cumulative reward from survival rates).

Recommendation Systems • Reinforcement Learning +1

Learning Fine-grained Relations from Chinese User Generated Categories

no code implementations • EMNLP 2017 • Chengyu Wang, Yan Fan, Xiaofeng He, Aoying Zhou

User generated categories (UGCs) are short texts that reflect how people describe and organize entities, expressing rich semantic relations implicitly.

Graph Mining • Relation Extraction +1

Transductive Non-linear Learning for Chinese Hypernym Prediction

no code implementations • ACL 2017 • Chengyu Wang, Junchi Yan, Aoying Zhou, Xiaofeng He

Finding the correct hypernyms for entities is essential for taxonomy learning, fine-grained entity categorization, query understanding, etc.

Relation Extraction • Transductive Learning

Chinese Hypernym-Hyponym Extraction from User Generated Categories

no code implementations • COLING 2016 • Chengyu Wang, Xiaofeng He

Hypernym-hyponym ("is-a") relations are key components of taxonomies, object hierarchies and knowledge graphs.

Knowledge Graphs • Machine Translation +5
