Search Results for author: Junfan Chen

Found 9 papers, 6 papers with code

An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition

no code implementations • ACL 2022 • Zhuoran Li, Chunming Hu, Xiaohui Guo, Junfan Chen, Wenyi Qin, Richong Zhang

In this study, based on the knowledge distillation framework and multi-task learning, we introduce a similarity metric model as an auxiliary task to improve cross-lingual NER performance on the target domain.

Cross-Lingual NER • Knowledge Distillation • +4
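A minimal sketch of how a distillation term and an auxiliary similarity-metric term might be combined in one multi-task objective, as the abstract above describes. The function names, the cosine-based form of the auxiliary task, and the `aux_weight` balance are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: combine teacher-student distillation with an auxiliary
# similarity-metric task in a single multi-task loss. All names and the
# exact form of the auxiliary loss are assumptions for illustration only.
import torch
import torch.nn.functional as F

def multi_task_kd_loss(student_logits, teacher_logits, anchor_repr, pair_repr,
                       pair_labels, temperature=2.0, aux_weight=0.5):
    """student/teacher_logits: (batch, num_tags) NER scores.
    anchor_repr/pair_repr: (batch, dim) representations to compare.
    pair_labels: (batch,) 1 if the pair should be similar, else 0."""
    # Distillation term: the student matches the teacher's softened distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Auxiliary similarity-metric term: cosine similarity trained against a
    # binary "similar / not similar" signal (an assumption of this sketch).
    sim = F.cosine_similarity(anchor_repr, pair_repr, dim=-1)
    aux = F.binary_cross_entropy_with_logits(sim, pair_labels.float())

    return kd + aux_weight * aux
```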

Adversarial Word Dilution as Text Data Augmentation in Low-Resource Regime

1 code implementation • 16 May 2023 • Junfan Chen, Richong Zhang, Zheyan Luo, Chunming Hu, Yongyi Mao

Data augmentation is widely used in text classification, especially in the low-resource regime where a few examples for each class are available during training.

Data Augmentation • text-classification • +1
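One plausible reading of "word dilution" as an embedding-level augmentation: mix each word embedding with a shared unknown-word embedding using per-word weights, and learn those weights adversarially to increase the task loss. The convex-mixing form, the sigmoid parameterisation, and the gradient-ascent step below are assumptions made for illustration, not the paper's method.

```python
# Hedged sketch of "adversarial word dilution" as embedding-level augmentation.
# The mixing form and the adversarial update are assumptions, not the paper's method.
import torch

def dilute(word_embs, unk_emb, weights):
    """word_embs: (batch, seq, dim); unk_emb: (dim,); weights in (0, 1): (batch, seq, 1)."""
    return (1.0 - weights) * word_embs + weights * unk_emb

def adversarial_dilution_weights(word_embs, unk_emb, loss_fn, steps=3, lr=0.1):
    """Pick dilution weights that *increase* the task loss (gradient ascent)."""
    logits_w = torch.zeros(word_embs.shape[:2] + (1,), requires_grad=True)
    for _ in range(steps):
        weights = torch.sigmoid(logits_w)            # keep weights in (0, 1)
        loss = loss_fn(dilute(word_embs, unk_emb, weights))
        grad, = torch.autograd.grad(loss, logits_w)
        logits_w = (logits_w + lr * grad.sign()).detach().requires_grad_(True)
    return torch.sigmoid(logits_w).detach()
```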

ContrastNet: A Contrastive Learning Framework for Few-Shot Text Classification

1 code implementation • 16 May 2023 • Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu

Few-shot text classification has recently been promoted by the meta-learning paradigm which aims to identify target classes with knowledge transferred from source classes with sets of small tasks named episodes.

Contrastive Learning • Few-Shot Text Classification • +2
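A short sketch of the standard N-way K-shot "episode" sampling that the abstract refers to; ContrastNet itself adds contrastive losses on top of this setup, which are not reproduced here. Function and variable names are illustrative.

```python
# Hedged sketch of N-way K-shot episode sampling for few-shot text classification.
import random
from collections import defaultdict

def sample_episode(texts, labels, n_way=5, k_shot=5, q_queries=5):
    """Return (support, query) lists of (text, episode_label) pairs."""
    by_class = defaultdict(list)
    for t, y in zip(texts, labels):
        by_class[y].append(t)

    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for ep_label, cls in enumerate(classes):
        picked = random.sample(by_class[cls], k_shot + q_queries)
        support += [(t, ep_label) for t in picked[:k_shot]]
        query += [(t, ep_label) for t in picked[k_shot:]]
    return support, query
```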

A Hierarchical N-Gram Framework for Zero-Shot Link Prediction

1 code implementation • 16 Apr 2022 • Mingchen Li, Junfan Chen, Samuel Mensah, Nikolaos Aletras, Xiulong Yang, Yang Ye

Thus, in this paper, we propose a Hierarchical N-Gram framework for Zero-Shot Link Prediction (HNZSLP), which considers the dependencies among character n-grams of the relation surface name for ZSLP.

Knowledge Graphs • Link Prediction • +1
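A minimal sketch of enumerating the character n-grams of a relation surface name, the raw units the abstract says HNZSLP reasons over; how the paper organises them into a hierarchy is not reproduced, and the whitespace handling below is an assumption.

```python
# Hedged sketch: enumerate character n-grams of a relation surface name.
def char_ngrams(relation_name, max_n=4):
    s = relation_name.replace(" ", "_").lower()
    grams = {}
    for n in range(1, max_n + 1):
        grams[n] = [s[i:i + n] for i in range(len(s) - n + 1)]
    return grams

# Example: 3-grams of the relation surface name "birth place".
print(char_ngrams("birth place", max_n=3)[3][:5])  # ['bir', 'irt', 'rth', 'th_', 'h_p']
```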

Neural Dialogue State Tracking with Temporally Expressive Networks

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu

Existing DST models either ignore temporal feature dependencies across dialogue turns or fail to explicitly model temporal state dependencies in a dialogue.

Dialogue State Tracking
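A deliberately generic illustration of what "temporal feature dependencies across dialogue turns" means in code: a recurrent state is carried from turn to turn instead of predicting each turn's state independently. The GRU-based tracker below is a stand-in under that assumption, not the paper's network.

```python
# Generic stand-in for modelling temporal dependencies across dialogue turns.
import torch
import torch.nn as nn

class TurnLevelTracker(nn.Module):
    def __init__(self, turn_dim=256, hidden_dim=256, num_slot_values=30):
        super().__init__()
        self.turn_rnn = nn.GRU(turn_dim, hidden_dim, batch_first=True)
        self.slot_head = nn.Linear(hidden_dim, num_slot_values)

    def forward(self, turn_features):
        """turn_features: (batch, num_turns, turn_dim) encoded dialogue turns."""
        hidden_per_turn, _ = self.turn_rnn(turn_features)   # state carried across turns
        return self.slot_head(hidden_per_turn)              # per-turn slot-value scores
```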

Parallel Interactive Networks for Multi-Domain Dialogue State Generation

1 code implementation • EMNLP 2020 • Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu

In this study, we argue that the incorporation of these dependencies is crucial for the design of MDST and propose Parallel Interactive Networks (PIN) to model these dependencies.

Dialogue State Tracking • Multi-domain Dialogue State Tracking
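A loose sketch of the idea named by "parallel interactive": domain-specific branches run in parallel and exchange information through a shared interaction step. Every detail below (the per-domain linear branches, the attention-based interaction, the names) is an assumption for illustration, not the PIN architecture.

```python
# Loose illustration of parallel per-domain branches with a shared interaction step.
import torch
import torch.nn as nn

class ParallelDomainBranches(nn.Module):
    def __init__(self, domains, in_dim=256, hidden_dim=256):
        super().__init__()
        self.branches = nn.ModuleDict({d: nn.Linear(in_dim, hidden_dim) for d in domains})
        self.interaction = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)

    def forward(self, dialogue_repr):
        """dialogue_repr: (batch, in_dim) -> dict of per-domain interacted states."""
        per_domain = torch.stack(
            [branch(dialogue_repr) for branch in self.branches.values()], dim=1
        )  # (batch, num_domains, hidden_dim)
        interacted, _ = self.interaction(per_domain, per_domain, per_domain)
        return dict(zip(self.branches.keys(), interacted.unbind(dim=1)))
```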

Uncover the Ground-Truth Relations in Distant Supervision: A Neural Expectation-Maximization Framework

1 code implementation • IJCNLP 2019 • Junfan Chen, Richong Zhang, Yongyi Mao, Hongyu Guo, Jie Xu

Distant supervision for relation extraction enables one to effectively acquire structured relations out of very large text corpora with less human effort.

Denoising • Relation • +1
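A skeleton of the generic expectation-maximization loop the title points to: alternately refit a neural relation classifier on soft sentence weights (M-step) and re-estimate how likely each distantly labelled sentence is to truly express its relation (E-step). This is the generic neural-EM pattern, not the paper's exact model; all names are illustrative.

```python
# Generic neural-EM skeleton for denoising distantly supervised relation labels.
import torch

def neural_em(classifier, optimizer, sentences, bag_relations, rounds=5, m_steps=100):
    """sentences: (num_sents, feat_dim) features; bag_relations: (num_sents,) distant labels."""
    soft_labels = torch.ones(len(sentences))  # initially trust every distant label
    for _ in range(rounds):
        # M-step: refit the classifier, weighting each sentence by how much
        # we currently believe its distant label.
        for _ in range(m_steps):
            optimizer.zero_grad()
            nll = torch.nn.functional.cross_entropy(
                classifier(sentences), bag_relations, reduction="none"
            )
            loss = (soft_labels * nll).mean()
            loss.backward()
            optimizer.step()

        # E-step: re-estimate, per sentence, the probability that it really
        # expresses its distant relation (an assumption of this sketch).
        with torch.no_grad():
            probs = classifier(sentences).softmax(dim=-1)
            soft_labels = probs[torch.arange(len(sentences)), bag_relations]
    return classifier
```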
