Search Results for author: Tianyong Hao

Found 6 papers, 0 papers with code

A Self-supervised Joint Training Framework for Document Reranking

no code implementations • Findings (NAACL) 2022 • Xiaozhi Zhu, Tianyong Hao, Sijie Cheng, Fu Lee Wang, Hai Liu

Pretrained language models such as BERT have been successfully applied to a wide range of natural language processing tasks and have also achieved impressive performance on document reranking tasks.

Language Modelling Passage Ranking +1
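The snippet above only summarizes the setting; the paper's self-supervised joint training framework itself is not detailed here. For reference, the sketch below shows a generic BERT-style cross-encoder reranker using the Hugging Face transformers API. The checkpoint name and example texts are illustrative assumptions, not artifacts of this paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed public cross-encoder reranking checkpoint (not the paper's model).
model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "how do pretrained language models rerank documents?"
docs = [
    "BERT-based cross-encoders score query-document pairs jointly.",
    "The weather in Guangzhou is humid in summer.",
]

# Encode each (query, document) pair; the model emits one relevance logit per pair.
features = tokenizer([query] * len(docs), docs,
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Rerank documents by descending relevance score.
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:+.3f}  {doc}")
```

Because the cross-encoder reads query and document together, it can model token-level interactions that bi-encoder retrievers miss, which is why such models are commonly used as a second-stage reranker.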

The Construction of a Question Taxonomy and an Annotated Chinese Corpus for Diabetes Question Classification (中文糖尿病问题分类体系及标注语料库构建研究)

no code implementations • CCL 2022 • Xiaobo Qian, Wenxiu Xie, Shaopei Long, Murong Lan, Yuanyuan Mu, Tianyong Hao

As a typical chronic disease, diabetes has become one of the major global public health challenges. With the rapid development of the Internet, the large population of type 2 diabetes patients and high-risk individuals has an increasingly prominent need for professional diabetes information, and automatic diabetes question answering services play an increasingly important role in their daily health care; however, such services still face prominent problems such as the lack of fine-grained question classification. This paper designs a new diabetes question taxonomy that represents user intent, comprising 6 coarse categories and 23 fine-grained categories. Based on this taxonomy, we crawled two professional medical question answering websites to build DaCorp, a Chinese diabetes question answering corpus containing 122,732 question-answer pairs, and manually annotated 8,000 of its diabetes questions to form a fine-grained annotated diabetes dataset. In addition, to evaluate the quality of the annotated dataset, we implemented 8 mainstream baseline classification models. Experimental results show that the best classification model reaches an accuracy of 88.7%, validating the effectiveness of the annotated diabetes dataset and the proposed taxonomy. DaCorp, the annotated diabetes dataset, and the annotation guidelines have been released online and are freely available for academic research.
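The abstract reports 8 mainstream baseline classifiers but does not name them. Below is a minimal sketch of one plausible baseline (TF-IDF features plus logistic regression via scikit-learn); the toy questions and category labels are invented stand-ins for the annotated DaCorp subset, not real data from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the annotated subset: (question, coarse category) pairs.
# The real taxonomy has 6 coarse and 23 fine-grained categories; these labels are assumptions.
train_questions = [
    "What should a type 2 diabetes patient eat for breakfast?",
    "Is metformin safe to take with alcohol?",
    "What fasting glucose level counts as diabetic?",
    "How often should I check my blood sugar?",
]
train_labels = ["diet", "medication", "diagnosis", "monitoring"]

# Character n-grams sidestep word segmentation, which matters for Chinese text;
# for this English toy data they work as well.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_questions, train_labels)

print(clf.predict(["Can I eat watermelon with high blood sugar?"]))
```

A linear model over n-gram features is a standard floor for question classification benchmarks; neural baselines would replace the pipeline with a fine-tuned encoder while keeping the same train/predict interface.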

Deps-SAN: Neural Machine Translation with Dependency-Scaled Self-Attention Network

no code implementations • 23 Nov 2021 • Ru Peng, Nankai Lin, Yi Fang, Shengyi Jiang, Tianyong Hao, BoYu Chen, Junbo Zhao

However, subsequent research has pointed out that, limited by the uncontrolled nature of attention computation, NMT models require external syntax to capture deep syntactic awareness.

Machine Translation NMT +1
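The snippet does not spell out Deps-SAN's actual formulation. As an illustration of the general idea of injecting external syntax into attention, the sketch below penalizes scaled dot-product attention logits with a dependency tree distance matrix; the additive penalty form and the alpha parameter are assumptions for exposition, not the paper's method.

```python
import torch
import torch.nn.functional as F

def dependency_biased_attention(q, k, v, dep_dist, alpha=1.0):
    """Scaled dot-product attention whose logits are penalized by token
    distances in an external dependency tree.

    q, k, v: (seq_len, d) tensors; dep_dist: (seq_len, seq_len) tree distances.
    `alpha` controls how strongly syntax constrains attention (an assumption;
    the actual Deps-SAN scaling scheme may differ).
    """
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5   # standard attention logits
    logits = logits - alpha * dep_dist            # down-weight syntactically distant pairs
    weights = F.softmax(logits, dim=-1)
    return weights @ v

# Toy example: 4 tokens, 8-dim head, hand-written dependency distances.
torch.manual_seed(0)
q = k = v = torch.randn(4, 8)
dep_dist = torch.tensor([
    [0., 1., 2., 3.],
    [1., 0., 1., 2.],
    [2., 1., 0., 1.],
    [3., 2., 1., 0.],
])
out = dependency_biased_attention(q, k, v, dep_dist)
print(out.shape)  # torch.Size([4, 8])
```

Biasing the logits rather than the softmax output keeps the attention weights a proper distribution while still steering probability mass toward syntactic neighbors.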
