Search Results for author: Philip Yu

Found 19 papers, 5 papers with code

Beyond the Known: Novel Class Discovery for Open-world Graph Learning

no code implementations29 Mar 2024 Yucheng Jin, Yun Xiong, Juncheng Fang, Xixi Wu, Dongxiao He, Xing Jia, Bingchen Zhao, Philip Yu

Inter-class correlations are subsequently eliminated by the prototypical attention network, leading to distinctive representations for different classes.

Graph Learning Node Classification +1
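The paper's prototypical attention network is not reproduced here, but the underlying notion of a class prototype can be sketched as the mean of a class's embeddings (a minimal illustration with toy 2-d vectors; all names are hypothetical):

```python
def class_prototype(embeddings):
    """Mean vector of a class's embeddings -- the usual notion of a prototype."""
    dim = len(embeddings[0])
    n = len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(dim)]

# Two toy classes in a 2-d embedding space.
class_a = [[1.0, 0.0], [3.0, 0.0]]
class_b = [[0.0, 2.0], [0.0, 4.0]]

proto_a = class_prototype(class_a)  # mean of class A's vectors
proto_b = class_prototype(class_b)  # mean of class B's vectors
```

Well-separated prototypes are what "distinctive representations for different classes" amounts to in this view.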

Motif-aware Riemannian Graph Neural Network with Generative-Contrastive Learning

1 code implementation2 Jan 2024 Li Sun, Zhenhao Huang, Zixi Wang, Feiyang Wang, Hao Peng, Philip Yu

In light of the issues above, we propose the problem of \emph{Motif-aware Riemannian Graph Representation Learning}, seeking a numerically stable encoder to capture motif regularity in a diverse-curvature manifold without labels.

Contrastive Learning Graph Representation Learning

A Counterfactual Fair Model for Longitudinal Electronic Health Records via Deconfounder

no code implementations22 Aug 2023 Zheng Liu, Xiaohan Li, Philip Yu

The fairness issue of clinical data modeling, especially on Electronic Health Records (EHRs), is of utmost importance due to EHR's complex latent structure and potential selection bias.

counterfactual Fairness +1

Mitigating Frequency Bias in Next-Basket Recommendation via Deconfounders

no code implementations16 Nov 2022 Xiaohan Li, Zheng Liu, Luyi Ma, Kaushiki Nag, Stephen Guo, Philip Yu, Kannan Achan

Considering the influence of historical purchases on users' future interests, the user and item representations can be viewed as unobserved confounders in the causal diagram.

Causal Inference Fairness +2

Continuous Prompt Tuning Based Textual Entailment Model for E-commerce Entity Typing

1 code implementation4 Nov 2022 Yibo Wang, Congying Xia, Guan Wang, Philip Yu

To handle new entities in product titles and to address the distinctive language style of product titles in the e-commerce domain, we propose a textual entailment model with continuous-prompt-tuning-based hypotheses and fusion embeddings for e-commerce entity typing.

Entity Typing Natural Language Inference
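The entailment formulation can be sketched as casting each candidate type into a hypothesis and pairing it with the product title as premise (a generic sketch with hypothetical names; the paper's continuous prompt tuning and fusion embeddings are not shown):

```python
def build_entailment_pairs(product_title, entity, candidate_types):
    """Cast entity typing as NLI: one (premise, hypothesis) pair per
    candidate type. A real model would score each pair; here we only
    build the inputs."""
    premise = product_title
    return [(premise, f"{entity} is a {t}.") for t in candidate_types]

pairs = build_entailment_pairs(
    "Apple iPhone 15 Pro Max 256GB",
    "iPhone 15 Pro Max",
    ["smartphone", "laptop", "tablet"],
)
# One pair per candidate type; the type whose hypothesis is most
# strongly entailed by the title would be predicted.
```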

Mitigating Health Disparities in EHR via Deconfounder

no code implementations28 Oct 2022 Zheng Liu, Xiaohan Li, Philip Yu

First, these methods usually entail a trade-off between the model's performance and its fairness.

Attribute Decision Making +1

Pseudo Siamese Network for Few-shot Intent Generation

no code implementations3 May 2021 Congying Xia, Caiming Xiong, Philip Yu

PSN consists of two identical subnetworks with the same structure but different weights: an action network and an object network.

Intent Detection Object +1
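The "same structure but different weights" idea can be sketched with two tiny linear subnetworks built from one constructor but seeded independently (a minimal pure-Python illustration, not the paper's PSN; all names are hypothetical):

```python
import random

def make_linear_net(in_dim, out_dim, seed):
    """One tiny linear 'subnetwork': identical structure across calls,
    independent weights per seed."""
    rng = random.Random(seed)
    return [[rng.uniform(-1, 1) for _ in range(in_dim)] for _ in range(out_dim)]

def forward(weights, x):
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# Pseudo-siamese: same architecture, different weights (a true siamese
# network would instead share one weight set between both branches).
action_net = make_linear_net(4, 2, seed=0)
object_net = make_linear_net(4, 2, seed=1)

x = [0.5, -0.2, 0.1, 0.9]
action_emb = forward(action_net, x)
object_emb = forward(object_net, x)
```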

Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System

1 code implementation NAACL 2021 Congying Xia, Wenpeng Yin, Yihao Feng, Philip Yu

Two major challenges exist in this new task: (i) For the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes; (ii) For the performance, the system should perform well on new classes without much loss on preceding classes.

Few-Shot Text Classification General Classification +4
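One simple way to satisfy challenge (i), learning new classes round by round without re-training on preceding classes, is a nearest-prototype classifier that stores only one mean vector per class. This is a generic sketch of that idea, not the paper's system:

```python
class NearestPrototypeClassifier:
    """Adds classes round by round; earlier classes are never revisited."""

    def __init__(self):
        self.prototypes = {}  # label -> mean embedding

    def add_class(self, label, examples):
        """Store the class prototype; no re-training of old classes."""
        dim = len(examples[0])
        n = len(examples)
        self.prototypes[label] = [sum(e[i] for e in examples) / n
                                  for i in range(dim)]

    def predict(self, x):
        """Nearest prototype by squared Euclidean distance."""
        def dist(p):
            return sum((a - b) ** 2 for a, b in zip(p, x))
        return min(self.prototypes, key=lambda lbl: dist(self.prototypes[lbl]))

clf = NearestPrototypeClassifier()
clf.add_class("greeting", [[1.0, 0.0], [0.8, 0.2]])  # round 1
clf.add_class("farewell", [[0.0, 1.0], [0.2, 0.8]])  # round 2, no retraining
```

Because old prototypes are left untouched when a new round arrives, performance on preceding classes is preserved by construction (challenge (ii)), at the cost of the representational limits of a single mean vector per class.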

Enriching Non-Autoregressive Transformer with Syntactic and Semantic Structures for Neural Machine Translation

no code implementations EACL 2021 Ye Liu, Yao Wan, JianGuo Zhang, Wenting Zhao, Philip Yu

In this paper, we claim that the syntactic and semantic structures of natural language are critical for non-autoregressive machine translation and can further improve its performance.

Machine Translation Translation

Improving Medical NLI Using Context-Aware Domain Knowledge

no code implementations Joint Conference on Lexical and Computational Semantics 2020 Shaika Chowdhury, Philip Yu, Yuan Luo

Domain knowledge is important to understand both the lexical and relational associations of words in natural language text, especially for domain-specific tasks like Natural Language Inference (NLI) in the medical domain, where due to the lack of a large annotated dataset such knowledge cannot be implicitly learned during training.

Natural Language Inference

Multi-label Zero-shot Classification by Learning to Transfer from External Knowledge

no code implementations30 Jul 2020 He Huang, Yuanwei Chen, Wei Tang, Wenhao Zheng, Qing-Guo Chen, Yao Hu, Philip Yu

On the other hand, there is a large semantic gap between seen and unseen classes in the existing multi-label classification datasets.

Classification General Classification +3

CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot Intent Detection

no code implementations4 Apr 2020 Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, Philip Yu

In this paper, we formulate a more realistic and difficult problem setup for the intent detection task in natural language understanding, namely Generalized Few-Shot Intent Detection (GFSID).

Conditional Text Generation Intent Detection +3

MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing

no code implementations COLING 2020 Tao Zhang, Congying Xia, Chun-Ta Lu, Philip Yu

Named entity typing (NET) is the classification task of assigning given semantic types to an entity mention in its context.

Entity Typing

Learn to Forget: Machine Unlearning via Neuron Masking

no code implementations24 Mar 2020 Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren

To this end, machine unlearning has become a popular research topic, which allows users to eliminate memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method.

BIG-bench Machine Learning Federated Learning +2
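The paper's exact forgetting-rate formula is not reproduced here. As a loose, hypothetical illustration only: one natural way to quantify forgetting is the fraction of deleted samples that a membership oracle flagged as training members before unlearning but no longer flags afterwards:

```python
def toy_forgetting_rate(member_before, member_after):
    """Fraction of samples that flip from 'member' to 'non-member'
    after unlearning, among those flagged as members before.
    A loose illustration, NOT the paper's exact definition."""
    flagged = [i for i, m in enumerate(member_before) if m]
    if not flagged:
        return 0.0
    forgotten = sum(1 for i in flagged if not member_after[i])
    return forgotten / len(flagged)

# 4 deleted samples: all inferred as members before unlearning,
# 3 of them no longer inferred as members afterwards.
rate = toy_forgetting_rate([True, True, True, True],
                           [False, False, False, True])
```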

Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT

no code implementations27 Feb 2020 Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong

A growing body of literature claims that deep neural networks are brittle when dealing with maliciously crafted adversarial examples.

Question Answering Sentence +1

Multi-Grained Named Entity Recognition

1 code implementation ACL 2019 Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, Philip Yu

This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested.

Multi-Grained Named Entity Recognition named-entity-recognition +5
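The span-enumeration view of multi-grained NER, in which mentions may be non-overlapping or nested, can be sketched by listing every candidate span a detector would then score (a generic sketch, not MGNER's actual detector; names are hypothetical):

```python
def candidate_spans(tokens, max_len=3):
    """All contiguous spans up to max_len tokens; nested spans
    (e.g. 'York' inside 'New York') appear naturally."""
    spans = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_len, len(tokens)) + 1):
            spans.append((i, j, " ".join(tokens[i:j])))
    return spans

spans = candidate_spans(["New", "York", "City"])
texts = [s[2] for s in spans]
# Both the full mention "New York City" and the nested "York"
# are among the candidates.
```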
