Search Results for author: Fangzhao Wu

Found 99 papers, 30 papers with code

Named Entity Recognition with Context-Aware Dictionary Knowledge

no code implementations CCL 2020 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

In addition, we propose an auxiliary term classification task to predict the types of the matched entity names, and jointly train it with the NER model to fuse both contexts and dictionary knowledge into NER.

named-entity-recognition Named Entity Recognition +1

Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models

1 code implementation21 Dec 2023 Jingwei Yi, Yueqi Xie, Bin Zhu, Emre Kiciman, Guangzhong Sun, Xing Xie, Fangzhao Wu

Based on the evaluation, our work makes a key analysis of the underlying reason for the success of the attack, namely the inability of LLMs to distinguish between instructions and external content and the absence of LLMs' awareness to not execute instructions within external content.

Benchmarking

Towards Attack-tolerant Federated Learning via Critical Parameter Analysis

1 code implementation ICCV 2023 Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha

Federated learning is used to train a shared model in a decentralized way without clients sharing private data with each other.

Federated Learning

FedDefender: Client-Side Attack-Tolerant Federated Learning

1 code implementation18 Jul 2023 Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha

Evaluations of real-world scenarios across multiple datasets show that the proposed method enhances the robustness of federated learning against model poisoning attacks.

Federated Learning Knowledge Distillation +1

FedSampling: A Better Sampling Strategy for Federated Learning

no code implementations25 Jun 2023 Tao Qi, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie

In this paper, instead of client uniform sampling, we propose a novel data uniform sampling strategy for federated learning (FedSampling), which can effectively improve the performance of federated learning especially when client data size distribution is highly imbalanced across clients.

Federated Learning Privacy Preserving

Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark

1 code implementation17 May 2023 Wenjun Peng, Jingwei Yi, Fangzhao Wu, Shangxi Wu, Bin Zhu, Lingjuan Lyu, Binxing Jiao, Tong Xu, Guangzhong Sun, Xing Xie

Companies have begun to offer Embedding as a Service (EaaS) based on these LLMs, which can benefit various natural language processing (NLP) tasks for customers.

Model extraction

Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher

1 code implementation4 Apr 2023 Jiawei Shao, Fangzhao Wu, Jun Zhang

While federated learning is promising for privacy-preserving collaborative learning without revealing local data, it remains vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.

Federated Learning Knowledge Distillation +2

Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias

no code implementations1 Mar 2023 Shangxi Wu, Qiuyang He, Fangzhao Wu, Jitao Sang, YaoWei Wang, Changsheng Xu

In this work, we found that the backdoor attack can construct an artificial bias similar to the model bias derived in standard training.

Backdoor Attack Knowledge Distillation

Rethinking Multi-Interest Learning for Candidate Matching in Recommender Systems

1 code implementation28 Feb 2023 Yueqi Xie, Jingqi Gao, Peilin Zhou, Qichen Ye, Yining Hua, Jaeboum Kim, Fangzhao Wu, Sunghun Kim

To address these issues, we propose the REMI framework, consisting of an Interest-aware Hard Negative mining strategy (IHN) and a Routing Regularization (RR) method.

Recommendation Systems

Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting

1 code implementation13 Feb 2023 Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen

In order to address this issue, we propose GAS, a \shorten approach that can successfully adapt existing robust AGRs to non-IID settings.

Federated Learning

Robust Federated Learning against both Data Heterogeneity and Poisoning Attack via Aggregation Optimization

no code implementations10 Nov 2022 Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim

Since at each round, the number of tunable parameters optimized on the server side equals the number of participating clients (thus independent of the model size), we are able to train a global model with massive parameters using only a small amount of proxy data (e. g., around one hundred samples).

Federated Learning

Federated Unlearning for On-Device Recommendation

no code implementations20 Oct 2022 Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, Hao Wang

It removes a user's contribution by rolling back and calibrating the historical parameter updates and then uses these updates to speed up federated recommender reconstruction.

Recommendation Systems

Effective and Efficient Query-aware Snippet Extraction for Web Search

1 code implementation17 Oct 2022 Jingwei Yi, Fangzhao Wu, Chuhan Wu, Xiaolong Huang, Binxing Jiao, Guangzhong Sun, Xing Xie

In this paper, we propose an effective query-aware webpage snippet extraction method named DeepQSE, aiming to select a few sentences which can best summarize the webpage content in the context of input query.

Sentence

FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning

1 code implementation7 Jun 2022 Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Zhongliang Yang, Yongfeng Huang, Xing Xie

In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data.

Fairness Privacy Preserving +1

Robust Quantity-Aware Aggregation for Federated Learning

no code implementations22 May 2022 Jingwei Yi, Fangzhao Wu, Huishuai Zhang, Bin Zhu, Tao Qi, Guangzhong Sun, Xing Xie

Federated learning (FL) enables multiple clients to collaboratively train models without sharing their local data, and becomes an important privacy-preserving machine learning framework.

Federated Learning Privacy Preserving

FedCL: Federated Contrastive Learning for Privacy-Preserving Recommendation

no code implementations21 Apr 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

In this paper, we propose a federated contrastive learning method named FedCL for privacy-preserving recommendation, which can exploit high-quality negative samples for effective model training with privacy well protected.

Contrastive Learning Privacy Preserving

News Recommendation with Candidate-aware User Modeling

no code implementations10 Apr 2022 Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang

Existing methods for news recommendation usually model user interest from historical clicked news without the consideration of candidate news.

News Recommendation

ProFairRec: Provider Fairness-aware News Recommendation

1 code implementation10 Apr 2022 Tao Qi, Fangzhao Wu, Chuhan Wu, Peijie Sun, Le Wu, Xiting Wang, Yongfeng Huang, Xing Xie

To learn provider-fair representations from biased data, we employ provider-biased representations to inherit provider bias from data.

Fairness News Recommendation

FUM: Fine-grained and Fast User Modeling for News Recommendation

no code implementations10 Apr 2022 Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang

The core idea of FUM is to concatenate the clicked news into a long document and transform user modeling into a document modeling task with both intra-news and inter-news word-level interactions.

News Recommendation

Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder

no code implementations1 Apr 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

In this paper, we propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder, which can reduce the dependency of adversarial fair models on data with labeled sensitive attributes.

Attribute Fairness +1

Unified and Effective Ensemble Knowledge Distillation

no code implementations1 Apr 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

In addition, we weight the distillation loss based on the overall prediction correctness of the teacher ensemble to distill high-quality knowledge.

Knowledge Distillation Transfer Learning

End-to-end Learnable Diversity-aware News Recommendation

no code implementations1 Apr 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Different from existing news recommendation methods that are usually based on point- or pair-wise ranking, in LeaDivRec we propose a more effective list-wise news recommendation model.

News Recommendation

FairRank: Fairness-aware Single-tower Ranking Framework for News Recommendation

no code implementations1 Apr 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Since candidate news selection can be biased, we propose to use a shared candidate-aware user model to match user interest with a real displayed candidate news and a random news, respectively, to learn a candidate-aware user embedding that reflects user interest in candidate news and a candidate-invariant user embedding that indicates intrinsic user interest.

Attribute Fairness +1

Are Big Recommendation Models Fair to Cold Users?

no code implementations28 Feb 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

They are usually learned on historical user behavior data to infer user interest and predict future user behaviors (e. g., clicks).

Fairness Recommendation Systems

Quality-aware News Recommendation

no code implementations28 Feb 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

In this paper, we propose a quality-aware news recommendation method named QualityRec that can effectively improve the quality of recommended news.

News Recommendation

NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better

no code implementations ACL 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

In this paper, we propose a very simple yet effective method named NoisyTune to help better finetune PLMs on downstream tasks by adding some noise to the parameters of PLMs before fine-tuning.

No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices

no code implementations16 Feb 2022 Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie

In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.

Federated Learning Knowledge Distillation +1

UA-FedRec: Untargeted Attack on Federated News Recommendation

1 code implementation14 Feb 2022 Jingwei Yi, Fangzhao Wu, Bin Zhu, Jing Yao, Zhulin Tao, Guangzhong Sun, Xing Xie

Our study reveals a critical security issue in existing federated news recommendation systems and calls for research efforts to address the issue.

Federated Learning News Recommendation +2

FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling

no code implementations10 Feb 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

However, existing general FL poisoning methods for degrading model performance are either ineffective or not concealed in poisoning federated recommender systems.

Federated Learning Recommendation Systems

Game of Privacy: Towards Better Federated Platform Collaboration under Privacy Restriction

no code implementations10 Feb 2022 Chuhan Wu, Fangzhao Wu, Tao Qi, Yanlin Wang, Yuqing Yang, Yongfeng Huang, Xing Xie

To solve the game, we propose a platform negotiation method that simulates the bargaining among platforms and locally optimizes their policies via gradient descent.

Vertical Federated Learning

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark

1 code implementation5 Dec 2021 Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang

Nowadays, due to the breakthrough in natural language generation (NLG), including machine translation, document summarization, image captioning, etc NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred billion word generations per day.

Document Summarization Image Captioning +3

Tiny-NewsRec: Effective and Efficient PLM-based News Recommendation

1 code implementation2 Dec 2021 Yang Yu, Fangzhao Wu, Chuhan Wu, Jingwei Yi, Qi Liu

We further propose a two-stage knowledge distillation method to improve the efficiency of the large PLM-based news recommendation model while maintaining its performance.

Knowledge Distillation Natural Language Understanding +1

Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation

1 code implementation EMNLP 2021 Jingwei Yi, Fangzhao Wu, Chuhan Wu, Ruixuan Liu, Guangzhong Sun, Xing Xie

However, the computation and communication cost of directly learning many existing news recommendation models in a federated way are unacceptable for user clients.

Federated Learning News Recommendation +1

Uni-FedRec: A Unified Privacy-Preserving News Recommendation Framework for Model Training and Online Serving

no code implementations Findings (EMNLP) 2021 Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie

In this paper, we propose a unified news recommendation framework, which can utilize user data locally stored in user clients to train models and serve users in a privacy-preserving way.

News Generation News Recommendation +2

UserBERT: Contrastive User Model Pre-training

no code implementations3 Sep 2021 Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Xing Xie

Two self-supervision tasks are incorporated in UserBERT for user model pre-training on unlabeled user behavior data to empower user modeling.

FedKD: Communication Efficient Federated Learning via Knowledge Distillation

no code implementations30 Aug 2021 Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie

Instead of directly communicating the large models between clients and server, we propose an adaptive mutual distillation framework to reciprocally learn a student and a teacher model on each client, where only the student model is shared by different clients and updated collaboratively to reduce the communication cost.

Federated Learning Knowledge Distillation

Fastformer: Additive Attention Can Be All You Need

9 code implementations20 Aug 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

In this way, Fastformer can achieve effective context modeling with linear complexity.

 Ranked #1 on News Recommendation on MIND (using extra training data)

News Recommendation Text Classification +1

Smart Bird: Learnable Sparse Attention for Efficient and Effective Transformer

no code implementations20 Aug 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Binxing Jiao, Daxin Jiang, Yongfeng Huang, Xing Xie

We then sample token pairs based on their probability scores derived from the sketched attention matrix to generate different sparse attention index matrices for different attention heads.

Is News Recommendation a Sequential Recommendation Task?

no code implementations20 Aug 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

News recommendation is often modeled as a sequential recommendation task, which assumes that there are rich short-term dependencies over historical clicked news.

News Recommendation Sequential Recommendation

Personalized News Recommendation: Methods and Challenges

no code implementations16 Jun 2021 Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Xing Xie

Instead of following the conventional taxonomy of news recommendation methods, in this paper we propose a novel perspective to understand personalized news recommendation based on its core problems and the associated techniques and challenges.

News Recommendation Recommendation Systems

DebiasGAN: Eliminating Position Bias in News Recommendation with Adversarial Learning

no code implementations11 Jun 2021 Chuhan Wu, Fangzhao Wu, Yongfeng Huang

It is important to eliminate the effect of position biases on the recommendation model to accurately target user interests.

News Recommendation Position

HieRec: Hierarchical User Interest Modeling for Personalized News Recommendation

no code implementations ACL 2021 Tao Qi, Fangzhao Wu, Chuhan Wu, Peiru Yang, Yang Yu, Xing Xie, Yongfeng Huang

Instead of a single user embedding, in our method each user is represented in a hierarchical interest tree to better capture their diverse and multi-grained interest in news.

News Recommendation

One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers

no code implementations Findings (ACL) 2021 Chuhan Wu, Fangzhao Wu, Yongfeng Huang

In addition, we propose a multi-teacher hidden loss and a multi-teacher distillation loss to transfer the useful knowledge in both hidden states and soft labels from multiple teacher PLMs to the student model.

Knowledge Distillation Language Modelling +1

Rethinking InfoNCE: How Many Negative Samples Do You Need?

no code implementations27 May 2021 Chuhan Wu, Fangzhao Wu, Yongfeng Huang

We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.

Informativeness Mutual Information Estimation

Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs

no code implementations23 May 2021 Chen Chen, Xuanli He, Lingjuan Lyu, Fangzhao Wu

In this work, we bridge this gap by first presenting an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model) by only querying a limited number of queries.

Attribute Inference Attack +4

Personalized News Recommendation with Knowledge-aware Interactive Matching

1 code implementation20 Apr 2021 Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang

Our method interactively models candidate news and user interest to facilitate their accurate matching.

Knowledge Graphs News Recommendation

Empowering News Recommendation with Pre-trained Language Models

1 code implementation15 Apr 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Our PLM-empowered news recommendation models have been deployed to the Microsoft News platform, and achieved significant gains in terms of both click and pageview in both English-speaking and global markets.

Natural Language Understanding News Recommendation

DebiasedRec: Bias-aware User Modeling and Click Prediction for Personalized News Recommendation

no code implementations15 Apr 2021 Jingwei Yi, Fangzhao Wu, Chuhan Wu, Qifei Li, Guangzhong Sun, Xing Xie

The core of our method includes a bias representation module, a bias-aware user modeling module, and a bias-aware click prediction module.

News Recommendation

MM-Rec: Multimodal News Recommendation

no code implementations15 Apr 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Most of existing news representation methods learn news representations only from news texts while ignore the visual information in news like images.

News Recommendation object-detection +1

NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application

no code implementations Findings (EMNLP) 2021 Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Qi Liu

However, existing language models are pre-trained and distilled on general corpus like Wikipedia, which has some gaps with the news domain and may be suboptimal for news intelligence.

Knowledge Distillation Language Modelling +2

FeedRec: News Feed Recommendation with Various User Feedbacks

no code implementations9 Feb 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Besides, the feed recommendation models trained solely on click behaviors cannot optimize other objectives such as user engagement.

News Recommendation

FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation

no code implementations9 Feb 2021 Chuhan Wu, Fangzhao Wu, Yang Cao, Yongfeng Huang, Xing Xie

To incorporate high-order user-item interactions, we propose a user-item graph expansion method that can find neighboring users with co-interacted items and exchange their embeddings for expanding the local user-item graphs in a privacy-preserving way.

Privacy Preserving

Neural News Recommendation with Negative Feedback

no code implementations12 Jan 2021 Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Xing Xie

The dwell time of news reading is an important clue for user interest modeling, since short reading dwell time usually indicates low and even negative interest.

News Recommendation

SentiRec: Sentiment Diversity-aware Neural News Recommendation

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

We learn user representations from browsed news representations, and compute click scores based on user and candidate news representations.

News Recommendation

DA-Transformer: Distance-aware Transformer

no code implementations NAACL 2021 Chuhan Wu, Fangzhao Wu, Yongfeng Huang

Since the raw weighted real distances may not be optimal for adjusting self-attention weights, we propose a learnable sigmoid function to map them into re-scaled coefficients that have proper ranges.

Improving Attention Mechanism with Query-Value Interaction

no code implementations8 Oct 2020 Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

We propose a query-value interaction function which can learn query-aware attention values, and combine them with the original values and attention weights to form the final output.

PTUM: Pre-training User Model from Unlabeled User Behaviors via Self-supervision

1 code implementation Findings of the Association for Computational Linguistics 2020 Chuhan Wu, Fangzhao Wu, Tao Qi, Jianxun Lian, Yongfeng Huang, Xing Xie

Motivated by pre-trained language models which are pre-trained on large-scale unlabeled corpus to empower many downstream tasks, in this paper we propose to pre-train user models from large-scale unlabeled user behaviors data.

FedCTR: Federated Native Ad CTR Prediction with Multi-Platform User Behavior Data

1 code implementation23 Jul 2020 Chuhan Wu, Fangzhao Wu, Tao Di, Yongfeng Huang, Xing Xie

On each platform a local user model is used to learn user embeddings from the local user behaviors on that platform.

Click-Through Rate Prediction Privacy Preserving

Attentive Pooling with Learnable Norms for Text Representation

no code implementations ACL 2020 Chuhan Wu, Fangzhao Wu, Tao Qi, Xiaohui Cui, Yongfeng Huang

Different from existing pooling methods that use a fixed pooling norm, we propose to learn the norm in an end-to-end manner to automatically find the optimal ones for text representation in different tasks.

Fine-grained Interest Matching for Neural News Recommendation

no code implementations ACL 2020 Heyuan Wang, Fangzhao Wu, Zheng Liu, Xing Xie

Existing studies generally represent each user as a single vector and then match the candidate news vector, which may lose fine-grained information for recommendation.

News Recommendation

FairRec: Fairness-aware News Recommendation with Decomposed Adversarial Learning

no code implementations30 Jun 2020 Chuhan Wu, Fangzhao Wu, Xiting Wang, Yongfeng Huang, Xing Xie

In this paper, we propose a fairness-aware news recommendation approach with decomposed adversarial learning and orthogonality regularization, which can alleviate unfairness in news recommendation brought by the biases of sensitive user attributes.

Attribute Fairness +1

Graph Enhanced Representation Learning for News Recommendation

no code implementations31 Mar 2020 Suyu Ge, Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Existing news recommendation methods achieve personalization by building accurate news representations from news content and user representations from their direct interactions with news (e. g., click), while ignoring the high-order relatedness between users and news.

Graph Attention News Recommendation +1

FedNER: Privacy-preserving Medical Named Entity Recognition with Federated Learning

no code implementations20 Mar 2020 Suyu Ge, Fangzhao Wu, Chuhan Wu, Tao Qi, Yongfeng Huang, Xing Xie

Since the labeled data in different platforms usually has some differences in entity type and annotation criteria, instead of constraining different platforms to share the same model, we decompose the medical NER model in each platform into a shared module and a private module.

Federated Learning Medical Named Entity Recognition +4

Reviews Meet Graphs: Enhancing User and Item Representations for Recommendation with Hierarchical Attentive Graph Neural Network

no code implementations IJCNLP 2019 Chuhan Wu, Fangzhao Wu, Tao Qi, Suyu Ge, Yongfeng Huang, Xing Xie

In the review content-view, we propose to use a hierarchical model to first learn sentence representations from words, then learn review representations from sentences, and finally learn user/item representations from reviews.

MULTI-VIEW LEARNING Representation Learning +1

Neural News Recommendation with Heterogeneous User Behavior

no code implementations IJCNLP 2019 Chuhan Wu, Fangzhao Wu, Mingxiao An, Tao Qi, Jianqiang Huang, Yongfeng Huang, Xing Xie

In the user representation module, we propose an attentive multi-view learning framework to learn unified representations of users from their heterogeneous behaviors such as search queries, clicked news and browsed webpages.

MULTI-VIEW LEARNING News Recommendation

NPA: Neural News Recommendation with Personalized Attention

no code implementations12 Jul 2019 Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie

Since different words and different news articles may have different informativeness for representing news and users, we propose to apply both word- and news-level attention mechanism to help our model attend to important words and news articles.

Informativeness News Recommendation

Neural News Recommendation with Attentive Multi-View Learning

5 code implementations12 Jul 2019 Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie

In the user encoder we learn the representations of users based on their browsed news and apply attention mechanism to select informative news for user representation learning.

MULTI-VIEW LEARNING News Recommendation +2

Exploring Sequence-to-Sequence Learning in Aspect Term Extraction

no code implementations ACL 2019 Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, Houfeng Wang

Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem.

Position Sentence +1

Neural News Recommendation with Long- and Short-term User Representations

1 code implementation ACL 2019 Mingxiao An, Fangzhao Wu, Chuhan Wu, Kun Zhang, Zheng Liu, Xing Xie

In this paper, we propose a neural news recommendation approach which can learn both long- and short-term user representations.

News Recommendation

Hierarchical User and Item Representation with Three-Tier Attention for Recommendation

no code implementations NAACL 2019 Chuhan Wu, Fangzhao Wu, Junxin Liu, Yongfeng Huang

In this paper, we propose a hierarchical user and item representation model with three-tier attention to learn user and item representations from reviews for recommendation.

Informativeness Recommendation Systems +1

Neural Review Rating Prediction with Hierarchical Attentions and Latent Factors

no code implementations29 May 2019 Xianchen Wang, Hongtao Liu, Peiyi Wang, Fangzhao Wu, Hongyan Xu, Wenjun Wang, Xing Xie

In this paper, we propose a hierarchical attention model fusing latent factor model for rating prediction with reviews, which can focus on important words and informative reviews.

Informativeness

NRPA: Neural Recommendation with Personalized Attention

5 code implementations29 May 2019 Hongtao Liu, Fangzhao Wu, Wenjun Wang, Xianchen Wang, Pengfei Jiao, Chuhan Wu, Xing Xie

In this paper we propose a neural recommendation approach with personalized attention to learn personalized representations of users and items from reviews.

Informativeness News Recommendation +1

Neural Chinese Named Entity Recognition via CNN-LSTM-CRF and Joint Training with Word Segmentation

1 code implementation26 Apr 2019 Fangzhao Wu, Junxin Liu, Chuhan Wu, Yongfeng Huang, Xing Xie

Besides, the training data for CNER in many domains is usually insufficient, and annotating enough training data for CNER is very expensive and time-consuming.

Chinese Named Entity Recognition named-entity-recognition +1

Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via Posterior Regularization

no code implementations26 Apr 2019 Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie

Luckily, the unlabeled data is usually easy to collect and many high-quality Chinese lexicons are off-the-shelf, both of which can provide useful information for CWS.

Chinese Word Segmentation Segmentation

Logic Rules Powered Knowledge Graph Embedding

no code implementations9 Mar 2019 Pengwei Wang, Dejing Dou, Fangzhao Wu, Nisansa de Silva, Lianwen Jin

And then, to put both triples and mined logic rules within the same semantic space, all triples in the knowledge graph are represented as first-order logic.

Knowledge Graph Embedding Link Prediction +1

Skeptical Deep Learning with Distribution Correction

no code implementations9 Nov 2018 Mingxiao An, Yongzhou Chen, Qi Liu, Chuanren Liu, Guangyi Lv, Fangzhao Wu, Jianhui Ma

Recently deep neural networks have been successfully used for various classification tasks, especially for problems with massive perfectly labeled training data.

Classification General Classification

Detecting Tweets Mentioning Drug Name and Adverse Drug Reaction with Hierarchical Tweet Representation and Multi-Head Self-Attention

1 code implementation WS 2018 Chuhan Wu, Fangzhao Wu, Junxin Liu, Sixing Wu, Yongfeng Huang, Xing Xie

This paper describes our system for the first and third shared tasks of the third Social Media Mining for Health Applications (SMM4H) workshop, which aims to detect the tweets mentioning drug names and adverse drug reactions.

Neural Chinese Word Segmentation with Dictionary Knowledge

no code implementations11 Jul 2018 Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie

The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.

Chinese Word Segmentation Multi-Task Learning +1

Neural Metaphor Detecting with CNN-LSTM Model

no code implementations WS 2018 Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, Yongfeng Huang

In addition, we compare the performance of the softmax classifier and conditional random field (CRF) for sequential labeling in this task.

Machine Translation POS +1

THU\_NGN at IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases with Deep LSTM

no code implementations IJCNLP 2017 Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Sixing Wu, Zhigang Yuan

Since the existing valence-arousal resources of Chinese are mainly in word-level and there is a lack of phrase-level ones, the Dimensional Sentiment Analysis for Chinese Phrases (DSAP) task aims to predict the valence-arousal ratings for Chinese affective words and phrases automatically.

Opinion Mining POS +2

Active Sentiment Domain Adaptation

no code implementations ACL 2017 Fangzhao Wu, Yongfeng Huang, Jun Yan

Instead of the source domain sentiment classifiers, our approach adapts the general-purpose sentiment lexicons to target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode, as well as the domain-specific sentiment similarities among words mined from unlabeled samples of target domain.

Active Learning Domain Adaptation +2

Cannot find the paper you are looking for? You can Submit a new open access paper.