Search Results for author: Dianqi Li

Found 11 papers, 6 papers with code

TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack

1 code implementation 27 Oct 2022 Yu Cao, Dianqi Li, Meng Fang, Tianyi Zhou, Jun Gao, Yibing Zhan, Dacheng Tao

We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models that produces fluent and grammatical adversarial contexts while maintaining gold answers.

Adversarial Attack · Question Answering +1

Phrase-level Textual Adversarial Attack with Label Preservation

1 code implementation Findings (NAACL) 2022 Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy

Generating high-quality textual adversarial examples is critical for investigating the pitfalls of natural language processing (NLP) models and further promoting their robustness.

Adversarial Attack · Sentence

Towards Robust and Efficient Contrastive Textual Representation Learning

no code implementations 1 Jan 2021 Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin

There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence.

Contrastive Learning · Representation Learning

Contextualized Perturbation for Textual Adversarial Attack

1 code implementation NAACL 2021 Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan

Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.

Adversarial Attack · Language Modelling

A Mixture of $h-1$ Heads is Better than $h$ Heads

no code implementations ACL 2020 Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith

Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.

Language Modelling · Machine Translation +1

A Mixture of $h-1$ Heads is Better than $h$ Heads

no code implementations 13 May 2020 Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith

Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.

Language Modelling · Machine Translation +1

Contextual Text Style Transfer

no code implementations Findings of the Association for Computational Linguistics 2020 Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, Jingjing Liu

To realize high-quality style transfer with natural context preservation, we propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context.

Sentence · Style Transfer +2

Toward Interpretability of Dual-Encoder Models for Dialogue Response Suggestions

no code implementations 2 Mar 2020 Yitong Li, Dianqi Li, Sushant Prakash, Peng Wang

To improve the interpretability of dual-encoder models, we design a novel regularization loss that minimizes the mutual information between unimportant words and the desired labels, in addition to the original attention method, so that important words are emphasized while unimportant words are de-emphasized.

Word Embeddings

Generating Diverse and Accurate Visual Captions by Comparative Adversarial Learning

1 code implementation 3 Apr 2018 Dianqi Li, Qiuyuan Huang, Xiaodong He, Lei Zhang, Ming-Ting Sun

By contrasting generated captions with human-written captions and image-mismatched captions, the caption generator effectively exploits the inherent characteristics of human language and generates more discriminative captions.

Generative Adversarial Network

Adversarial Ranking for Language Generation

1 code implementation NeurIPS 2017 Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, Ming-Ting Sun

Rather than training the discriminator to learn and assign an absolute binary predicate to each individual data sample, the proposed RankGAN analyzes and ranks a collection of human-written and machine-written sentences against a reference group.

Generative Adversarial Network · Text Generation
