Search Results for author: Jiaao Chen

Found 27 papers, 18 papers with code

Focus on the Action: Learning to Highlight and Summarize Jointly for Email To-Do Items Summarization

no code implementations • Findings (ACL) 2022 • Kexun Zhang, Jiaao Chen, Diyi Yang

Automatic email to-do item generation is the task of generating to-do items from a given email, helping people get an overview of their emails and schedule their daily work.

Simple Conversational Data Augmentation for Semi-supervised Abstractive Dialogue Summarization

1 code implementation • EMNLP 2021 • Jiaao Chen, Diyi Yang

Abstractive conversation summarization has received growing attention, yet most current state-of-the-art summarization models rely heavily on human-annotated summaries.

Abstractive Dialogue Summarization • Data Augmentation

Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach

no code implementations • 16 Nov 2023 • Yanchen Liu, Mingyu Derek Ma, Wenna Qin, Azure Zhou, Jiaao Chen, Weiyan Shi, Wei Wang, Diyi Yang

Using COVID-19 as a testbed domain, our experiments demonstrate a significant alignment between the susceptibility scores estimated by our computational modeling and human judgments, confirming the effectiveness of this latent modeling approach.

Misinformation

Unlearn What You Want to Forget: Efficient Unlearning for LLMs

1 code implementation • 31 Oct 2023 • Jiaao Chen, Diyi Yang

Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data; however, this process might suffer from privacy issues and violations of data protection regulations.

DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks

1 code implementation • 29 Sep 2023 • Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie

Moreover, DyVal-generated samples are not only evaluation sets but also helpful data for fine-tuning, improving the performance of LLMs on existing benchmarks.
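
For intuition, here is a minimal sketch of graph-based dynamic sample generation in the spirit of DyVal, assuming an arithmetic task synthesized from a random DAG; the structure, names, and problem format are illustrative assumptions, not the authors' implementation.

```python
import random

# A sketch of DAG-style dynamic sample generation: each node is either a
# leaf value or an operation over previously generated nodes, so every
# sampled graph yields a fresh problem with a known ground-truth answer.

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def generate_sample(num_nodes=5, seed=None):
    rng = random.Random(seed)
    values, steps = [], []
    for i in range(num_nodes):
        if i < 2:  # leaves: plain integers
            values.append(rng.randint(1, 9))
            steps.append(f"x{i} = {values[i]}")
        else:      # internal nodes: combine two earlier nodes
            a, b = rng.sample(range(i), 2)
            op = rng.choice(list(OPS))
            values.append(OPS[op](values[a], values[b]))
            steps.append(f"x{i} = x{a} {op} x{b}")
    question = "Given:\n" + "\n".join(steps) + f"\nCompute x{num_nodes - 1}."
    return question, values[-1]

question, answer = generate_sample(seed=0)
print(question)
print("ground truth:", answer)
```

Because samples are drawn on the fly, such a generator sidesteps test-set contamination and, as the excerpt notes, the same samples can double as fine-tuning data.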

Logical Reasoning

Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models

no code implementations • 1 Aug 2023 • Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen

Compositional generalization empowers LLMs to solve problems that are harder than the ones they have seen (i.e., easy-to-hard generalization), a critical reasoning capability of human-like intelligence.
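
As a rough illustration of the prompting idea the title suggests, the sketch below assembles a prompt from basic-skill demonstrations plus one composed example before posing the target problem. The helper name and all skill strings are invented placeholders, not the paper's prompts.

```python
# A hedged sketch of a skills-in-context style prompt: show the model the
# basic skills and one example of composing them, then pose the problem.

def build_skic_prompt(skills, composed_example, problem):
    parts = ["Basic skills:"]
    parts += [f"- {name}: {demo}" for name, demo in skills]
    parts += ["", "Composed example:", composed_example, "", f"Problem: {problem}"]
    return "\n".join(parts)

prompt = build_skic_prompt(
    skills=[("add digits", "3 + 4 = 7"), ("carry", "9 + 5 = 14: write 4, carry 1")],
    composed_example="27 + 45: units 7+5=12 (write 2, carry 1); tens 2+4+1=7 -> 72",
    problem="38 + 46 = ?",
)
print(prompt)
```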

Math • Math Word Problem Solving

Can Large Language Models Transform Computational Social Science?

1 code implementation • 12 Apr 2023 • Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang

We conclude that today's LLMs can augment the CSS research pipeline in two ways: (1) serving as zero-shot data annotators on human annotation teams, and (2) bootstrapping challenging creative generation tasks (e.g., explaining the underlying attributes of a text).
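
To make the first usage concrete, here is a minimal sketch of the zero-shot annotation pattern, assuming a generic `complete(prompt)` callable wrapping whatever LLM client is available; the label set and prompt wording are illustrative, not the paper's.

```python
# Zero-shot annotation sketch: constrain the model to a fixed label set
# and fall back to a default when it answers off-schema. `complete` is a
# placeholder for any LLM client, not an API from the paper.

LABELS = ["polite", "impolite", "neutral"]

def annotate(text: str, complete) -> str:
    prompt = (
        "Label the following utterance with exactly one of "
        f"{', '.join(LABELS)}.\n\nUtterance: {text}\nLabel:"
    )
    label = complete(prompt).strip().lower()
    return label if label in LABELS else "neutral"

# Example with a stub model so the snippet runs standalone:
print(annotate("Could you kindly resend the file?", lambda p: "polite"))
```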

Persuasiveness

A Cheaper and Better Diffusion Language Model with Soft-Masked Noise

1 code implementation • 10 Apr 2023 • Jiaao Chen, Aston Zhang, Mu Li, Alex Smola, Diyi Yang

Diffusion models based on iterative denoising have recently been proposed and leveraged for various generation tasks such as image generation.

Denoising • Image Generation +1

Is ChatGPT a General-Purpose Natural Language Processing Task Solver?

1 code implementation • 8 Feb 2023 • Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang

Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.

Arithmetic Reasoning • Zero-Shot Learning

Parameter-Efficient Fine-Tuning Design Spaces

no code implementations • 4 Jan 2023 • Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, Diyi Yang

We discover the following design patterns: (i) group layers in a spindle pattern; (ii) allocate the number of trainable parameters to layers uniformly; (iii) tune all the groups; (iv) assign proper tuning strategies to different groups.
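
As a loose illustration of patterns (i) and (ii), the sketch below partitions a layer stack into spindle-shaped groups (small at the ends, larger in the middle) and gives every layer the same trainable-parameter budget. The 1-2-2-1 group weighting and all numbers are assumptions for illustration, not the paper's exact recipe.

```python
# Spindle grouping + uniform per-layer budget, as a toy illustration.

def spindle_groups(num_layers: int, num_groups: int = 4):
    # Weight middle groups more heavily so group sizes form a spindle shape.
    weights = [1, 2, 2, 1][:num_groups]
    total = sum(weights)
    sizes = [max(1, round(num_layers * w / total)) for w in weights]
    sizes[-1] = num_layers - sum(sizes[:-1])  # absorb rounding drift
    groups, start = [], 0
    for size in sizes:
        groups.append(list(range(start, start + size)))
        start += size
    return groups

def uniform_budget(num_layers: int, total_trainable: int):
    # Pattern (ii): spread the trainable-parameter budget evenly over layers.
    return {layer: total_trainable // num_layers for layer in range(num_layers)}

print(spindle_groups(12))          # [[0, 1], [2, 3, 4, 5], [6, 7, 8, 9], [10, 11]]
print(uniform_budget(12, 120_000)) # 10_000 trainable parameters per layer
```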

Human-in-the-loop Abstractive Dialogue Summarization

no code implementations • 19 Dec 2022 • Jiaao Chen, Mohan Dodda, Diyi Yang

Specifically, we ask humans to highlight the salient information to be included in summaries as local feedback, and to compare summaries in terms of coherence, accuracy, coverage, conciseness, and overall quality as global feedback.

Abstractive Dialogue Summarization

WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model for Financial Domain

no code implementations • 31 Oct 2022 • Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang

To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain.

FLUE • Language Modelling

VALUE: Understanding Dialect Disparity in NLU

1 code implementation • ACL 2022 • Caleb Ziems, Jiaao Chen, Camille Harris, Jessica Anderson, Diyi Yang

To understand disparities in current models and to facilitate more dialect-competent NLU systems, we introduce the VernAcular Language Understanding Evaluation (VALUE) benchmark, a challenging variant of GLUE that we created with a set of lexical and morphosyntactic transformation rules.
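
For a feel of what such a transformation rule looks like, here is a deliberately simplified sketch of copula deletion, one rule family discussed in this line of work. A regex is only a stand-in for what the benchmark implements over proper linguistic analyses, so treat this as illustrative, not as VALUE's implementation.

```python
import re

# Toy morphosyntactic transformation: delete a present-tense copula before
# a predicate, e.g. "She is going home" -> "She going home".

def drop_present_copula(sentence: str) -> str:
    return re.sub(r"\b(is|are)\s+(?=\w)", "", sentence, count=1)

print(drop_present_copula("She is going home"))   # She going home
print(drop_present_copula("They are ready now"))  # They ready now
```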

Linguistic Acceptability • Natural Language Understanding

An Empirical Survey of Data Augmentation for Limited Data Learning in NLP

no code implementations • 14 Jun 2021 • Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, Diyi Yang

NLP has achieved great progress in the past decade through the use of neural models and large labeled datasets.

Data Augmentation • News Classification +1

Examining the Ordering of Rhetorical Strategies in Persuasive Requests

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Omar Shaikh, Jiaao Chen, Jon Saad-Falcon, Duen Horng Chau, Diyi Yang

We find that specific (orderings of) strategies interact uniquely with a request's content to impact success rate, and thus the persuasiveness of a request.

Persuasiveness

Local Additivity Based Data Augmentation for Semi-supervised NER

1 code implementation • EMNLP 2020 • Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang

Named Entity Recognition (NER) is one of the first stages in deep language understanding, yet current NER models rely heavily on human-annotated data.

Data Augmentation • named-entity-recognition +3

MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification

2 code implementations • ACL 2020 • Jiaao Chen, Zichao Yang, Diyi Yang

This paper presents MixText, a semi-supervised learning method for text classification, which uses our newly designed data augmentation method called TMix.
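
At its core, TMix is mixup-style interpolation of hidden states: two examples are encoded up to a chosen layer, their hidden states are mixed with a Beta-sampled coefficient, and the mixture continues through the remaining layers (labels are mixed with the same coefficient). A minimal sketch follows; `encode_to_layer` and `encode_from_layer` are placeholders standing in for the two halves of a real encoder such as BERT, not the paper's code.

```python
import numpy as np

# TMix-style hidden-state interpolation with stub encoder halves.

def tmix(x_a, x_b, encode_to_layer, encode_from_layer, alpha=0.75, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)          # keep the mix closer to x_a, as in mixup
    h_a, h_b = encode_to_layer(x_a), encode_to_layer(x_b)
    h_mix = lam * h_a + (1.0 - lam) * h_b
    return encode_from_layer(h_mix), lam  # lam also weights the label mix

# Identity stubs so the snippet runs standalone:
rng = np.random.default_rng(0)
out, lam = tmix(np.ones(4), np.zeros(4), lambda x: x, lambda h: h, rng=rng)
print(lam, out)
```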

Data Augmentation • General Classification +1

Semi-Supervised Models via Data Augmentation for Classifying Interactive Affective Responses

1 code implementation • 23 Apr 2020 • Jiaao Chen, Yuwei Wu, Diyi Yang

We present semi-supervised models with data augmentation (SMDA), a semi-supervised text classification system to classify interactive affective responses.

Data Augmentation • Semi-Supervised Text Classification +2

Let's Make Your Request More Persuasive: Modeling Persuasive Strategies via Semi-Supervised Neural Nets on Crowdfunding Platforms

no code implementations • NAACL 2019 • Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, Eduard Hovy

Modeling what makes a request persuasive - eliciting the desired response from a reader - is critical to the study of propaganda, behavioral economics, and advertising.

Persuasiveness • Sentence

Incorporating Structured Commonsense Knowledge in Story Completion

no code implementations • 1 Nov 2018 • Jiaao Chen, Jianshu Chen, Zhou Yu

The ability to select an appropriate story ending is the first step towards perfect narrative comprehension.

Story Completion
