Search Results for author: Shuning Jin

Found 5 papers, 3 papers with code

Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock Humorous Headlines

1 code implementation • SemEval 2020 • Shuning Jin, Yue Yin, XianE Tang, Ted Pedersen

We use pretrained transformer-based language models in SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines.
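The abstract names the approach only at a high level. As a hedged illustration of the general recipe (a pretrained transformer fitted with a regression head that scores an edited headline's funniness), here is a minimal sketch; the model name, input pairing, and untrained head are assumptions for illustration, not the authors' actual system.

```python
# Minimal sketch of the general recipe: a pretrained transformer with a
# regression head (num_labels=1) scoring an edited headline's funniness.
# The model choice and input pairing are assumptions, not the paper's
# system, and the freshly initialized head below produces meaningless
# scores until fine-tuned on the task data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed stand-in model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

def score_headline(original: str, edited: str) -> float:
    """Predict a funniness score for an edited headline (hypothetical input scheme)."""
    inputs = tokenizer(original, edited, return_tensors="pt", truncation=True)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze()
    return score.item()

print(score_headline(
    "Senate passes budget bill after long debate",
    "Senate passes pizza bill after long debate",
))
```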

Discrete Latent Variable Representations for Low-Resource Text Classification

1 code implementation • ACL 2020 • Shuning Jin, Sam Wiseman, Karl Stratos, Karen Livescu

While much work on deep latent variable models of text uses continuous latent variables, discrete latent variables are interesting because they are more interpretable and typically more space-efficient.

Tasks: General Classification, Sentence, +2 more
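To make the idea of a discrete latent representation concrete, the sketch below uses the Gumbel-softmax relaxation, one common way to train discrete latents; this is only an illustration of the general idea, not one of the paper's models, and the bag-of-words encoder and all dimensions are made up.

```python
# Illustrative sketch of a discrete latent variable for text
# classification: the encoder picks one of K discrete codes, and the
# classifier sees only that code's embedding. Gumbel-softmax is an
# assumed estimator here; the paper studies discrete latents more broadly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteLatentClassifier(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, num_codes=32, num_classes=2):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, emb_dim)  # simple bag-of-words encoder
        self.to_logits = nn.Linear(emb_dim, num_codes)     # scores over K discrete codes
        self.codebook = nn.Embedding(num_codes, emb_dim)   # one vector per discrete code
        self.classifier = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        h = self.embed(token_ids)
        code_logits = self.to_logits(h)
        # Differentiable "sample" of a one-hot code: hard=True yields a
        # discrete code in the forward pass while gradients flow through
        # the soft relaxation in the backward pass.
        z = F.gumbel_softmax(code_logits, tau=1.0, hard=True)
        latent = z @ self.codebook.weight                  # look up the sampled code vector
        return self.classifier(latent)

model = DiscreteLatentClassifier()
tokens = torch.randint(0, 10000, (4, 20))                  # dummy batch of 4 texts
print(model(tokens).shape)                                 # torch.Size([4, 2])
```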

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations • ICLR 2019 • Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Tasks: Language Modelling, Sentence
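The core idea behind contextualized word representations is that the same word type receives a different vector in each context. The small sketch below demonstrates this; it substitutes a pretrained transformer LM for ELMo's biLSTM purely for convenience, so the model choice is an assumption.

```python
# Sketch of contextualized word representations: "bank" gets different
# vectors in different sentences. A transformer LM stands in for ELMo's
# bidirectional LSTM here, purely for illustration.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector of the first occurrence of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]      # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v1 = word_vector("I deposited cash at the bank", "bank")
v2 = word_vector("We sat on the bank of the river", "bank")
print(torch.cosine_similarity(v1, v2, dim=0))  # below 1: the vectors differ by context
```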
