Search Results for author: Bryan McCann

Found 20 papers, 13 papers with code

The Thieves on Sesame Street are Polyglots - Extracting Multilingual Models from Monolingual APIs

no code implementations EMNLP 2020 Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher

Pre-training in natural language processing makes it easier for an adversary with only query access to a victim model to reconstruct a local copy of the victim by training with gibberish input data paired with the victim's labels for that data.
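
As a rough illustration of the extraction recipe described above, the sketch below queries a hypothetical victim_classify endpoint with random-token "gibberish" and trains a toy per-token-vote student on the returned labels. The victim body, vocabulary, and student are all stand-ins; the paper extracts full pretrained models, not this toy.

import random
from collections import Counter, defaultdict

# Hypothetical victim API: in the paper the adversary only has query access
# to an endpoint like this; the body here is a stand-in.
def victim_classify(text: str) -> str:
    return "positive" if len(text) % 2 == 0 else "negative"

VOCAB = ["the", "blorp", "zxcv", "run", "apple", "qwt", "snow"]

def gibberish(n_tokens: int = 8) -> str:
    # Random token sequences: the paper shows that even nonsensical queries,
    # paired with the victim's labels, suffice to train a usable copy.
    return " ".join(random.choices(VOCAB, k=n_tokens))

# Step 1: query the victim and record its labels.
dataset = []
for _ in range(1000):
    query = gibberish()
    dataset.append((query, victim_classify(query)))

# Step 2: train a local student on (gibberish, victim-label) pairs.
# Toy student: per-token label counts (a naive-Bayes-style vote); the paper
# instead finetunes a pretrained encoder, which is what makes the attack work.
token_votes = defaultdict(Counter)
for text, label in dataset:
    for tok in text.split():
        token_votes[tok][label] += 1

def student_classify(text: str) -> str:
    votes = Counter()
    for tok in text.split():
        votes.update(token_votes[tok])
    return votes.most_common(1)[0][0] if votes else "positive"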

Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models

1 code implementation EACL 2021 Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl

In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks.

Language Modelling Natural Language Understanding
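
A minimal sketch of such a joint objective, with a toy encoder standing in for a pretrained text encoder such as RoBERTa; the binary NCE-style energy term and the random noise inputs are simplifications of the paper's noise-contrastive setup.

import torch
import torch.nn as nn

# Stand-in for a pretrained encoder like RoBERTa (assumption: mean-pooled
# features; the real model would be finetuned end to end).
class Encoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
    def forward(self, ids):                 # ids: (batch, seq)
        return self.emb(ids).mean(dim=1)    # (batch, dim) pooled features

encoder = Encoder()
clf_head = nn.Linear(64, 2)      # standard NLU classification head
energy_head = nn.Linear(64, 1)   # scalar energy E(x) over inputs

def joint_loss(ids, labels, noise_ids):
    h = encoder(ids)
    ce = nn.functional.cross_entropy(clf_head(h), labels)
    # Binary NCE-style term: real inputs should get low energy, noise inputs
    # high energy (logistic loss with logit -E).
    e_real = energy_head(h).squeeze(-1)
    e_noise = energy_head(encoder(noise_ids)).squeeze(-1)
    nce = (nn.functional.softplus(e_real).mean()
           + nn.functional.softplus(-e_noise).mean())
    return ce + nce

ids = torch.randint(0, 1000, (8, 16))
noise = torch.randint(0, 1000, (8, 16))   # the paper samples from a noise LM
labels = torch.randint(0, 2, (8,))
loss = joint_loss(ids, labels, noise)
loss.backward()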

CTRLsum: Towards Generic Controllable Text Summarization

1 code implementation • 8 Dec 2020 Junxian He, Wojciech Kryściński, Bryan McCann, Nazneen Rajani, Caiming Xiong

Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system through textual input in the form of a set of keywords or descriptive prompts.

Descriptive Reading Comprehension +1
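
An illustrative sketch of keyword control, assuming a "keywords => source" input format in the spirit of CTRLsum. Note that the stock facebook/bart-large-cnn checkpoint used here was not trained with keyword prefixes and will largely ignore them; CTRLsum finetunes the summarizer on keyword-prefixed inputs, which is what makes the control effective.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

document = "The city council approved the new transit budget after months of debate..."
keywords = "transit budget"  # user-supplied control signal

# Prepend the keywords to the source; the separator is an assumption here.
inputs = tok(f"{keywords} => {document}", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tok.decode(summary_ids[0], skip_special_tokens=True))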

What's New? Summarizing Contributions in Scientific Literature

no code implementations • 6 Nov 2020 Hiroaki Hayashi, Wojciech Kryściński, Bryan McCann, Nazneen Rajani, Caiming Xiong

To overcome this problem, we introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work, making it easier to identify the key findings shared in articles.

Disentanglement

Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality

no code implementations Findings (EMNLP) 2021 Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, Nitish Keskar, Thamar Solorio

To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT.
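
A sketch of a char2subword-style module under stated assumptions: a character-level encoder (a GRU here, where the paper uses a Transformer) is trained to mimic a frozen pretrained subword embedding table, represented below by random vectors.

import torch
import torch.nn as nn

class Char2Subword(nn.Module):
    def __init__(self, n_chars=128, char_dim=32, out_dim=768):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.encoder = nn.GRU(char_dim, out_dim // 2, batch_first=True,
                              bidirectional=True)
        self.proj = nn.Linear(out_dim, out_dim)
    def forward(self, char_ids):            # (batch, max_chars) per subword
        h, _ = self.encoder(self.char_emb(char_ids))
        return self.proj(h.mean(dim=1))     # (batch, out_dim)

# Train to mimic a frozen pretrained subword table (random stand-in here;
# in the paper this would be BERT's embedding matrix).
pretrained_table = torch.randn(30000, 768)
module = Char2Subword()
opt = torch.optim.Adam(module.parameters(), lr=1e-3)

char_ids = torch.randint(0, 128, (16, 12))    # character ids of 16 subwords
target = pretrained_table[torch.randint(0, 30000, (16,))]
loss = nn.functional.mse_loss(module(char_ids), target)
loss.backward()
opt.step()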

GeDi: Generative Discriminator Guided Sequence Generation

3 code implementations Findings (EMNLP) 2021 Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, Nazneen Fatema Rajani

While large-scale language models (LMs) are able to imitate the distribution of natural language well enough to generate realistic text, it is difficult to control which regions of the distribution they generate.

Attribute Linguistic Acceptability +1
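
A sketch of the GeDi-style decoding step: two class-conditional LMs act as a discriminator via Bayes rule, and the resulting per-token attribute posterior reweights the base LM's next-token distribution. Tensor shapes and the weight omega are illustrative assumptions.

import torch

def gedi_step(base_logits, cc_desired_logits, cc_undesired_logits, omega=30.0):
    logp_d = torch.log_softmax(cc_desired_logits, dim=-1)    # log p(x_t | desired)
    logp_u = torch.log_softmax(cc_undesired_logits, dim=-1)  # log p(x_t | undesired)
    # Bayes rule with equal class priors: log p(desired | token) per candidate.
    attr_log_post = logp_d - torch.logsumexp(torch.stack([logp_d, logp_u]), dim=0)
    # Push the base distribution toward tokens the discriminator favors.
    guided = torch.log_softmax(base_logits, dim=-1) + omega * attr_log_post
    return torch.softmax(guided, dim=-1)

vocab_size = 50257
probs = gedi_step(torch.randn(vocab_size), torch.randn(vocab_size),
                  torch.randn(vocab_size))
next_token = torch.multinomial(probs, num_samples=1)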

SummEval: Re-evaluating Summarization Evaluation

5 code implementations • 24 Jul 2020 Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, Dragomir Radev

The scarcity of comprehensive up-to-date studies on evaluation metrics for text summarization and the lack of consensus regarding evaluation protocols continue to inhibit progress.

Text Summarization

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation

1 code implementation ACL 2020 Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong

Word embeddings derived from human-generated corpora inherit strong gender bias which can be further amplified by downstream models.

Word Embeddings
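
The projection at the core of (Double-)Hard Debias, with random stand-in vectors; the "double" step, removing a frequency-related principal component before the gender projection, is noted in the comments.

import numpy as np

rng = np.random.default_rng(0)
dim = 300
he, she = rng.normal(size=dim), rng.normal(size=dim)

# Gender direction, e.g., built from difference vectors like he - she.
g = he - she
g /= np.linalg.norm(g)

def hard_debias(v: np.ndarray, direction: np.ndarray) -> np.ndarray:
    # Remove the component of v along the (unit-norm) bias direction.
    return v - (v @ direction) * direction

word_vec = rng.normal(size=dim)
debiased = hard_debias(word_vec, g)
assert abs(debiased @ g) < 1e-10  # no remaining gender component

# Double-Hard Debias first projects out a frequency-related principal
# component (found via PCA over the vocabulary) and then applies the same
# projection with the gender direction.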

ProGen: Language Modeling for Protein Generation

2 code implementations • 8 Mar 2020 Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R. Eguchi, Po-Ssu Huang, Richard Socher

Generative modeling for protein engineering is key to solving fundamental problems in synthetic biology, medicine, and material science.

Language Modelling

BERT is Not an Interlingua and the Bias of Tokenization

1 code implementation WS 2019 Jasdeep Singh, Bryan McCann, Richard Socher, Caiming Xiong

Multilingual transfer learning can benefit both high- and low-resource languages, but the source of these improvements is not well understood.

Clustering Transfer Learning

Evaluating the Factual Consistency of Abstractive Text Summarization

4 code implementations EMNLP 2020 Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher

Currently used metrics for assessing summarization algorithms do not account for whether summaries are factually consistent with source documents.

Abstractive Text Summarization Fact Checking +2
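
As a rough illustration only (not the paper's method, which trains a dedicated weakly-supervised consistency classifier), one can score a summary sentence against its source with an off-the-shelf NLI model.

from transformers import pipeline

nli = pipeline("text-classification", model="facebook/bart-large-mnli")
source = "The company reported a 10% rise in quarterly revenue."
summary_sentence = "Revenue fell sharply last quarter."
# Entailment vs. contradiction serves as a crude consistency signal.
result = nli({"text": source, "text_pair": summary_sentence})
print(result)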

CTRL: A Conditional Transformer Language Model for Controllable Generation

7 code implementations Preprint 2019 Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, Richard Socher

Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text.

Language Modelling Text Generation
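
A sketch of control-code conditioning as in CTRL: generation is steered by prepending a control code (here "Reviews", one of CTRL's codes) to the prompt. The hub checkpoint is a sizable download, and the sampling settings are illustrative.

from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("Salesforce/ctrl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/ctrl")

# The model was trained so that the leading control code determines the
# domain and style of the continuation.
prompt = "Reviews This laptop is"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_length=40, do_sample=True, temperature=0.7)
print(tok.decode(out[0]))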

Neural Text Summarization: A Critical Evaluation

no code implementations IJCNLP 2019 Wojciech Kryściński, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher

Text summarization aims at compressing long documents into a shorter form that conveys the most important parts of the original document.

Text Summarization

Explain Yourself! Leveraging Language Models for Commonsense Reasoning

1 code implementation ACL 2019 Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher

Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world-knowledge or reasoning over information not immediately present in the input.

Common Sense Reasoning World Knowledge

XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering

no code implementations ICLR 2020 Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher

XLDA contrasts with, and performs markedly better than, a more naive approach that aggregates examples across languages such that each example is entirely in a single language.

Cross-Lingual Natural Language Inference Data Augmentation +3
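
A sketch of XLDA-style augmentation for NLI, with a hypothetical translate() standing in for a real machine translation system: one field of the example is swapped into another language while the other field and the label are kept.

import random

def translate(text: str, lang: str) -> str:
    # Placeholder; real XLDA uses actual machine translations.
    return f"<{lang}> {text}"

LANGS = ["de", "fr", "es", "ru"]

def xlda_augment(example: dict) -> dict:
    lang = random.choice(LANGS)
    field = random.choice(["premise", "hypothesis"])
    augmented = dict(example)
    augmented[field] = translate(example[field], lang)
    return augmented  # label unchanged; only one segment switches language

ex = {"premise": "A man is playing a guitar.",
      "hypothesis": "A person makes music.", "label": "entailment"}
print(xlda_augment(ex))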

Unifying Question Answering, Text Classification, and Regression via Span Extraction

no code implementations • 19 Apr 2019 Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher

Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering, text classification, and regression models are significantly different.

General Classification Multi-Task Learning +4
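
A sketch of the span-extraction reframing suggested by the title: append the label options to the input so a QA-style model can select the gold label as a span, letting classification share its output layer with question answering. Helper names and the "options:" separator are illustrative.

def build_span_example(sentence, labels, gold):
    # The gold label occurs verbatim in the context, so classification
    # reduces to predicting its start/end offsets.
    context = sentence + " options: " + " ".join(labels)
    start = context.index(gold)
    return {"context": context, "answer_start": start,
            "answer_end": start + len(gold)}

ex = build_span_example("The movie was a delightful surprise.",
                        labels=["positive", "negative"], gold="positive")
print(ex["context"][ex["answer_start"]:ex["answer_end"]])  # -> positive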

The Natural Language Decathlon: Multitask Learning as Question Answering

5 code implementations ICLR 2019 Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher

Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting.

Domain Adaptation Machine Translation +11
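
A sketch of the decaNLP framing: every task becomes a (question, context, answer) triple, so a single model (MQAN in the paper) can train on all of them. The examples below are illustrative.

examples = [
    {"question": "What is the translation from English to German?",
     "context": "The house is small.",
     "answer": "Das Haus ist klein."},
    {"question": "What is the sentiment?",
     "context": "I loved every minute of this film.",
     "answer": "positive"},
    {"question": "What is the SQL translation?",
     "context": "List the names of all employees.",
     "answer": "SELECT name FROM employees"},
]
for ex in examples:
    print(f"Q: {ex['question']}\nC: {ex['context']}\nA: {ex['answer']}\n")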

Revisiting Activation Regularization for Language RNNs

no code implementations • 3 Aug 2017 Stephen Merity, Bryan McCann, Richard Socher

Both of these techniques require minimal modification to existing RNN architectures and result in performance improvements comparable or superior to more complicated regularization techniques or custom cell architectures.

L2 Regularization Language Modelling
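
A sketch of the two penalties the paper revisits: activation regularization (AR) penalizes the magnitude of RNN outputs, and temporal activation regularization (TAR) penalizes their change across timesteps. The coefficients below are illustrative.

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
x = torch.randn(8, 20, 32)            # (batch, time, features)
outputs, _ = rnn(x)                   # (batch, time, hidden)

alpha, beta = 2.0, 1.0
ar = alpha * outputs.pow(2).mean()                             # AR: L2 on activations
tar = beta * (outputs[:, 1:] - outputs[:, :-1]).pow(2).mean()  # TAR: temporal difference

task_loss = outputs.sum() * 0.0       # placeholder for the language modeling loss
loss = task_loss + ar + tar
loss.backward()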
