Search Results for author: Andrew Hoang

Found 1 paper, 1 paper with code

Efficient Adaptation of Pretrained Transformers for Abstractive Summarization

2 code implementations • 1 Jun 2019 • Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, Yejin Choi

Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks.

Abstractive Text Summarization • Natural Language Understanding
