Integrating Multimodal Information in Large Pretrained Transformers

ACL 2020 · Wasifur Rahman, Md. Kamrul Hasan, Sangwu Lee, Amir Zadeh, Chengfeng Mao, Louis-Philippe Morency, Ehsan Hoque

Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP. Fine-tuning the trained contextual models on task-specific datasets has been the key to achieving superior performance downstream...
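As a concrete illustration of the fine-tuning recipe the abstract refers to, the sketch below adapts a pretrained BERT checkpoint to a small downstream classification task using Hugging Face Transformers. The checkpoint name, toy data, label count, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: fine-tuning a pretrained Transformer on task-specific data.
# Checkpoint, data, and hyperparameters are illustrative, not from the paper.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed binary task for illustration
)

# Toy task-specific examples; in practice, the downstream dataset goes here.
texts = ["a great movie", "a dull movie"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps, purely for demonstration
    outputs = model(**batch, labels=labels)  # loss computed against labels
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because all of the encoder's weights are updated alongside the task head, this is the standard full fine-tuning regime the abstract describes, as opposed to using the pretrained model as a frozen feature extractor.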
