ST-BERT: Cross-modal Language Model Pre-training For End-to-end Spoken Language Understanding

23 Oct 2020 · Minjeong Kim, Gyuwan Kim, Sang-Woo Lee, Jung-Woo Ha

Language model pre-training has shown promising results in various downstream tasks. In this context, we introduce a cross-modal pre-trained language model, called Speech-Text BERT (ST-BERT), to tackle end-to-end spoken language understanding (E2E SLU) tasks...


