Relative Positional Encoding for Speech Recognition and Direct Translation

20 May 2020 · Ngoc-Quan Pham, Thanh-Le Ha, Tuan-Nam Nguyen, Thai-Son Nguyen, Elizabeth Salesky, Sebastian Stueker, Jan Niehues, Alexander Waibel

Transformer models are powerful sequence-to-sequence architectures that can directly map speech inputs to transcriptions or translations. However, the mechanism for modeling positions in this model was tailored for text modeling, and is thus less suited to acoustic inputs...
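The relative-position idea the abstract refers to can be sketched in the spirit of learned relative-position embeddings (Shaw et al., 2018): the attention logit between positions i and j depends on the clipped offset j - i rather than on absolute indices. This is a minimal toy sketch, not the paper's implementation; `rel_emb`, the clipping window `max_rel`, and all dimensions here are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def relative_attention(q, k, rel_emb, max_rel=2):
    """Attention weights with relative-position biases (toy sketch).

    q, k    : lists of query/key vectors (lists of floats).
    rel_emb : dict mapping a clipped relative distance (j - i) to a
              bias vector of the same dimension as q/k; this flat
              parameterization is an assumption for illustration.
    Returns a row-stochastic matrix of attention weights.
    """
    d = len(q[0])
    weights = []
    for i, qi in enumerate(q):
        row = []
        for j, kj in enumerate(k):
            r = max(-max_rel, min(max_rel, j - i))  # clip the offset
            # content term q·k plus positional term q·rel_emb[r]
            logit = sum(a * (b + c) for a, b, c in zip(qi, kj, rel_emb[r]))
            row.append(logit / math.sqrt(d))
        weights.append(softmax(row))
    return weights
```

Because the positional term depends only on j - i, identical content shifted along the time axis produces the same logits, which is the property that makes this scheme attractive for long, translation-invariant acoustic inputs.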
