Complex Transformer: A Framework for Modeling Complex-Valued Sequence

22 Oct 2019 · Muqiao Yang, Martin Q. Ma, Dongyu Li, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov

While deep learning has received a surge of interest across a variety of fields in recent years, most deep learning models barely use complex numbers. However, speech, signal, and audio data are naturally complex-valued after a Fourier transform, and studies have shown that complex-valued networks can offer a potentially richer representation. In this paper, we propose the Complex Transformer, which incorporates the transformer model as a backbone for sequence modeling; we also develop attention and encoder-decoder networks that operate on complex-valued input. The model achieves state-of-the-art performance on the MusicNet dataset and an In-phase Quadrature (IQ) signal dataset.
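
The core idea of attention over complex-valued sequences can be illustrated by expanding the complex matrix products into their real and imaginary parts. The sketch below is a minimal illustration of this expansion, not the authors' exact formulation; the function name, tensor shapes, and the choice to apply softmax separately to the real and imaginary score matrices are assumptions made for the example.

```python
# Minimal sketch of complex-valued scaled dot-product attention (an assumption,
# not the paper's exact rule): Q, K, V are represented by separate real and
# imaginary tensors, and complex products are expanded into real-valued matmuls.
import torch
import torch.nn.functional as F

def complex_attention(q_re, q_im, k_re, k_im, v_re, v_im):
    """Each argument is a (batch, seq_len, d) real tensor holding the real or
    imaginary part of complex-valued queries, keys, or values."""
    d = q_re.size(-1)
    scale = d ** -0.5
    # Complex product Q K^H = (A + iB)(C - iD)^T, expanded componentwise.
    scores_re = (q_re @ k_re.transpose(-2, -1) + q_im @ k_im.transpose(-2, -1)) * scale
    scores_im = (q_im @ k_re.transpose(-2, -1) - q_re @ k_im.transpose(-2, -1)) * scale
    # One simple choice: normalize real and imaginary score matrices separately.
    attn_re = F.softmax(scores_re, dim=-1)
    attn_im = F.softmax(scores_im, dim=-1)
    # Complex product attn * V, again expanded into real and imaginary parts.
    out_re = attn_re @ v_re - attn_im @ v_im
    out_im = attn_re @ v_im + attn_im @ v_re
    return out_re, out_im

# Example usage with random complex-valued inputs.
b, t, d = 2, 16, 64
parts = [torch.randn(b, t, d) for _ in range(6)]
out_re, out_im = complex_attention(*parts)
print(out_re.shape, out_im.shape)  # torch.Size([2, 16, 64]) for both parts
```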


Datasets

MusicNet

Results from the Paper


| Task                | Dataset  | Model                    | Metric Name      | Metric Value | Global Rank |
|---------------------|----------|--------------------------|------------------|--------------|-------------|
| Music Transcription | MusicNet | Complex Transformer      | APS              | 74.22        | #2          |
| Music Transcription | MusicNet | Complex Transformer      | Number of params | 11.61M       | #6          |
| Music Transcription | MusicNet | Concatenated Transformer | APS              | 71.3         | #4          |
| Music Transcription | MusicNet | Concatenated Transformer | Number of params | 9.79M        | #4          |
