MCSAE: Masked Cross Self-Attentive Encoding for Speaker Embedding

28 Jan 2020 · Soonshin Seo, Ji-Hwan Kim

In general, self-attention mechanisms have been applied to speaker embedding encoding. Previous studies trained self-attention only in a high-level layer, such as the last pooling layer, which diminished the contribution of low-level features to the speaker embedding. We therefore propose masked cross self-attentive encoding (MCSAE) using ResNet, which attends to features from both high-level and low-level layers. Based on multi-layer aggregation, the output features of each residual layer are fed to the MCSAE, where a cross self-attention module learns the interdependence among the input features and a random masking regularization module is applied to prevent overfitting. In this way, the MCSAE increases the weight of frames that carry speaker information. The output features are then concatenated and encoded into the speaker embedding, yielding a more informative representation. Experiments on the VoxCeleb1 evaluation dataset showed an equal error rate of 2.63% and a minimum detection cost function of 0.1453, improving on previous self-attentive encoding and state-of-the-art encoding methods.
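
To make the idea concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract, not the authors' implementation (no official code is available). The class name `CrossSelfAttentiveEncoding`, the shared-context formulation of the "cross" attention, and the `mask_prob` hyperparameter are all illustrative assumptions; the sketch assumes each residual stage's output has already been projected to a common dimension and flattened to (batch, frames, dim).

```python
# Hypothetical sketch of masked cross self-attentive encoding (MCSAE).
# Assumptions: per-layer features are (B, T, D); the cross self-attention
# conditions each layer's frame scores on a context shared across layers;
# random masking drops attention scores during training as regularization.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossSelfAttentiveEncoding(nn.Module):
    def __init__(self, num_layers: int, dim: int, emb_dim: int,
                 mask_prob: float = 0.1):
        super().__init__()
        # One frame-scoring head per residual layer, fed with the frame
        # feature concatenated to the cross-layer context vector.
        self.score = nn.ModuleList(
            [nn.Linear(2 * dim, 1) for _ in range(num_layers)])
        self.mask_prob = mask_prob
        self.embed = nn.Linear(num_layers * dim, emb_dim)

    def forward(self, feats):
        # feats: list of (B, T, D) tensors, one per residual layer.
        # Cross-layer context: mean over frames, then over layers (assumed).
        context = torch.stack(
            [f.mean(dim=1) for f in feats], dim=0).mean(dim=0)  # (B, D)
        pooled = []
        for f, score in zip(feats, self.score):
            B, T, D = f.shape
            ctx = context.unsqueeze(1).expand(B, T, D)
            logits = score(torch.cat([f, ctx], dim=-1)).squeeze(-1)  # (B, T)
            if self.training and self.mask_prob > 0:
                # Random masking regularization: suppress random frames'
                # scores (large negative rather than -inf to avoid NaNs).
                mask = torch.rand_like(logits) < self.mask_prob
                logits = logits.masked_fill(mask, -1e4)
            w = F.softmax(logits, dim=-1)                          # (B, T)
            pooled.append(torch.bmm(w.unsqueeze(1), f).squeeze(1))  # (B, D)
        # Concatenate per-layer pooled features and encode to the embedding.
        return self.embed(torch.cat(pooled, dim=-1))


# Usage with four residual stages of 128-dim features over 50 frames:
feats = [torch.randn(8, 50, 128) for _ in range(4)]
enc = CrossSelfAttentiveEncoding(num_layers=4, dim=128, emb_dim=256)
emb = enc(feats)  # (8, 256) speaker embeddings
```

The shared context vector is what makes the attention "cross": each layer's frame weights depend on all layers' features, so high-level and low-level information jointly shape the pooling, matching the multi-layer aggregation described in the abstract.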
