no code implementations • 27 May 2023 • Linhao Dong, Zhecheng An, Peihao Wu, Jun Zhang, Lu Lu, Zejun Ma
We also observe that the cross-modal representation extracted by CIF-PT achieves better performance on SLU tasks than other neural interfaces, including the dominant speech representations learned from self-supervised pre-training.
no code implementations • Findings (NAACL) 2022 • Yu Lin, Zhecheng An, Peihao Wu, Zejun Ma
To tackle this issue, we propose adding an auxiliary gloss regularizer module to BERT pre-training (GR-BERT) to enhance the modeling of word semantic similarity.
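A minimal sketch of how such a gloss regularizer might be combined with the masked-language-modeling objective, assuming a hinge-style word–gloss alignment loss; the function name, margin, and weighting below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of a gloss regularizer for BERT pre-training.
# Assumes contextual embeddings for target words and encoded gloss
# (definition) sentences are already available as tensors.
import torch
import torch.nn.functional as F

def gloss_regularizer_loss(word_emb, gloss_emb, margin=0.1):
    """Pull each word embedding toward the embedding of its gloss.

    word_emb:  (batch, dim) contextual embeddings of target words
    gloss_emb: (batch, dim) encoded gloss sentences for those words
    """
    word_emb = F.normalize(word_emb, dim=-1)
    gloss_emb = F.normalize(gloss_emb, dim=-1)
    # Cosine similarity between every word and every gloss in the batch.
    sim = word_emb @ gloss_emb.t()          # (batch, batch)
    pos = sim.diagonal()                    # matched word-gloss pairs
    # Mask the diagonal so the hardest negative is a mismatched gloss.
    neg = sim - torch.eye(sim.size(0), device=sim.device) * 1e9
    # Hinge: matched pairs should outscore mismatched ones by `margin`.
    return F.relu(margin + neg.max(dim=1).values - pos).mean()

# Assumed joint objective: MLM loss plus weighted regularizer, e.g.
# total_loss = mlm_loss + lambda_gr * gloss_regularizer_loss(w, g)
```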
no code implementations • 30 Sep 2019 • Zhecheng An, Sicong Liu
We propose a multi-class extension to the Wasserstein GAN, which allows our generative model to learn from both positive and negative samples.
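A minimal sketch of one way a Wasserstein critic loss could be extended to multiple classes, assuming a critic that emits one Wasserstein score per class so that negative samples can be handled as their own class; the signature and class handling here are assumptions, not the paper's exact construction:

```python
# Hypothetical multi-class WGAN critic loss: each sample contributes to
# the Wasserstein objective of its own class only.
import torch

def multiclass_wgan_critic_loss(critic, real, real_labels, fake, fake_labels):
    """critic(x) -> (batch, num_classes) scores; labels are long tensors.

    Selects each sample's own-class score, giving every class (including
    a hypothetical 'negative sample' class) its own critic objective.
    """
    real_scores = critic(real).gather(1, real_labels.unsqueeze(1)).squeeze(1)
    fake_scores = critic(fake).gather(1, fake_labels.unsqueeze(1)).squeeze(1)
    # The critic maximizes E[D_c(real)] - E[D_c(fake)]; return the
    # negation so a standard minimizer can be applied.
    return -(real_scores.mean() - fake_scores.mean())
```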