2 code implementations • 28 Sep 2023 • Yidan Fan, Yongxin Yu, Wenhuan Lu, Yahong Han
Our approach takes into account snippet-level encoded features without the supervision of pseudo labels.
no code implementations • 16 Oct 2022 • Jian Song, Di Liang, Rumei Li, Yuntao Li, Sirui Wang, Minlong Peng, Wei Wu, Yongxin Yu
Transformer-based pre-trained models such as BERT have made great progress in Semantic Sentence Matching.
no code implementations • 20 Oct 2021 • Yanping Zhang, Yongxin Yu
Meanwhile, we propose a 1D Identity Channel-wise Spatio-temporal Convolution (1D-ICSC), which captures temporal relationships at the channel-feature level within a controllable computation budget (governed by parameters G and R).
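The abstract does not spell out the 1D-ICSC design, but a channel-wise (depthwise) temporal convolution with an identity shortcut can be sketched as below. The shapes, the zero-padding choice, and the residual form are assumptions for illustration, not the paper's exact formulation; the roles of G and R (grouping and reduction) are likewise not reproduced here.

```python
import numpy as np

def channelwise_temporal_conv(x, kernels):
    """Channel-wise (depthwise) 1D convolution along the time axis.

    x:       (C, T) feature map (C channels, T time steps) -- assumed layout.
    kernels: (C, K) one temporal kernel per channel, K odd.
    Zero padding keeps the output length T.
    """
    C, T = x.shape
    K = kernels.shape[1]
    pad = K // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))  # pad time axis only
    out = np.empty((C, T), dtype=float)
    for c in range(C):
        for t in range(T):
            out[c, t] = np.dot(xp[c, t:t + K], kernels[c])
    return out

def identity_csc(x, kernels):
    """Identity shortcut plus the channel-wise temporal convolution,
    loosely mirroring the 'Identity Channel-wise' idea (an assumption)."""
    return x + channelwise_temporal_conv(x, kernels)
```

With a unit-impulse kernel per channel, the convolution reduces to the identity, so `identity_csc` simply doubles the input; this makes the residual structure easy to verify.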