no code implementations • IWSLT (EMNLP) 2018 • Yuguang Wang, Liangliang Shi, Linyu Wei, Weifeng Zhu, Jinkun Chen, Zhichao Wang, Shixue Wen, Wei Chen, Yanfeng Wang, Jia Jia
Our final average result on speech translation is 31.02 BLEU.
Automatic Speech Recognition (ASR) +5
no code implementations • ECCV 2020 • Jianchao Zhu, Liangliang Shi, Junchi Yan, Hongyuan Zha
This paper proposes new ways of sample mixing for data augmentation by viewing the process as barycenter generation in a metric space.
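The abstract does not spell out the construction, so the following is only a minimal sketch of the underlying idea, assuming the simplest case in which the barycenter of two samples in Euclidean space reduces to a mixup-style convex combination; the function name, the Beta-distributed weight, and the one-hot label mixing are illustrative assumptions rather than the paper's actual method.

```python
import numpy as np

def euclidean_barycenter_mix(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two samples as a weighted Euclidean barycenter (mixup-style).

    In Euclidean space the weighted barycenter of two points is their
    convex combination; a general metric space would instead require an
    optimal-transport or Frechet-mean solver.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight ~ Beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2   # barycenter of the two inputs
    y_mix = lam * y1 + (1.0 - lam) * y2   # barycenter of the (one-hot) labels
    return x_mix, y_mix

# usage: mix two 32x32 RGB images with one-hot labels over 10 classes
x1, x2 = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = euclidean_barycenter_mix(x1, y1, x2, y2)
```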
no code implementations • 21 Jan 2024 • Liangliang Shi, Zhaoqi Shen, Junchi Yan
Even with features trained by a vanilla Softmax, our extensive experiments show that our method achieves good results using the improved inference scheme at the testing stage.
no code implementations • 29 Sep 2021 • Liangliang Shi, Fangyu Ding, Junchi Yan, Yanjie Duan, Guangjian Tian
Despite the rapid advances in neural temporal point processes (NTPPs), which enjoy high model capacity, standing gaps remain in model expressiveness, predictability, and interpretability, especially given the wide application of event sequence modeling.
no code implementations • 29 Sep 2021 • Yang Li, Yichuan Mo, Liangliang Shi, Junchi Yan, Xiaolu Zhang, Jun Zhou
Although many efforts have been made in backbone architecture design, loss functions, and training techniques, few results address how sampling in the latent space affects final performance, and existing work on the latent space mainly focuses on controllability.
no code implementations • 29 Sep 2021 • Qibing Ren, Liangliang Shi, Lanjun Wang, Junchi Yan
We first show, both theoretically and empirically, that strong smoothing in adversarial training (AT) increases the local smoothness of the loss surface, which benefits robustness but sacrifices the training loss and thus the accuracy of samples near the decision boundary.
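As one hedged illustration of "smoothing in AT", the sketch below uses a TRADES-style objective, where a KL term between clean and adversarial predictions acts as the smoothing component and its weight beta controls the robustness/accuracy trade-off described above; whether this matches the paper's exact formulation is an assumption.

```python
import torch.nn.functional as F

def trades_style_loss(model, x, y, x_adv, beta=6.0):
    """Natural cross-entropy plus a smoothing (KL) term between clean and
    adversarial predictions; larger beta means stronger smoothing, trading
    clean accuracy near the decision boundary for robustness."""
    logits_clean = model(x)
    logits_adv = model(x_adv)          # x_adv assumed precomputed (e.g. PGD)
    ce = F.cross_entropy(logits_clean, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction="batchmean")
    return ce + beta * kl
```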
no code implementations • 1 Jun 2021 • Yang Li, Liangliang Shi, Junchi Yan
Based on this observation, and on the necessary condition of IID generation that samples inverted from the target data should also be IID under the source distribution, we propose a new loss that encourages the inverse samples of real data to stay close to the Gaussian source in latent space, regularizing the generation to be IID with respect to the target distribution.
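A minimal sketch of how such a latent regularizer could look is given below, assuming access to an inverse (encoder) mapping and a standard-Gaussian source; the moment-matching penalty, the `encoder` name, and the weight 0.1 are illustrative assumptions, not the paper's exact loss.

```python
import torch

def gaussian_iid_penalty(z):
    """Encourage inverse samples z of shape (batch, dim) to look like N(0, I).

    A simple moment-matching surrogate: penalize deviation of the batch
    mean from 0 and of the batch covariance from the identity."""
    mean = z.mean(dim=0)
    zc = z - mean
    cov = zc.t() @ zc / (z.shape[0] - 1)
    eye = torch.eye(z.shape[1], device=z.device)
    return mean.pow(2).sum() + (cov - eye).pow(2).sum()

# usage inside a training step (the inverse map `encoder` and the 0.1 weight
# are assumptions for illustration):
# z_real = encoder(x_real)                       # inverse samples of real data
# loss = adversarial_loss + 0.1 * gaussian_iid_penalty(z_real)
```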
no code implementations • 1 Jan 2021 • Liangliang Shi, Yang Li, Junchi Yan
Generative adversarial networks have shown their ability in capturing high-dimensional complex distributions and generating realistic data samples, e.g., images.