no code implementations • 22 Feb 2024 • Junwei Pan, Wei Xue, Ximei Wang, Haibin Yu, Xun Liu, Shijie Quan, Xueming Qiu, Dapeng Liu, Lei Xiao, Jie Jiang
In this paper, we present an industry ad recommendation system, focusing on the challenges and practices of learning appropriate representations.
no code implementations • 20 Mar 2023 • Haibin Yu, Yuxuan Hu, Yao Qian, Ma Jin, Linquan Liu, Shujie Liu, Yu Shi, Yanmin Qian, Edward Lin, Michael Zeng
Code-switching speech refers to the practice of mixing two or more languages within a single utterance.
Automatic Speech Recognition (ASR) +2
no code implementations • 28 Nov 2022 • Enneng Yang, Junwei Pan, Ximei Wang, Haibin Yu, Li Shen, Xihua Chen, Lei Xiao, Jie Jiang, Guibing Guo
In this paper, we propose to measure the task dominance degree of a parameter by the total updates that each task applies to this parameter.
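A minimal sketch of that idea, with illustrative names and shapes that are assumptions rather than the paper's API: accumulate the magnitude of each task's updates on every shared parameter, then treat the task with the largest share of the total updates as dominant on that parameter.

```python
import numpy as np

# Hypothetical sketch: track, per shared parameter, how much of the total
# optimization update each task has contributed.
num_tasks, num_params = 3, 5
accumulated_updates = np.zeros((num_tasks, num_params))

def record_update(task_id, grad, lr=0.01):
    """Add the size of one gradient step taken for `task_id` to its running total."""
    accumulated_updates[task_id] += np.abs(lr * grad)

# Simulate a few optimization steps driven by different tasks.
rng = np.random.default_rng(0)
for step in range(100):
    task_id = step % num_tasks
    record_update(task_id, rng.normal(size=num_params))

# Dominance degree: each task's fraction of the total update on each parameter.
dominance = accumulated_updates / accumulated_updates.sum(axis=0, keepdims=True)
print("dominant task per parameter:", dominance.argmax(axis=0))
```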
1 code implementation • 14 Jun 2022 • Zhongxiang Dai, Yizhou Chen, Haibin Yu, Bryan Kian Hsiang Low, Patrick Jaillet
We prove that both algorithms are asymptotically no-regret even when some or all previous tasks are dissimilar to the current task, and show that RM-GP-UCB enjoys a better theoretical robustness than RM-GP-TS.
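For intuition, here is a toy GP-UCB loop of the kind RM-GP-UCB builds on. The meta-learning part is only caricatured by a fixed weight `w_prev` on a previous task's objective; the paper's actual rule for weighting (and down-weighting dissimilar) previous tasks is more involved, so everything below is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x)             # current task's objective (toy)
f_prev = lambda x: np.sin(3 * x + 0.3)  # a related previous task (toy)

# Fit a GP to a handful of observations from the current task.
X = rng.uniform(0, 2, size=(5, 1))
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

# UCB acquisition, blended with information from the previous task.
candidates = np.linspace(0, 2, 200).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)
beta, w_prev = 2.0, 0.3
ucb = (1 - w_prev) * (mu + beta * sigma) + w_prev * f_prev(candidates).ravel()
x_next = candidates[np.argmax(ucb)]
print("next query point:", x_next)
```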
no code implementations • 17 Apr 2021 • Haibin Yu, Dapeng Liu, Yizhou Chen, Bryan Kian Hsiang Low, Patrick Jaillet
Deep Gaussian processes (DGPs), a hierarchical composition of GP models, have successfully boosted the expressive power of their single-layer counterpart.
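A minimal sketch of what "hierarchical composition of GP models" means: draw a function from one GP prior, then feed its outputs into a second GP prior. The RBF kernel, depth of two, and grid of inputs are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)

# Layer 1: sample f1 ~ GP(0, k) evaluated at x.
K1 = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))
f1 = rng.multivariate_normal(np.zeros(len(x)), K1)

# Layer 2: sample f2 ~ GP(0, k) evaluated at f1(x), i.e. the composition f2(f1(x)).
K2 = rbf_kernel(f1, f1, lengthscale=0.5) + 1e-8 * np.eye(len(x))
f2 = rng.multivariate_normal(np.zeros(len(x)), K2)
print(f2[:5])  # a draw from a two-layer deep GP prior at x
```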
1 code implementation • NeurIPS 2019 • Haibin Yu, Yizhou Chen, Zhongxiang Dai, Kian Hsiang Low, Patrick Jaillet
This paper presents an implicit posterior variational inference (IPVI) framework for DGPs that can ideally recover an unbiased posterior belief and still preserve time efficiency.
no code implementations • 1 Nov 2017 • Haibin Yu, Trong Nghia Hoang, Kian Hsiang Low, Patrick Jaillet
This paper presents a novel variational inference framework for deriving a family of Bayesian sparse Gaussian process regression (SGPR) models whose approximations are variationally optimal with respect to the full-rank GPR model enriched with various corresponding correlation structures of the observation noises.
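For context, a numpy sketch of the standard variationally optimal sparse GP regression predictor (Titsias-style inducing points) with i.i.d. observation noise; the paper's richer correlated-noise structures are not reproduced here, and the toy data and inducing-point locations are assumptions for illustration only.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a.reshape(-1, 1) - b.reshape(1, -1)) ** 2 / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 50)
y = np.sin(X) + 0.1 * rng.normal(size=50)
Z = np.linspace(-3, 3, 10)   # inducing inputs (far fewer than training points)
noise = 0.1 ** 2

# Titsias-style predictive mean: K_*u (K_uu + K_uf K_fu / noise)^{-1} K_uf y / noise.
Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
Kuf = rbf(Z, X)
Sigma = Kuu + Kuf @ Kuf.T / noise

Xs = np.linspace(-3, 3, 5)   # test inputs
Ksu = rbf(Xs, Z)
mean = Ksu @ np.linalg.solve(Sigma, Kuf @ y) / noise
print("sparse GP predictive mean:", mean)
```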