no code implementations • Findings (ACL) 2022 • Xianghong Fang, Jian Li, Lifeng Shang, Xin Jiang, Qun Liu, Dit-yan Yeung
While variational autoencoders (VAEs) have been widely applied to text generation tasks, they suffer from two challenges: insufficient representation capacity and poor controllability.
1 code implementation • 1 Mar 2024 • Xianghong Fang, Jian Li, Qiang Sun, Benyou Wang
Uniformity plays a crucial role in assessing learned representations and contributes to a deeper understanding of self-supervised learning.
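For background on the uniformity notion this entry refers to: below is a minimal sketch of the widely used uniformity loss of Wang and Isola (2020), which measures how evenly L2-normalized embeddings spread over the unit hypersphere. This is standard background, not the metric this paper proposes; the batch size, dimension, and temperature t are illustrative assumptions.

    import torch

    def uniformity_loss(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
        # embeddings: (N, d), assumed L2-normalized onto the unit hypersphere.
        # Lower values indicate a more uniform spread of the representations.
        sq_dists = torch.pdist(embeddings, p=2).pow(2)  # pairwise squared distances
        return sq_dists.mul(-t).exp().mean().log()

    # Usage: evaluate uniformity of a random normalized batch (hypothetical data).
    x = torch.nn.functional.normalize(torch.randn(128, 64), dim=1)
    print(uniformity_loss(x))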
1 code implementation • 16 Jun 2021 • Xianghong Fang, Haoli Bai, Jian Li, Zenglin Xu, Michael Lyu, Irwin King
We further design a discrete latent space for variational attention and mathematically show that our model is free from posterior collapse.
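Discrete latent spaces like the one mentioned here are commonly trained with a categorical reparameterization such as Gumbel-Softmax. The sketch below shows that generic mechanism only, not the authors' variational attention model; the number of latent codes and the temperature are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def sample_discrete_latent(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # logits: (batch, K) unnormalized scores over K discrete latent codes.
        # gumbel_softmax draws a differentiable, approximately one-hot sample,
        # letting gradients flow through the discrete choice during training.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

    logits = torch.randn(4, 16)          # 16 hypothetical latent codes
    z = sample_discrete_latent(logits)   # (4, 16) one-hot samples
    print(z.argmax(dim=-1))              # index of the selected code per example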
no code implementations • 21 Apr 2020 • Xianghong Fang, Haoli Bai, Zenglin Xu, Michael Lyu, Irwin King
Variational autoencoders have been widely applied to natural language generation; however, they suffer from two long-standing problems: information under-representation and posterior collapse.
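Posterior collapse is typically diagnosed through the KL term of the ELBO: when it stays near zero, the approximate posterior has degenerated to the prior and the latent code carries no information about the input. A minimal sketch of that KL for a diagonal Gaussian posterior against a standard normal prior (textbook VAE math, not this paper's specific remedy):

    import torch

    def gaussian_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
        # If this stays near zero throughout training, the posterior has
        # collapsed to the prior and the decoder ignores the latent code.
        return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1)

    mu, logvar = torch.zeros(8, 32), torch.zeros(8, 32)  # posterior equal to prior
    print(gaussian_kl(mu, logvar).mean())                # tensor(0.): collapsed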
no code implementations • 30 Dec 2018 • Xianghong Fang, Haoli Bai, Ziyi Guo, Bin Shen, Steven Hoi, Zenglin Xu
In this paper, we propose Domain-Adversarial Residual-Transfer (DART) learning, a new unsupervised domain adaptation method for deep neural networks, to tackle cross-domain image classification tasks.
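Domain-adversarial methods in this family build on the gradient reversal layer of DANN (Ganin and Lempitsky, 2015): the feature extractor is trained to confuse a domain classifier. The sketch below shows that common building block under the assumption that DART uses a similar adversarial signal; it is not the paper's full residual-transfer architecture.

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; multiplies the gradient by -lambda on
        # the backward pass, so upstream features are pushed to *confuse*
        # the domain classifier.
        @staticmethod
        def forward(ctx, x, lambd: float):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        return GradReverse.apply(x, lambd)

    # Usage: reversed features feed a (hypothetical) two-domain classifier.
    features = torch.randn(4, 256, requires_grad=True)
    domain_logits = torch.nn.Linear(256, 2)(grad_reverse(features))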