no code implementations • 2 May 2021 • Chang Cui, Jinzhu Jia, Yijun Xiao, Huiming Zhang
Using the debiased estimator, we establish multiple testing procedures.
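The abstract does not specify the testing procedure, but multiple testing on debiased estimates is commonly done by converting the (approximately normal) debiased statistics to p-values and applying a false-discovery-rate controlling rule such as Benjamini-Hochberg. A minimal sketch under that assumption (the coefficient values and unit standard errors below are hypothetical):

```python
import math

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean
    rejection mask controlling the false discovery rate at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:  # step-up threshold alpha*rank/m
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

# Hypothetical debiased coefficient estimates with unit standard errors
beta_hat = [0.0, 2.8, 0.1, 3.5, -0.2]
z_scores = [b / 1.0 for b in beta_hat]
# Two-sided normal p-value: P(|Z| > |z|) = erfc(|z| / sqrt(2))
pvals = [math.erfc(abs(z) / math.sqrt(2)) for z in z_scores]
print(bh_reject(pvals))  # -> [False, True, False, True, False]
```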
no code implementations • EACL 2021 • Yijun Xiao, William Yang Wang
Despite improvements in performance on various natural language generation tasks, deep neural models are prone to hallucinating facts that are incorrect or nonexistent.
no code implementations • 24 Dec 2020 • Xing Shi, Yijun Xiao, Kevin Knight
Using different EoS types in target sentences of different lengths exposes and eliminates this implicit smoothing.
no code implementations • 30 Dec 2019 • Yijun Xiao, William Yang Wang
However, Kullback-Leibler (KL) divergence-based total correlation is metric-agnostic and sensitive to data samples.
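For reference, the KL-divergence-based total correlation criticized here is conventionally defined as the divergence between the joint latent distribution and the product of its marginals:

```latex
\mathrm{TC}(z) \;=\; D_{\mathrm{KL}}\!\left(q(z)\,\Big\|\,\prod_{j} q(z_j)\right)
\;=\; \mathbb{E}_{q(z)}\Big[\log q(z) - \sum_{j} \log q(z_j)\Big]
```

This depends only on probability ratios, not on any distance between data points, which is what "metric-agnostic" refers to.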
no code implementations • 27 Aug 2019 • Yijun Xiao, William Yang Wang
We propose syntax-aware variational autoencoders (SAVAEs) that dedicate a subspace of the latent dimensions, dubbed the syntactic latent, to representing the syntactic structures of sentences.
no code implementations • 18 Nov 2018 • Yijun Xiao, William Yang Wang
Reliable uncertainty quantification is a first step towards building explainable, transparent, and accountable artificial intelligence systems.
no code implementations • 1 Nov 2018 • Deren Lei, Zichen Sun, Yijun Xiao, William Yang Wang
To bridge this gap, we study the role of SGD implicit regularization in deep learning systems.
no code implementations • 31 Oct 2018 • Yijun Xiao, Tiancheng Zhao, William Yang Wang
We introduce an improved variational autoencoder (VAE) for text modeling with topic information explicitly modeled as a Dirichlet latent variable.
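As a rough illustration of a Dirichlet latent variable (the 3-topic prior and all names below are hypothetical, not taken from the paper), topic proportions can be drawn from a Dirichlet prior via the standard normalized-Gamma construction; the resulting simplex vector would then condition the decoder:

```python
import random

def sample_dirichlet(alpha, rng=random):
    """Sample topic proportions from a Dirichlet prior by
    normalizing independent Gamma draws (a standard construction)."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

# Hypothetical symmetric prior over 3 topics
alpha = [0.8, 0.8, 0.8]
theta = sample_dirichlet(alpha)
assert abs(sum(theta) - 1.0) < 1e-9  # a valid point on the simplex
```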
no code implementations • 30 Apr 2017 • Ganbin Zhou, Ping Luo, Rongyu Cao, Yijun Xiao, Fen Lin, Bo Chen, Qing He
Then, with the proposed tree-structured search method, the model generates the most probable responses in the form of dependency trees, which are finally flattened into sequences as the system output.
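One simple way to flatten a dependency tree back into a word sequence is to collect every node and sort on the original word positions; this is only an illustrative scheme (the tree encoding and field names below are hypothetical), not necessarily the paper's method:

```python
def flatten(tree):
    """Flatten a dependency tree into its surface word sequence by
    collecting all nodes and sorting on their sentence positions."""
    nodes = []
    def collect(node):
        nodes.append((node["index"], node["word"]))
        for child in node.get("children", []):
            collect(child)
    collect(tree)
    return [word for _, word in sorted(nodes)]

# Hypothetical tree for "I like tea": "like" heads "I" and "tea"
tree = {"word": "like", "index": 1, "children": [
    {"word": "I", "index": 0},
    {"word": "tea", "index": 2},
]}
print(flatten(tree))  # -> ['I', 'like', 'tea']
```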
no code implementations • 1 Feb 2016 • Yijun Xiao, Kyunghyun Cho
Document classification tasks have primarily been tackled at the word level.