no code implementations • ICML 2020 • Wenxian Shi, Hao Zhou, Ning Miao, Lei LI
Interpretability is important in text generation, as it allows the generation process to be guided by interpretable attributes.
1 code implementation • 1 Aug 2023 • Ning Miao, Yee Whye Teh, Tom Rainforth
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning.
no code implementations • 4 Apr 2023 • Jialin Liu, Ning Miao, Chongzhou Fang, Houman Homayoun, Han Wang
In particular, we first identify a vulnerability of DTW for ECG classification, i.e., the correlation between warping-path choice and prediction results.
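As a point of reference for the warping paths discussed above, here is a minimal sketch of classic dynamic time warping (DTW) between two 1-D sequences; this is only the standard algorithm, not the paper's attack, and the function name is hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW distance between two 1-D sequences (illustrative sketch).

    The warping path implicitly chosen here is the one minimizing
    accumulated cost; the paper studies how that choice correlates
    with ECG classification results.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # The path may advance in a, in b, or in both at once.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```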
1 code implementation • 31 May 2022 • Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim
We introduce InstaAug, a method for automatically learning input-specific augmentations from data.
1 code implementation • ICLR 2022 • Ning Miao, Emile Mathieu, N. Siddharth, Yee Whye Teh, Tom Rainforth
InteL-VAEs use an intermediary set of latent variables to control the stochasticity of the encoding process, before mapping these in turn to the latent representation using a parametric function that encapsulates our desired inductive bias(es).
1 code implementation • ACL 2020 • Ning Miao, Yuxuan Song, Hao Zhou, Lei LI
It has been a common approach to pre-train a language model on a large corpus and fine-tune it on task-specific data.
no code implementations • ACL 2019 • Huangzhao Zhang, Hao Zhou, Ning Miao, Lei LI
Efficiently building an adversarial attacker for natural language processing (NLP) tasks is a real challenge.
no code implementations • 12 Jul 2020 • Yuxuan Song, Ning Miao, Hao Zhou, Lantao Yu, Mingxuan Wang, Lei LI
Auto-regressive sequence generative models trained by Maximum Likelihood Estimation suffer from the exposure bias problem in practical finite-sample scenarios.
1 code implementation • NeurIPS 2019 • Ning Miao, Hao Zhou, Chengqi Zhao, Wenxian Shi, Lei LI
Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase.
1 code implementation • 16 Jun 2019 • Wenxian Shi, Hao Zhou, Ning Miao, Lei LI
To enhance the controllability and interpretability, one can replace the Gaussian prior with a mixture of Gaussian distributions (GM-VAE), whose mixture components could be related to hidden semantic aspects of data.
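To make the mixture-of-Gaussians prior concrete, here is a small sketch of sampling latents from such a prior; the component parameters are hypothetical placeholders, not the GM-VAE paper's settings.

```python
import numpy as np

def sample_gm_prior(n, means, stds, weights, seed=None):
    """Draw n scalar latents z from a mixture-of-Gaussians prior.

    Illustrative sketch only: in a GM-VAE each mixture component
    would correspond to a hidden semantic aspect of the data.
    """
    rng = np.random.default_rng(seed)
    means, stds, weights = map(np.asarray, (means, stds, weights))
    # First pick a mixture component per sample, then draw from
    # that component's Gaussian.
    k = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[k], stds[k])
```

Replacing the single Gaussian prior with this mixture lets one steer generation by conditioning on (or inspecting) the sampled component index `k`.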
1 code implementation • 14 Nov 2018 • Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei LI
In real-world applications of natural language generation, there are often constraints on the target sentences in addition to fluency and naturalness requirements.
no code implementations • ICLR 2018 • Ning Miao, Hengliang Wang, Ran Le, Chongyang Tao, Mingyue Shang, Rui Yan, Dongyan Zhao
Traditional recurrent neural network (RNN) or convolutional neural network (CNN) based sequence-to-sequence models cannot handle tree-structured data well.