2 code implementations • ICML 2018 • Yunchen Pu, Shuyang Dai, Zhe Gan, Wei-Yao Wang, Guoyin Wang, Yizhe Zhang, Ricardo Henao, Lawrence Carin
Distinct from most existing approaches, which learn only conditional distributions, the proposed model aims to learn a joint distribution over multiple random variables (domains).
no code implementations • 15 Nov 2017 • Wenlin Wang, Yunchen Pu, Vinay Kumar Verma, Kai Fan, Yizhe Zhang, Changyou Chen, Piyush Rai, Lawrence Carin
We present a deep generative model for learning to predict classes not seen at training time.
no code implementations • NeurIPS 2017 • Yunchen Pu, Wei-Yao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, Lawrence Carin
A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data.
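The two factorizations of the joint can be sketched numerically. Below is a minimal, hypothetical toy with a 1-D linear-Gaussian encoder and decoder (all parameter values are illustrative assumptions, not from the paper): direction (i) samples a code given data, direction (ii) samples data given a prior code, and each direction yields a log-density for the same joint over (x, z).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, std):
    # Log-density of a 1-D Gaussian N(mean, std^2) at v
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((v - mean) / std) ** 2

# Direction (i): observed data fed through the encoder to yield a code.
x = rng.normal(loc=2.0, scale=1.0)             # draw from a toy data distribution
z_from_x = rng.normal(loc=0.5 * x, scale=0.3)  # encoder sample q(z | x)

# Direction (ii): a latent code drawn from a simple prior, decoded to data.
z = rng.normal()                               # prior p(z) = N(0, 1)
x_from_z = rng.normal(loc=2.0 * z, scale=0.5)  # decoder sample p(x | z)

# The two (symmetric) factorizations of the joint over (x, z):
log_joint_i = log_normal(x, 2.0, 1.0) + log_normal(z_from_x, 0.5 * x, 0.3)
log_joint_ii = log_normal(z, 0.0, 1.0) + log_normal(x_from_z, 2.0 * z, 0.5)
print(log_joint_i, log_joint_ii)
```

Training then pushes these two joint distributions toward each other, rather than matching only one conditional.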
1 code implementation • NeurIPS 2017 • Zhe Gan, Liqun Chen, Wei-Yao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, Lawrence Carin
The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, trained to distinguish real data pairs from two kinds of fake data pairs.
2 code implementations • 6 Sep 2017 • Liqun Chen, Shuyang Dai, Yunchen Pu, Chunyuan Li, Qinliang Su, Lawrence Carin
A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence.
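The symmetric Kullback-Leibler divergence is the ordinary KL summed over both argument orders, KL(p‖q) + KL(q‖p). A minimal sketch for 1-D Gaussians (closed-form KL; illustrative values, not the paper's model) shows the asymmetry of KL and the symmetry of the summed form:

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    # KL( N(m1, s1^2) || N(m2, s2^2) ), closed form for 1-D Gaussians
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def sym_kl(m1, s1, m2, s2):
    # Symmetric KL: sum of the divergence in both directions
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)

print(kl_gauss(0, 1, 1, 2))   # differs from kl_gauss(1, 2, 0, 1): KL is asymmetric
print(sym_kl(0, 1, 1, 2))     # equals sym_kl(1, 2, 0, 1): invariant to swapping
```

Using the symmetrized divergence penalizes mismatch in both directions between the two joint distributions, rather than privileging one.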
5 code implementations • NeurIPS 2017 • Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, Lawrence Carin
We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching.
no code implementations • ICML 2018 • Changyou Chen, Chunyuan Li, Liqun Chen, Wenlin Wang, Yunchen Pu, Lawrence Carin
Distinct from normalizing flows and GANs, continuous-time flows (CTFs) can be adopted to achieve the above two goals in one framework, with theoretical guarantees.
no code implementations • NeurIPS 2017 • Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, Lawrence Carin
A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent.
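Stein variational gradient descent (SVGD), the building block here, transports a set of particles toward a target density by combining a kernel-weighted score term with a repulsive kernel-gradient term. A minimal 1-D sketch (standalone SVGD on a toy Gaussian target, not the paper's VAE application; kernel bandwidth and step size are assumed values):

```python
import numpy as np

def grad_log_p(x):
    # Score function of the toy target N(3, 1)
    return -(x - 3.0)

def svgd_step(x, step=0.1, h=1.0):
    # One SVGD update with an RBF kernel of bandwidth h
    diff = x[:, None] - x[None, :]          # diff[j, i] = x_j - x_i
    k = np.exp(-diff**2 / (2 * h))          # k(x_j, x_i)
    grad_k = -diff / h * k                  # d k(x_j, x_i) / d x_j  (repulsion)
    phi = (k * grad_log_p(x)[:, None] + grad_k).mean(axis=0)
    return x + step * phi

rng = np.random.default_rng(1)
particles = rng.normal(size=50)             # initialize near N(0, 1)
for _ in range(500):
    particles = svgd_step(particles)
print(particles.mean(), particles.std())    # drifts toward the target N(3, 1)
```

The attractive score term pulls particles toward high-density regions, while the kernel-gradient term keeps them spread out, so the ensemble approximates the target rather than collapsing to its mode.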
no code implementations • 11 Jan 2017 • Xin Yuan, Yunchen Pu, Lawrence Carin
During reconstruction and testing, we project the upper-layer dictionary to the data level, so only a single-layer deconvolution is required.
1 code implementation • 13 Dec 2016 • Yin Xian, Yunchen Pu, Zhe Gan, Liang Lu, Andrew Thompson
Its output feature is related to Cohen's class of time-frequency distributions.
no code implementations • 8 Dec 2016 • Andrew Stevens, Yunchen Pu, Yannan Sun, Greg Spell, Lawrence Carin
A multi-way factor analysis model is introduced for tensor-variate data of any order.
no code implementations • ACL 2017 • Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin
Recurrent neural networks (RNNs) have shown promising performance for language modeling.
no code implementations • EMNLP 2017 • Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, Lawrence Carin
We propose a new encoder-decoder approach to learn distributed sentence representations that are applicable to multiple purposes.
1 code implementation • CVPR 2017 • Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, Li Deng
The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag.
no code implementations • 23 Nov 2016 • Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin
Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features.
no code implementations • NeurIPS 2016 • Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, Lawrence Carin
A novel variational autoencoder is developed to model images, as well as associated labels or captions.
no code implementations • CVPR 2016 • Chunyuan Li, Andrew Stevens, Changyou Chen, Yunchen Pu, Zhe Gan, Lawrence Carin
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision.
no code implementations • 23 Dec 2015 • Yunchen Pu, Xin Yuan, Andrew Stevens, Chunyuan Li, Lawrence Carin
A deep generative model is developed for representation and analysis of images, based on a hierarchical convolutional dictionary-learning framework.
no code implementations • 15 Apr 2015 • Yunchen Pu, Xin Yuan, Lawrence Carin
A generative model is developed for deep (multi-layered) convolutional dictionary learning.
no code implementations • 18 Dec 2014 • Yunchen Pu, Xin Yuan, Lawrence Carin
A generative Bayesian model is developed for deep (multi-layer) convolutional dictionary learning.