Search Results for author: Miaoyun Zhao

Found 6 papers, 4 papers with code

Big Learning

no code implementations • 8 Jul 2022 • Yulai Cong, Miaoyun Zhao

Recent advances in big/foundation models reveal a promising path for deep learning, where the roadmap steadily moves from big data, to big models, to the newly introduced big learning.

BIG-bench Machine Learning • Self-Learning

Bridging Maximum Likelihood and Adversarial Learning via $α$-Divergence

no code implementations • 13 Jul 2020 • Miaoyun Zhao, Yulai Cong, Shuyang Dai, Lawrence Carin

Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary.
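
For orientation only (conventions vary, and the paper's exact definition may differ), one standard parameterization of the $\alpha$-divergence interpolates between the two KL directions associated with these two training paradigms:

$D_\alpha(p \,\|\, q) = \frac{1}{\alpha(\alpha-1)} \left( \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx - 1 \right),$

which recovers $\mathrm{KL}(p \,\|\, q)$ as $\alpha \to 1$ (the direction minimized by maximum likelihood) and $\mathrm{KL}(q \,\|\, p)$ as $\alpha \to 0$ (the mode-seeking direction often associated with adversarial learning).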

GO Hessian for Expectation-Based Objectives

1 code implementation • 16 Jun 2020 • Yulai Cong, Miaoyun Zhao, Jianqiao Li, Junya Chen, Lawrence Carin

An unbiased low-variance gradient estimator, termed GO gradient, was proposed recently for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})} [f(\boldsymbol{y})]$, where the random variable (RV) $\boldsymbol{y}$ may be drawn from a stochastic computation graph with continuous (non-reparameterizable) internal nodes and continuous/discrete leaves.
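
As a point of reference (this is not the GO estimator itself), the classical score-function (REINFORCE) estimator below is the standard unbiased baseline whose high variance GO-style gradients aim to reduce; the exponential choice of $q_{\boldsymbol{\gamma}}$ is an assumption made purely for illustration:

import numpy as np

# Score-function (REINFORCE) estimator of grad_gamma E_{q_gamma(y)}[f(y)].
# Illustration only: q_gamma is taken to be Exponential(rate=gamma), so
# log q(y; gamma) = log(gamma) - gamma * y and d/dgamma log q = 1/gamma - y.
def score_function_gradient(gamma, f, n_samples=100_000):
    y = np.random.exponential(scale=1.0 / gamma, size=n_samples)
    score = 1.0 / gamma - y
    return np.mean(f(y) * score)  # unbiased Monte Carlo gradient estimate

# Sanity check: f(y) = y^2 gives E[f] = 2 / gamma^2, true gradient -4 / gamma^3.
gamma = 2.0
print(score_function_gradient(gamma, lambda y: y ** 2), -4.0 / gamma ** 3)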

GAN Memory with No Forgetting

1 code implementation • NeurIPS 2020 • Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin

As a fundamental issue in lifelong learning, catastrophic forgetting is directly caused by inaccessible historical data; accordingly, if that data (information) were memorized perfectly, no forgetting would be expected.

On Leveraging Pretrained GANs for Generation with Limited Data

1 code implementation • ICML 2020 • Miaoyun Zhao, Yulai Cong, Lawrence Carin

Using natural-image generation as a demonstration, we reveal that the low-level filters (those closest to the observations) of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually distinct target domain with limited training data.

Image Generation • Transfer Learning
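
A minimal PyTorch-style sketch of that transfer recipe (not the paper's released code; `low_level` is a hypothetical naming convention for the filters closest to the observations):

import torch

# Freeze the low-level filters of a pretrained network and fine-tune the
# remaining layers on the limited target-domain data.
def prepare_for_transfer(model, frozen_prefixes=("low_level",)):
    for name, param in model.named_parameters():
        # Parameters whose names match a frozen prefix stay fixed.
        param.requires_grad = not name.startswith(frozen_prefixes)
    return [p for p in model.parameters() if p.requires_grad]

# Usage (the pretrained generator object is assumed to exist):
# optimizer = torch.optim.Adam(prepare_for_transfer(pretrained_generator), lr=1e-4)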

GO Gradient for Expectation-Based Objectives

1 code implementation • ICLR 2019 • Yulai Cong, Miaoyun Zhao, Ke Bai, Lawrence Carin

Within many machine learning algorithms, a fundamental problem concerns efficient calculation of an unbiased gradient wrt parameters $\boldsymbol{\gamma}$ for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})} [f(\boldsymbol{y})]$.
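
For contrast with the score-function sketch above (again an illustration, not the GO gradient), the reparameterization (pathwise) estimator is low-variance but only applies when $q_{\boldsymbol{\gamma}}$ is reparameterizable, which is exactly the restriction GO-style gradients relax; the Gaussian $q_{\boldsymbol{\gamma}} = \mathcal{N}(\gamma, 1)$ is assumed purely for the example:

import numpy as np

# Pathwise (reparameterization) estimator: write y = gamma + eps with
# eps ~ N(0, 1), so d/dgamma E[f(y)] = E[f'(y)].
def pathwise_gradient(gamma, f_prime, n_samples=100_000):
    y = gamma + np.random.randn(n_samples)
    return np.mean(f_prime(y))

# Sanity check: f(y) = y^2 gives E[f] = gamma^2 + 1, true gradient 2 * gamma.
gamma = 1.5
print(pathwise_gradient(gamma, lambda y: 2.0 * y), 2.0 * gamma)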
