no code implementations • 8 Jul 2022 • Yulai Cong, Miaoyun Zhao
Recent advances in big/foundation models reveal a promising path for deep learning, where the roadmap steadily moves from big data, to big models, to the newly introduced big learning.
no code implementations • 13 Jul 2020 • Miaoyun Zhao, Yulai Cong, Shuyang Dai, Lawrence Carin
Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary.
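To make the stated complementarity concrete, here is a minimal sketch (illustrative only, not the paper's method) computing both training signals for a toy one-dimensional Gaussian generator; the mixing weight `lambda_adv` is a hypothetical knob, not from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1-D Gaussian "generator" with learnable mean/log-std: it admits an
# explicit likelihood, so both ML and adversarial objectives are available.
mu = nn.Parameter(torch.zeros(1))
log_std = nn.Parameter(torch.zeros(1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

x_real = torch.randn(64, 1) * 2.0 + 3.0            # data from N(3, 2^2)

# Maximum-likelihood signal: minimize the NLL of the real data.
nll = -torch.distributions.Normal(mu, log_std.exp()).log_prob(x_real).mean()

# Adversarial signal: reparameterized samples should fool the discriminator
# (the discriminator's own training step is omitted here).
x_fake = mu + log_std.exp() * torch.randn(64, 1)
g_adv = nn.functional.binary_cross_entropy_with_logits(
    disc(x_fake), torch.ones(64, 1))

# A hybrid objective could weight the two complementary signals.
lambda_adv = 0.1                                    # hypothetical mixing weight
(nll + lambda_adv * g_adv).backward()
```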
1 code implementation • 16 Jun 2020 • Yulai Cong, Miaoyun Zhao, Jianqiao Li, Junya Chen, Lawrence Carin
An unbiased low-variance gradient estimator, termed GO gradient, was proposed recently for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})} [f(\boldsymbol{y})]$, where the random variable (RV) $\boldsymbol{y}$ may be drawn from a stochastic computation graph with continuous (non-reparameterizable) internal nodes and continuous/discrete leaves.
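For context, the sketch below shows the generic score-function (REINFORCE) estimator for $\nabla_{\boldsymbol{\gamma}}\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})}[f(\boldsymbol{y})]$: the unbiased but typically high-variance baseline that low-variance estimators such as the GO gradient aim to improve on. This is illustrative code, not the paper's implementation.

```python
import torch

gamma = torch.tensor([0.5], requires_grad=True)   # logit of a Bernoulli
f = lambda y: (y - 0.3) ** 2                      # arbitrary objective

q = torch.distributions.Bernoulli(logits=gamma)
y = q.sample((1000,))                             # discrete: no reparameterization

# Score-function identity: grad E[f(y)] = E[f(y) * grad log q(y)].
surrogate = (f(y) * q.log_prob(y)).mean()
surrogate.backward()
print(gamma.grad)                                 # unbiased, but high-variance
```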
1 code implementation • NeurIPS 2020 • Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin
As a fundamental issue in lifelong learning, catastrophic forgetting is directly caused by inaccessible historical data; accordingly, if the data (information) were memorized perfectly, no forgetting would be expected.
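A minimal sketch of the replay idea this points at, assuming a hypothetical `memory_generator` that has memorized past data and a frozen `memory_labeler` for pseudo-labels; neither stand-in is the paper's GAN-memory module.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

classifier = nn.Linear(8, 2)                  # model being trained continually
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Hypothetical stand-ins for the learned memory (NOT the paper's modules).
memory_generator = nn.Linear(4, 8)            # maps noise to pseudo old-task data
memory_labeler = nn.Linear(8, 2)              # frozen copy of the old classifier
for p in memory_labeler.parameters():
    p.requires_grad_(False)

def train_step(x_new, y_new):
    # Replay: pseudo-data for past tasks substitutes for inaccessible history.
    with torch.no_grad():
        x_old = memory_generator(torch.randn(32, 4))
        y_old = memory_labeler(x_old).argmax(dim=1)
    x = torch.cat([x_new, x_old])
    y = torch.cat([y_new, y_old])
    loss = ce(classifier(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x_new, y_new = torch.randn(32, 8), torch.randint(0, 2, (32,))
print(train_step(x_new, y_new))
```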
1 code implementation • ICML 2020 • Miaoyun Zhao, Yulai Cong, Lawrence Carin
Using natural-image generation as a demonstration, we reveal that low-level filters (those close to the observations) of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually distinct target domain with limited training data.
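A minimal sketch of that transfer recipe, with an illustrative toy architecture (real GAN stacks differ): freeze the filters closest to the observations, i.e., the generator's output side and the discriminator's input side, and fine-tune the remaining layers on the small target-domain set.

```python
import torch.nn as nn

# Toy DCGAN-style stacks; illustrative only, not the paper's networks.
G = nn.Sequential(                                 # latent -> image
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)
D = nn.Sequential(                                 # image -> realness score
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, 2, 1),
)

def freeze(module):
    for p in module.parameters():
        p.requires_grad_(False)

# "Low-level" means close to the observations: the generator's output side
# and the discriminator's input side. Freeze those; fine-tune the rest.
freeze(G[2])    # last conv-transpose of G (produces pixels)
freeze(D[0])    # first conv of D (consumes pixels)

finetune_params = [p for p in list(G.parameters()) + list(D.parameters())
                   if p.requires_grad]
```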
1 code implementation • ICLR 2019 • Yulai Cong, Miaoyun Zhao, Ke Bai, Lawrence Carin
Within many machine learning algorithms, a fundamental problem concerns the efficient calculation of an unbiased gradient with respect to parameters $\boldsymbol{\gamma}$ for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}}(\boldsymbol{y})}[f(\boldsymbol{y})]$.
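For contrast with the non-reparameterizable setting the GO gradient targets, here is a minimal sketch of the standard reparameterization (pathwise) estimator, which applies when $\boldsymbol{y}$ can be written as a differentiable transform of parameter-free noise; illustrative code, not the paper's.

```python
import torch

gamma = torch.tensor([0.0, 0.0], requires_grad=True)   # [mu, log_sigma]
f = lambda y: (y - 1.0) ** 2

# Reparameterize: y = mu + sigma * eps with parameter-free noise eps,
# so the gradient flows through the sample itself.
eps = torch.randn(1000)
y = gamma[0] + gamma[1].exp() * eps
f(y).mean().backward()
print(gamma.grad)                                       # pathwise gradient estimate
```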