Search Results for author: Yucen Luo

Found 13 papers, 8 papers with code

Iterative Teaching by Data Hallucination

1 code implementation 31 Oct 2022 Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf

We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a pool of finite samples), which greatly limits the teacher's capability.

Hallucination
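
For orientation, the pool-based setting above is often illustrated with a greedy "omniscient teacher" loop: at each round the teacher scans the finite pool and feeds the example whose gradient step moves the learner closest to the target parameters. A minimal sketch for a linear learner with squared loss; the names (pool_X, w_star) and the greedy criterion are illustrative assumptions from earlier iterative-teaching work, not this paper's hallucination method.

```python
import numpy as np

def teach(pool_X, pool_y, w_star, w0, lr=0.1, rounds=50):
    """Greedy pool-based iterative teaching for a linear learner
    with squared loss. Illustrative sketch only."""
    w = w0.copy()
    for _ in range(rounds):
        best, best_dist = None, np.inf
        for x, y in zip(pool_X, pool_y):
            # One SGD step on (x, y) for the squared loss 0.5*(w.x - y)^2.
            w_next = w - lr * (w @ x - y) * x
            d = np.linalg.norm(w_next - w_star)  # teacher knows w_star
            if d < best_dist:
                best, best_dist = (x, y), d
        x, y = best
        w = w - lr * (w @ x - y) * x  # feed the chosen example
    return w
```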

Spectral Representation Learning for Conditional Moment Models

no code implementations 29 Oct 2022 Ziyu Wang, Yucen Luo, Yueru Li, Jun Zhu, Bernhard Schölkopf

For nonparametric conditional moment models, efficient estimation often relies on preimposed conditions on various measures of ill-posedness of the hypothesis space, which are hard to validate when flexible models are used.

Causal Inference Representation Learning
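
For context, the conditional moment models in question are usually written as the restriction below, with nonparametric instrumental-variable (NPIV) regression as the canonical special case; this is the textbook formulation rather than notation taken from the paper:

```latex
% General conditional moment restriction on a hypothesis h:
\mathbb{E}\left[\, \rho(Y, X; h) \mid Z \,\right] = 0 \quad \text{a.s.}

% NPIV regression as a special case:
\mathbb{E}\left[\, Y - h(X) \mid Z \,\right] = 0 .
```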

Learning Counterfactually Invariant Predictors

1 code implementation 20 Jul 2022 Francesco Quinzan, Cecilia Casolo, Krikamol Muandet, Yucen Luo, Niki Kilbertus

Notions of counterfactual invariance (CI) have proven essential for predictors that are fair, robust, and generalizable in the real world.

Counterfactual Object Recognition
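
A common formalization of counterfactual invariance, in standard structural-causal-model notation (the notation is an assumption here, not copied from the paper): a predictor f is counterfactually invariant with respect to a variable A if intervening on A leaves its output unchanged,

```latex
f\big(X_{A \leftarrow a}(u)\big) \;=\; f\big(X_{A \leftarrow a'}(u)\big)
\qquad \text{for all } a, a' \text{ and almost all units } u,
```

where X_{A←a}(u) denotes the counterfactual value of X for unit u had A been set to a.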

SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models

no code implementations ICLR 2020 Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen

Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.
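
The unbiased estimator the title refers to is usually built by Russian-roulette debiasing of the telescoping sequence of IWAE bounds; the sketch below follows the general construction, so check the paper for the exact conditions:

```latex
% IWAE bound with k importance samples, z_i \sim q(z \mid x):
\mathrm{IWAE}_k = \mathbb{E}\!\left[ \log \frac{1}{k} \sum_{i=1}^{k}
  \frac{p(x, z_i)}{q(z_i \mid x)} \right],
\qquad \Delta_k = \mathrm{IWAE}_{k+1} - \mathrm{IWAE}_k .

% Russian-roulette estimator: draw a random truncation K \sim p(K), then
\mathrm{SUMO} = \mathrm{IWAE}_1 + \sum_{k=1}^{K} \frac{\Delta_k}{\Pr(K \ge k)},
\qquad \mathbb{E}[\mathrm{SUMO}] = \log p(x).
```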

Measuring Uncertainty through Bayesian Learning of Deep Neural Network Structure

1 code implementation 22 Nov 2019 Zhijie Deng, Yucen Luo, Jun Zhu, Bo Zhang

Bayesian neural networks (BNNs) augment deep networks with uncertainty quantification by Bayesian treatment of the network weights.

Bayesian Inference Neural Architecture Search +2
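
To make the uncertainty quantification concrete: given any way of drawing weights from the (approximate) posterior, the predictive distribution is a Monte Carlo average over samples. A minimal sketch, assuming hypothetical callables sample_posterior_weights and forward:

```python
import numpy as np

def predictive(x, sample_posterior_weights, forward, n_samples=100):
    """Monte Carlo approximation of p(y | x, D):
    average the network's softmax output over posterior weight samples."""
    probs = np.stack([forward(x, sample_posterior_weights())
                      for _ in range(n_samples)])
    mean = probs.mean(axis=0)                     # predictive probabilities
    # Predictive entropy as a simple total-uncertainty score.
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=-1)
    return mean, entropy
```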

Deep Bayesian Structure Networks

1 code implementation 25 Sep 2019 Zhijie Deng, Yucen Luo, Jun Zhu, Bo Zhang

Bayesian neural networks (BNNs) introduce uncertainty estimation to deep networks by performing Bayesian inference on network weights.

Bayesian Inference Neural Architecture Search +1

A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels

no code implementations 20 Sep 2019 Yucen Luo, Jun Zhu, Tomas Pfister

Recently, deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance.

Learning with noisy labels
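
For illustration of the memorization point (this is the generic small-loss heuristic from the noisy-label literature, not a claim about this paper's specific baseline): networks tend to fit clean labels before noisy ones, so one simple defense is to update only on the lowest-loss fraction of each batch.

```python
import torch
import torch.nn.functional as F

def small_loss_step(model, optimizer, x, y, keep_ratio=0.7):
    """Train on the keep_ratio fraction of the batch with smallest loss,
    on the assumption that high-loss examples are more likely mislabeled."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(losses)))
    idx = torch.topk(-losses, k).indices       # k smallest losses
    loss = losses[idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```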

Cluster Alignment with a Teacher for Unsupervised Domain Adaptation

1 code implementation ICCV 2019 Zhijie Deng, Yucen Luo, Jun Zhu

Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.

Clustering Unsupervised Domain Adaptation
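
One common way to realize class-conditional alignment in this setting (a generic sketch, not necessarily this paper's loss): pseudo-label the target domain with a teacher and pull the per-class feature centroids of the two domains together.

```python
import torch

def centroid_alignment_loss(src_feat, src_y, tgt_feat, tgt_pseudo, n_classes):
    """Pull together per-class feature centroids of source and target.
    Target labels come from a teacher's pseudo-labels; illustrative only."""
    loss = src_feat.new_zeros(())
    for c in range(n_classes):
        s, t = src_feat[src_y == c], tgt_feat[tgt_pseudo == c]
        if len(s) and len(t):
            loss = loss + (s.mean(0) - t.mean(0)).pow(2).sum()
    return loss / n_classes
```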

Semi-crowdsourced Clustering with Deep Generative Models

1 code implementation NeurIPS 2018 Yucen Luo, Tian Tian, Jiaxin Shi, Jun Zhu, Bo Zhang

We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset.

Clustering Variational Inference
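
A standard way to model such noisy pairwise annotations on top of latent cluster assignments is a two-coin relational likelihood (a generic sketch; the paper's exact model may differ):

```latex
% Latent cluster c_i per item, DGM features x_i \mid z_i, z_i \mid c_i.
% A noisy must-link / cannot-link annotation L_{ij} \in \{0, 1\}:
\Pr(L_{ij} = 1 \mid c_i = c_j) = \alpha, \qquad
\Pr(L_{ij} = 1 \mid c_i \ne c_j) = \beta, \qquad \alpha > \beta,
```

where α and β capture annotator reliability.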

Smooth Neighbors on Teacher Graphs for Semi-supervised Learning

1 code implementation CVPR 2018 Yucen Luo, Jun Zhu, Mengxi Li, Yong Ren, Bo Zhang

In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models.
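
Roughly, the graph and the loss it induces can be sketched as follows (based on the usual description of SNTG; the margin form is an assumption): connect two samples when the teacher predicts the same class, pull connected features together, and push disconnected ones at least a margin apart.

```python
import torch

def sntg_loss(features, teacher_logits, margin=1.0):
    """Sketch of an SNTG-style neighbor loss on a batch.
    W[i, j] = 1 iff the teacher predicts the same class for i and j."""
    pseudo = teacher_logits.argmax(dim=1)
    W = (pseudo[:, None] == pseudo[None, :]).float()
    dist = torch.cdist(features, features)        # pairwise feature distances
    pull = W * dist.pow(2)                        # same class: contract
    push = (1 - W) * torch.clamp(margin - dist, min=0).pow(2)
    return (pull + push).mean()
```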

ZhuSuan: A Library for Bayesian Deep Learning

1 code implementation 18 Sep 2017 Jiaxin Shi, Jianfei Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, Yuhao Zhou

In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning.

Probabilistic Programming Regression
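
Rather than guess at ZhuSuan's actual API, here is the kind of model such a library expresses declaratively, worked out in plain NumPy: a conjugate Bayesian linear regression whose posterior and predictive variance have closed forms. Purely illustrative.

```python
import numpy as np

# The kind of model a Bayesian-deep-learning library expresses:
#   w ~ N(0, alpha^-1 I),   y | x, w ~ N(w.x, beta^-1)
def posterior(X, y, alpha=1.0, beta=25.0):
    S_inv = alpha * np.eye(X.shape[1]) + beta * X.T @ X
    S = np.linalg.inv(S_inv)          # posterior covariance of w
    m = beta * S @ X.T @ y            # posterior mean of w
    return m, S

def predict(x, m, S, beta=25.0):
    mean = x @ m                      # predictive mean
    var = 1.0 / beta + x @ S @ x      # predictive variance
    return mean, var
```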

Conditional Generative Moment-Matching Networks

no code implementations NeurIPS 2016 Yong Ren, Jialian Li, Yucen Luo, Jun Zhu

Maximum mean discrepancy (MMD) has been successfully applied to learn deep generative models for characterizing a joint distribution of variables via kernel mean embedding.
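
The underlying quantity is simple to compute; below is the standard unbiased estimate of squared MMD with an RBF kernel (the textbook definition, not code from the paper). The conditional variant replaces these plain sample means with conditional kernel mean embeddings.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared MMD between samples X and Y."""
    Kxx, Kyy, Kxy = rbf(X, X, gamma), rbf(Y, Y, gamma), rbf(X, Y, gamma)
    m, n = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)        # drop i == j terms
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1)) \
        - 2.0 * Kxy.mean()
```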
