1 code implementation • 27 Mar 2024 • Yaxin Fang, Faming Liang
In such datasets, the data dimension can be extremely high, and the underlying data generation process can be unknown and highly nonlinear.
no code implementations • 19 Mar 2024 • Frank Shih, Faming Liang
Reinforcement learning (RL) tackles sequential decision-making problems by creating agents that interact with their environment.
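The agent-environment interaction loop that RL formalizes can be sketched in a few lines. This is a generic illustration, not the method of the paper above; the names (`run_episode`, `toy_step`, the greedy policy) are hypothetical and the environment is a toy integer line where the reward penalizes distance from zero.

```python
def run_episode(policy, step_fn, init_state, horizon=10):
    """Roll out one episode: the agent picks actions, the
    environment returns the next state and a reward."""
    state, total = init_state, 0.0
    for _ in range(horizon):
        action = policy(state)                   # agent decides
        state, reward = step_fn(state, action)   # environment responds
        total += reward
    return total

# Toy environment: move on the integer line; reward is -|new state|.
def toy_step(state, action):
    new_state = state + action
    return new_state, -abs(new_state)

# A hand-written greedy policy that steps toward zero.
greedy = lambda s: -1 if s > 0 else (1 if s < 0 else 0)
total_reward = run_episode(greedy, toy_step, init_state=5)  # -10.0
```

In actual RL the policy is not hand-written but learned from the rewards collected in loops like this one.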
1 code implementation • 23 Jun 2023 • Sehwan Kim, Qifan Song, Faming Liang
In the new formulation, the discriminator converges to a fixed point while the generator converges to a distribution at the Nash equilibrium.
no code implementations • 20 Nov 2022 • Wei Deng, Qian Zhang, Qi Feng, Faming Liang, Guang Lin
Notably, in big data scenarios, we obtain an appealing communication cost $O(P\log P)$ based on the optimal window size.
no code implementations • 9 Oct 2022 • Siqi Liang, Yan Sun, Faming Liang
Sufficient dimension reduction is a powerful tool for extracting the core information hidden in high-dimensional data and has many potentially important applications in machine learning.
1 code implementation • ICLR 2022 • Wei Deng, Siqi Liang, Botao Hao, Guang Lin, Faming Liang
We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions.
1 code implementation • 14 Jan 2022 • Yan Sun, Faming Liang
Deep neural networks suffer from many fundamental issues in machine learning.
1 code implementation • NeurIPS 2021 • Yan Sun, Wenjun Xiong, Faming Liang
Deep learning has powered recent successes of artificial intelligence (AI).
no code implementations • 29 Sep 2021 • Wei Deng, Qian Zhang, Qi Feng, Faming Liang, Guang Lin
Parallel tempering (PT), also known as replica exchange, is the go-to workhorse for simulations of multi-modal distributions.
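A minimal sketch of parallel tempering, assuming the textbook scheme (not the specific variant studied in the paper above): several Metropolis chains run at different temperatures, and adjacent chains periodically attempt to swap states, letting hot chains ferry the cold chain across energy barriers. All names and the bimodal test target are illustrative.

```python
import math
import random

def parallel_tempering(log_prob, temps, n_iter=5000, step=1.0, seed=0):
    """Parallel tempering: one Metropolis chain per temperature,
    with a swap attempt between a random adjacent pair each sweep."""
    rng = random.Random(seed)
    x = [0.0 for _ in temps]
    samples = []
    for _ in range(n_iter):
        # Within-chain Metropolis updates, tempered by T.
        for i, T in enumerate(temps):
            prop = x[i] + rng.gauss(0, step)
            if rng.random() < math.exp(min(0.0, (log_prob(prop) - log_prob(x[i])) / T)):
                x[i] = prop
        # Swap move: accept with the standard replica-exchange ratio.
        i = rng.randrange(len(temps) - 1)
        delta = (1/temps[i] - 1/temps[i+1]) * (log_prob(x[i+1]) - log_prob(x[i]))
        if rng.random() < math.exp(min(0.0, delta)):
            x[i], x[i+1] = x[i+1], x[i]
        samples.append(x[0])  # keep only the cold chain's state
    return samples

# Bimodal target: mixture of N(-3, 1) and N(3, 1), unnormalized.
def log_prob(v):
    return math.log(math.exp(-0.5*(v + 3)**2) + math.exp(-0.5*(v - 3)**2))

draws = parallel_tempering(log_prob, temps=[1.0, 2.0, 5.0])
```

A single Metropolis chain at temperature 1 would typically get stuck in one mode here; with the hot chains and swaps, the cold-chain samples visit both modes.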
1 code implementation • 25 Feb 2021 • Yan Sun, Qifan Song, Faming Liang
Deep learning has been the engine powering many successes of data science.
no code implementations • 17 Dec 2020 • Fabrizio Cicala, Weicheng Wang, Tianhao Wang, Ninghui Li, Elisa Bertino, Faming Liang, Yang Yang
Many proximity-based contact tracing (PCT) protocols have been proposed and deployed to combat the spread of COVID-19.
2 code implementations • NeurIPS 2020 • Wei Deng, Guang Lin, Faming Liang
We propose an adaptively weighted stochastic gradient Langevin dynamics (SGLD) algorithm, called contour stochastic gradient Langevin dynamics (CSGLD), for Bayesian learning in big-data statistics.
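For orientation, here is the plain SGLD update that CSGLD builds on: a gradient step on a (possibly noisy, minibatch-style) energy estimate plus injected Gaussian noise scaled as sqrt(2*eps). This sketch is the base algorithm only; CSGLD's adaptive contour weighting is not shown, and all names and the toy Gaussian target are illustrative.

```python
import math
import random

def sgld(grad_U, theta0, eps=0.01, n_iter=20000, seed=1):
    """Stochastic gradient Langevin dynamics: noisy gradient descent
    on the energy U plus sqrt(2*eps)-scaled Gaussian noise, whose
    iterates approximately sample exp(-U)."""
    rng = random.Random(seed)
    theta, samples = theta0, []
    for _ in range(n_iter):
        theta = theta - eps * grad_U(theta, rng) + math.sqrt(2 * eps) * rng.gauss(0, 1)
        samples.append(theta)
    return samples

# Target N(0, 1): U(theta) = theta^2 / 2, so grad U = theta.
# Gaussian perturbation mimics a stochastic minibatch gradient.
noisy_grad = lambda t, rng: t + rng.gauss(0, 0.1)

draws = sgld(noisy_grad, theta0=5.0)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Despite the noisy gradients and the far-off start at 5.0, the empirical mean and variance of the draws approach 0 and 1, the moments of the target.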
1 code implementation • ICLR 2021 • Wei Deng, Qi Feng, Georgios Karagiannis, Guang Lin, Faming Liang
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating the convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration.
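A simplified two-chain reSGLD sketch, to show where the correction enters: each chain runs SGLD at its own temperature, and swap attempts compare noisy energy estimates minus a correction term. Here the correction is just a constant passed in for illustration; the paper derives its proper (and, as the abstract notes, often excessively large) form from the variance of the energy estimator. The double-well target and all names are assumptions of this sketch.

```python
import math
import random

def re_sgld(grad_U, U_hat, taus=(1.0, 10.0), eps=0.005, correction=0.45,
            n_iter=20000, seed=2):
    """Two SGLD chains at temperatures taus[0] < taus[1]; swap tests
    use noisy energy estimates U_hat minus a bias correction."""
    rng = random.Random(seed)
    x = [4.0, 4.0]        # both chains start in the right-hand well
    cold = []
    for _ in range(n_iter):
        for i, tau in enumerate(taus):
            x[i] += -eps * grad_U(x[i]) + math.sqrt(2 * eps * tau) * rng.gauss(0, 1)
        # Swap is favored when the cold chain sits at higher energy than
        # the hot one; the correction compensates for noise in U_hat.
        delta = (1/taus[0] - 1/taus[1]) * (U_hat(x[0], rng) - U_hat(x[1], rng) - correction)
        if rng.random() < math.exp(min(0.0, delta)):
            x[0], x[1] = x[1], x[0]
        cold.append(x[0])
    return cold

# Double-well energy U(x) = (x^2 - 4)^2 / 4 with minima at +/-2.
U = lambda v: (v * v - 4) ** 2 / 4
grad_U = lambda v: v * (v * v - 4)
U_hat = lambda v, rng: U(v) + rng.gauss(0, 0.5)   # noisy energy estimate

cold_draws = re_sgld(grad_U, U_hat)
```

The hot chain crosses the energy barrier freely and the swaps pass those crossings down, so the cold-chain samples reach both wells.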
no code implementations • 20 Sep 2020 • Sehwan Kim, Qifan Song, Faming Liang
Bayesian deep learning offers a principled way to address many issues concerning the safety of artificial intelligence (AI), such as model uncertainty, model interpretability, and prediction bias.
2 code implementations • ICML 2020 • Wei Deng, Qi Feng, Liyao Gao, Faming Liang, Guang Lin
Replica exchange Monte Carlo (reMC), also known as parallel tempering, is an important technique for accelerating the convergence of conventional Markov chain Monte Carlo (MCMC) algorithms.
1 code implementation • 7 Feb 2020 • Qifan Song, Yan Sun, Mao Ye, Faming Liang
Stochastic gradient Markov chain Monte Carlo (MCMC) algorithms have received much attention in Bayesian computing for big data problems, but they are only applicable to a small class of problems for which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters.
1 code implementation • NeurIPS 2019 • Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
We propose a novel adaptive empirical Bayesian method for sparse deep learning, where the sparsity is ensured via a class of self-adaptive spike-and-slab priors.
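To illustrate the spike-and-slab idea, here is a common Gaussian mixture form of such a prior: a narrow "spike" concentrated at zero (a pruned weight) mixed with a wide "slab" (an active weight). This is a generic sketch with illustrative hyperparameters; the paper's self-adaptive prior family and its empirical-Bayes updates are not reproduced here.

```python
import math

def normal_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def spike_slab_density(theta, lam=0.5, var_spike=1e-4, var_slab=1.0):
    """Mixture prior: lam * slab + (1 - lam) * spike."""
    return lam * normal_pdf(theta, var_slab) + (1 - lam) * normal_pdf(theta, var_spike)

def slab_probability(theta, lam=0.5, var_spike=1e-4, var_slab=1.0):
    """Posterior probability that theta was drawn from the slab,
    i.e. that the weight is 'active' rather than pruned."""
    slab = lam * normal_pdf(theta, var_slab)
    spike = (1 - lam) * normal_pdf(theta, var_spike)
    return slab / (slab + spike)
```

Weights near zero are assigned almost entirely to the spike, while clearly nonzero weights are assigned to the slab; thresholding this probability gives a sparse network.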
no code implementations • ICLR 2019 • Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables.
1 code implementation • 22 Feb 2018 • Qiwei Li, Xinlei Wang, Faming Liang, Guanghua Xiao
This statistical methodology not only presents a new model for characterizing spatial correlations in a multi-type spatial point pattern, but also provides a new perspective for understanding the role of cell-cell interactions in cancer progression.