no code implementations • 28 Nov 2023 • Yuqi Wang, Aarzu Gupta, David Carpenter, Trey Mullikin, Zachary J. Reitman, Scott Floyd, John Kirkpatrick, Joseph K. Salama, Paul W. Sperduto, Jian-Guo Liu, Mustafa R. Bashir, Kyle J. Lafata
We evaluated our method on multiple clinically relevant endpoints, including time to intracranial progression (ICP), progression-free survival (PFS) after SRS, overall survival (OS), and time to ICP and/or death (ICPD), using a variety of statistical and non-statistical models, including CoxPH, conditional survival forest (CSF), and neural multi-task linear regression (NMTLR).
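Survival endpoints such as PFS and OS are commonly summarized with the Kaplan-Meier estimator before fitting models like CoxPH. A minimal sketch of that estimator on hypothetical toy data (not the authors' code or data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g. progression or death) occurred,
              0 if the subject was censored at that time
    Returns (time, S(t)) pairs at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        at_t = [(tt, e) for tt, e in data if tt == t]
        deaths = sum(e for _, e in at_t)
        if deaths:
            # S(t) is multiplied by (1 - d_t / n_t) at each event time
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)  # subjects leaving the risk set at t
    return curve

# Toy cohort: events at t = 1, 2, 4; one subject censored at t = 3.
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

The censored subject at t = 3 reduces the risk set without dropping the curve, which is why S(t) steps only at event times.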
no code implementations • 17 Jan 2023 • Lei Li, Jian-Guo Liu, Yuliang Wang
We consider the geometric ergodicity of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm in nonconvex settings.
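The SGLD update itself is simple: a (stochastic) gradient step on the potential plus Gaussian noise of variance 2η. A minimal sketch on a hypothetical 1D quadratic potential (not the paper's nonconvex setting):

```python
import math
import random

def sgld_step(theta, grad_u, eta, rng):
    """One SGLD update: gradient descent on U plus noise of variance 2*eta."""
    return theta - eta * grad_u(theta) + math.sqrt(2.0 * eta) * rng.gauss(0.0, 1.0)

# Illustrative potential U(theta) = theta^2 / 2, whose stationary law is N(0, 1).
grad_u = lambda th: th

rng = random.Random(0)
theta, eta = 5.0, 0.05
samples = []
for _ in range(5000):
    theta = sgld_step(theta, grad_u, eta, rng)
    samples.append(theta)
```

After a burn-in, the iterates behave approximately like draws from exp(-U); the step size η trades discretization bias against mixing speed, which is where ergodicity analyses like this paper's come in.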
no code implementations • 7 Feb 2021 • Jian-Guo Liu, Xiangsheng Xu
In this paper we study a cross-diffusion system whose coefficient matrix is non-symmetric and degenerate.
Analysis of PDEs
no code implementations • 22 May 2020 • Yuan Gao, Jian-Guo Liu, Nan Wu
To construct an efficient and stable approximation for the Langevin dynamics on $\mathcal{N}$, we leverage the corresponding Fokker-Planck equation on the manifold $\mathcal{N}$ in terms of the reaction coordinates $\mathsf{y}$.
no code implementations • 14 Feb 2020 • Jian-Guo Liu, Abdul-Majid Wazwaz
Under investigation is a new (3+1)-dimensional Boiti-Leon-Manna-Pempinelli equation.
Pattern Formation and Solitons Mathematical Physics Exactly Solvable and Integrable Systems
no code implementations • 9 Feb 2019 • Lei Li, Yingzhou Li, Jian-Guo Liu, Zibu Liu, Jianfeng Lu
We propose in this work RBM-SVGD, a stochastic version of the Stein Variational Gradient Descent (SVGD) method, for efficiently sampling from a given probability measure; it is thus useful for Bayesian inference.
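For context, the deterministic SVGD update transports a set of particles using a kernel-smoothed score plus a repulsive kernel-gradient term. A minimal full-batch sketch with an RBF kernel and a hypothetical standard-normal target (not the paper's RBM-SVGD variant, which subsamples this double sum):

```python
import math

def svgd_step(xs, grad_logp, eps=0.1, h=1.0):
    """One SVGD update with RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h))."""
    n = len(xs)
    new = []
    for x in xs:
        phi = 0.0
        for xj in xs:
            k = math.exp(-(xj - x) ** 2 / (2.0 * h))
            # attractive term: kernel-weighted score of the target
            # repulsive term: kernel gradient, keeping particles spread out
            phi += k * grad_logp(xj) + k * (x - xj) / h
        new.append(x + eps * phi / n)
    return new

# Illustrative target: standard normal, so grad log p(x) = -x.
particles = [2.0, 3.0, 4.0, 5.0]
for _ in range(300):
    particles = svgd_step(particles, lambda x: -x)
```

The update is deterministic; the stochasticity in RBM-SVGD comes from estimating the sum over j with a random batch, which cuts the per-step cost from O(n^2) kernel evaluations.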
no code implementations • 2 Feb 2019 • Yuanyuan Feng, Tingran Gao, Lei Li, Jian-Guo Liu, Yulong Lu
Diffusion approximation provides a weak approximation for stochastic gradient descent algorithms over a finite time horizon.
no code implementations • 22 May 2017 • Wenqing Hu, Chris Junchi Li, Lei Li, Jian-Guo Liu
In addition, we discuss the effect of batch size for deep neural networks, and we find that a small batch size helps SGD algorithms escape unstable stationary points and sharp minimizers.
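The mechanism behind the batch-size effect is that minibatch-gradient noise variance scales like 1/B, so smaller batches inject larger perturbations. A hypothetical numerical illustration of this scaling (not the paper's experiment; the "per-example gradients" are synthetic scalars):

```python
import random

rng = random.Random(0)

# Synthetic dataset: per-example "gradients" are noisy copies of a true gradient 1.0.
data_grads = [1.0 + rng.gauss(0.0, 1.0) for _ in range(10000)]

def minibatch_grad(batch_size):
    """Average gradient over a random minibatch (sampled with replacement)."""
    return sum(rng.choice(data_grads) for _ in range(batch_size)) / batch_size

def noise_variance(batch_size, trials=2000):
    """Empirical variance of the minibatch gradient estimator."""
    grads = [minibatch_grad(batch_size) for _ in range(trials)]
    m = sum(grads) / trials
    return sum((g - m) ** 2 for g in grads) / trials

var_small, var_large = noise_variance(1), noise_variance(16)
```

With per-example variance roughly 1, the B = 1 estimator should show about 16 times the noise of the B = 16 estimator; in the diffusion-approximation view, this larger noise is what pushes iterates out of sharp minimizers.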
1 code implementation • 28 Dec 2016 • Zhihua Ban, Jian-Guo Liu, Li Cao
Under this assumption, each pixel is modeled as a draw from a Gaussian mixture model (GMM) with unknown parameters.
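Such GMM parameters are typically fit with expectation-maximization. A minimal two-component 1D EM sketch on hypothetical intensity data (an illustration of the standard algorithm, not the authors' superpixel method):

```python
import math
import random

def em_gmm2(xs, iters=50):
    """EM for a two-component 1D Gaussian mixture."""
    mu = [min(xs), max(xs)]          # crude initialization at the data extremes
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2)) / sigma[k]
                 for k in range(2)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                     for r, x in zip(resp, xs)) / nk) + 1e-6
    return mu, sigma, pi

# Synthetic "pixel intensities" from two well-separated clusters.
rng = random.Random(0)
xs = ([rng.gauss(0.0, 1.0) for _ in range(300)] +
      [rng.gauss(10.0, 1.0) for _ in range(300)])
mu, sigma, pi = em_gmm2(xs)
```

On well-separated data the recovered means land near the true cluster centers; pixel labels then follow from the E-step responsibilities.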