Search Results for author: Yulong Lu

Found 16 papers, 1 paper with code

A Mean Field Analysis Of Deep ResNet And Beyond: Towards Provably Optimization Via Overparameterization From Depth

no code implementations ICML 2020 Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying

Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
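
As a quick illustration (not from the paper), the discrete model behind such continuum limits is a residual network with 1/L-scaled updates, whose forward pass becomes an Euler scheme for an ODE as the depth L grows; the tanh blocks and random weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def resnet_forward(x, weights):
    """Depth-scaled residual updates x <- x + (1/L) * tanh(x W_l).
    As L -> infinity these are Euler steps of an ODE, the discrete
    precursor of a continuum limit of deep ResNets."""
    L = len(weights)
    for W in weights:
        x = x + (1.0 / L) * np.tanh(x @ W)
    return x

d, L = 8, 100
weights = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(L)]
x0 = rng.normal(size=(1, d))
print(resnet_forward(x0, weights))
```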

Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian probability distributions

no code implementations 12 Feb 2024 Frank Cole, Yulong Lu

While score-based generative models (SGMs) have achieved remarkable success across a wide range of image generation tasks, their mathematical foundations are still limited.

Image Generation
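
For readers new to SGMs, the sampling primitive such analyses build on is Langevin dynamics driven by a score function. A minimal sketch, assuming the exact score of a 1-D Gaussian mixture stands in for a learned score network (all names and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def score(x, means=(-2.0, 2.0), sigma=0.5):
    """Exact score (gradient of log-density) of a two-component
    Gaussian mixture -- a stand-in for a learned score network."""
    w = np.stack([np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in means])
    w /= w.sum(axis=0)                      # posterior mixture weights
    grads = np.stack([-(x - m) / sigma ** 2 for m in means])
    return (w * grads).sum(axis=0)

def langevin_sample(n=1000, steps=500, eps=1e-2):
    """Unadjusted Langevin dynamics driven by the score function,
    the basic sampling primitive behind score-based generative models."""
    x = rng.normal(size=n)                  # start from a simple reference law
    for _ in range(steps):
        x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=n)
    return x

samples = langevin_sample()
print(samples.mean(), samples.std())        # roughly 0 and the mixture spread
```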

Optimal Deep Neural Network Approximation for Korobov Functions with respect to Sobolev Norms

no code implementations 8 Nov 2023 Yahong Yang, Yulong Lu

This paper establishes the nearly optimal rate of approximation for deep neural networks (DNNs) when applied to Korobov functions, effectively overcoming the curse of dimensionality.

Two-Scale Gradient Descent Ascent Dynamics Finds Mixed Nash Equilibria of Continuous Games: A Mean-Field Perspective

no code implementations 17 Dec 2022 Yulong Lu

Finding the mixed Nash equilibria (MNE) of a two-player zero-sum continuous game is an important and challenging problem in machine learning.
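
The dynamics studied here can be mimicked on a toy problem: noisy gradient descent ascent on particle clouds approximating the two mixed strategies, with the ascent player on a faster time scale. A minimal sketch under an illustrative convex-concave payoff (not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(2)

def two_scale_gda(n=500, steps=5000, eta=1e-3, tau=10.0, beta=10.0):
    """Noisy GDA on particle clouds for the toy convex-concave payoff
    f(x, y) = x*y + x^2/2 - y^2/2 (illustrative choice).  The max
    player runs on a tau-times faster time scale; the Gaussian noise
    corresponds to entropic regularization at temperature 1/beta."""
    x = rng.normal(size=n) + 2.0        # min player's particles
    y = rng.normal(size=n) - 2.0        # max player's particles
    for _ in range(steps):
        gx = x + y.mean()               # d/dx E_y[f(x, y)]
        gy = x.mean() - y               # d/dy E_x[f(x, y)]
        x = x - eta * gx + np.sqrt(2 * eta / beta) * rng.normal(size=n)
        y = y + tau * eta * gy + np.sqrt(2 * tau * eta / beta) * rng.normal(size=n)
    return x, y

x, y = two_scale_gda()
print(x.mean(), y.mean())   # both particle means approach the equilibrium at 0
```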

Transfer Learning Enhanced DeepONet for Long-Time Prediction of Evolution Equations

1 code implementation 9 Dec 2022 Wuzhe Xu, Yulong Lu, Li Wang

Deep operator network (DeepONet) has demonstrated great success in various learning tasks, including learning solution operators of partial differential equations.

Transfer Learning
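
For context, a DeepONet predicts G(u)(y) as an inner product between a branch network applied to sensor values of the input function u and a trunk network applied to the query point y. A minimal untrained sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp(sizes):
    """Random-weight MLP parameters (illustrative; a real DeepONet
    would train these)."""
    return [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                        # sensor count, latent width
branch = mlp([m, 64, p])             # encodes the input function u
trunk = mlp([1, 64, p])              # encodes the query location y

u_sensors = np.sin(np.linspace(0, np.pi, m))   # u sampled at m sensors
y = np.array([[0.3]])                          # query point

# DeepONet prediction: inner product of branch and trunk features
G_u_y = forward(branch, u_sensors[None, :]) @ forward(trunk, y).T
print(G_u_y.shape)  # (1, 1)
```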

Birth-death dynamics for sampling: Global convergence, approximations and their asymptotics

no code implementations 1 Nov 2022 Yulong Lu, Dejan Slepčev, Lihan Wang

Motivated by the challenge of sampling Gibbs measures with nonconvex potentials, we study a continuum birth-death dynamics.
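
A rough caricature of such a birth-death mechanism, layered on top of Langevin steps for a double-well potential: particles in overrepresented regions are killed and underrepresented ones duplicated, at rates estimated from a kernel density estimate. The bandwidth, rates, and jump rule below are illustrative simplifications, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(4)

def V(x):                              # nonconvex double-well potential
    return (x ** 2 - 1.0) ** 2

def grad_V(x):
    return 4.0 * x * (x ** 2 - 1.0)

def log_density_kde(x, h=0.2):
    """Kernel density estimate of log rho at the particle locations."""
    d = x[:, None] - x[None, :]
    return np.log(np.exp(-d ** 2 / (2 * h ** 2)).mean(axis=1) + 1e-12)

def birth_death_langevin(n=300, steps=600, eps=5e-3):
    x = rng.normal(size=n) - 1.0       # start stuck in the left well
    for _ in range(steps):
        # Langevin step targeting the Gibbs measure exp(-V)
        x = x - eps * grad_V(x) + np.sqrt(2 * eps) * rng.normal(size=n)
        # birth-death rate log(rho / pi) = log rho + V, recentred
        lam = log_density_kde(x) + V(x)
        lam = lam - lam.mean()
        # jump with probability 1 - exp(-|lam| eps): kill a particle in an
        # overcrowded region, or duplicate one from an undercrowded region
        for i in np.where(rng.random(n) < 1.0 - np.exp(-np.abs(lam) * eps))[0]:
            j = rng.integers(n)
            if lam[i] > 0:
                x[i] = x[j]            # death of i, birth of a copy of j
            else:
                x[j] = x[i]            # birth of a copy of i, death of j
    return x

x = birth_death_langevin()
print((x > 0).mean())                  # mass transported into the right well
```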

A Regularity Theory for Static Schrödinger Equations on $\mathbb{R}^d$ in Spectral Barron Spaces

no code implementations 25 Jan 2022 Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou

Spectral Barron spaces have received considerable interest recently, as they provide the natural function space for the approximation theory of two-layer neural networks with a dimension-free convergence rate.

On the Representation of Solutions to Elliptic PDEs in Barron Spaces

no code implementations NeurIPS 2021 Ziang Chen, Jianfeng Lu, Yulong Lu

Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.

A Priori Generalization Error Analysis of Two-Layer Neural Networks for Solving High Dimensional Schrödinger Eigenvalue Problems

no code implementations 4 May 2021 Jianfeng Lu, Yulong Lu

We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space.

A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations

no code implementations 5 Jan 2021 Jianfeng Lu, Yulong Lu, Min Wang

This paper concerns the a priori generalization analysis of the Deep Ritz Method (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for solving high-dimensional partial differential equations.
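
For orientation, the DRM replaces a PDE solve by minimizing a Monte Carlo estimate of the variational (Ritz) energy over network parameters. A minimal sketch of the loss for a Poisson-type problem on $[0,1]^d$, with an untrained two-layer ansatz and a crude boundary penalty (all choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

d = 10                                      # spatial dimension
W1 = rng.normal(size=(d, 64)) / np.sqrt(d)  # untrained two-layer ansatz
w2 = rng.normal(size=64) / 8.0

def u(x):
    return np.tanh(x @ W1) @ w2

def grad_u(x, h=1e-4):
    """Central finite-difference gradient of u at each sample point."""
    g = np.empty_like(x)
    for k in range(d):
        e = np.zeros(d)
        e[k] = h
        g[:, k] = (u(x + e) - u(x - e)) / (2 * h)
    return g

def deep_ritz_loss(n=4096, lam=100.0):
    """Monte Carlo estimate of the Ritz energy
    E(u) = int (1/2)|grad u|^2 - f*u dx  on [0,1]^d with f = 1,
    plus a penalty pushing u toward 0 on two faces of the boundary
    (a crude stand-in for the full boundary term)."""
    x = rng.random((n, d))                  # interior samples
    energy = (0.5 * (grad_u(x) ** 2).sum(axis=1) - u(x)).mean()
    xb = rng.random((n, d))
    xb[:, 0] = rng.integers(0, 2, size=n)   # points on faces x_1 = 0 or 1
    return energy + lam * (u(xb) ** 2).mean()

print(deep_ritz_loss())                     # value to be minimized over (W1, w2)
```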

A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

no code implementations NeurIPS 2020 Yulong Lu, Jianfeng Lu

In particular, the size of the neural network can grow exponentially in $d$ when the $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on $d$ at most polynomially.
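
As a reference point, the MMD discrepancy mentioned here has a simple closed-form sample estimate under a fixed kernel. A minimal sketch with a Gaussian kernel (bandwidth and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def mmd2(x, y, h=1.0):
    """Biased (V-statistic) estimate of squared MMD between sample sets
    x and y under a Gaussian kernel with bandwidth h."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * h ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

x = rng.normal(size=(500, 2))            # samples from a target distribution
y = rng.normal(size=(500, 2)) + 0.5      # samples from a shifted model
print(mmd2(x, y))                        # small but positive: the laws differ
```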

A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth

no code implementations 11 Mar 2020 Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying

Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.

A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth

no code implementations ICLR Workshop DeepDiffEq 2019 Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying

Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.

Accelerating Langevin Sampling with Birth-death

no code implementations 23 May 2019 Yulong Lu, Jianfeng Lu, James Nolen

A fundamental problem in Bayesian inference and statistical machine learning is to efficiently sample from multimodal distributions.

Bayesian Inference

Uniform-in-Time Weak Error Analysis for Stochastic Gradient Descent Algorithms via Diffusion Approximation

no code implementations 2 Feb 2019 Yuanyuan Feng, Tingran Gao, Lei Li, Jian-Guo Liu, Yulong Lu

Diffusion approximation provides a weak approximation of stochastic gradient descent algorithms over a finite time horizon.

Stochastic Optimization
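
The correspondence can be checked numerically on a toy quadratic objective: SGD iterates with step size $\eta$ are compared against an Euler-Maruyama simulation of the approximating SDE $dX = -X\,dt + \sigma\sqrt{\eta}\,dW$. A minimal sketch (all constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

eta, sigma, T, n = 0.05, 1.0, 5.0, 10000

# SGD on f(x) = x^2 / 2 with Gaussian gradient noise
x = np.full(n, 2.0)
for _ in range(int(T / eta)):
    x -= eta * (x + sigma * rng.normal(size=n))

# Diffusion approximation: dX = -X dt + sigma * sqrt(eta) dW,
# simulated by Euler-Maruyama on a finer time grid
dt = eta / 10
X = np.full(n, 2.0)
for _ in range(int(T / dt)):
    X += -X * dt + sigma * np.sqrt(eta) * np.sqrt(dt) * rng.normal(size=n)

# the two ensembles should match in distribution up to weak O(eta) error
print(x.mean(), X.mean())   # both near 2 * exp(-T)
print(x.var(), X.var())     # both near sigma^2 * eta / 2
```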

Scaling limit of the Stein variational gradient descent: the mean field regime

no code implementations 10 May 2018 Jianfeng Lu, Yulong Lu, James Nolen

We study an interacting particle system in $\mathbf{R}^d$ motivated by Stein variational gradient descent [Q. Liu and D. Wang, NIPS 2016], a deterministic algorithm for sampling from a given probability density with unknown normalization.
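
For reference, one SVGD step moves every particle along the kernelized Stein direction, which combines a kernel-weighted average of score evaluations with a repulsion term. A minimal sketch with an RBF kernel and a standard Gaussian target (step size and bandwidth are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

def svgd_step(x, grad_log_p, eps=0.1, h=0.5):
    """One Stein variational gradient descent update with an RBF kernel:
    phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    diff = x[:, None, :] - x[None, :, :]
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))     # k(x_a, x_b)
    gradK = -K[:, :, None] * diff / h ** 2              # grad_{x_a} k(x_a, x_b)
    phi = (K @ grad_log_p(x) + gradK.sum(axis=0)) / x.shape[0]
    return x + eps * phi

# target: standard Gaussian, so grad log p(x) = -x
x = rng.normal(size=(200, 2)) + 3.0
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(x.mean(axis=0), x.std(axis=0))   # particles approach N(0, I)
```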
