Search Results for author: Zhenghao Xu

Found 3 papers, 0 papers with code

Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult

no code implementations • 26 Oct 2023 • Yuqing Wang, Zhenghao Xu, Tuo Zhao, Molei Tao

This regularity, together with gradient descent using a large learning rate that favors flatter regions, results in these nontrivial dynamical behaviors: edge of stability, balancing, and catapult.
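
The balancing and catapult effects can be reproduced on a toy problem. Below is a minimal sketch, not taken from the paper: the two-parameter loss L(u, v) = (u·v - 1)²/2, the step size, and the initialization are all illustrative assumptions. Gradient descent with a large learning rate first lets the loss spike (catapult) and then drives |u| and |v| toward each other (balancing) as the iterates settle into a flatter region.

```python
# Illustrative sketch (not the paper's code): large-learning-rate gradient
# descent on L(u, v) = 0.5 * (u*v - 1)^2. The sharpness near a minimizer
# scales like u^2 + v^2, so GD is initially unstable, the loss spikes
# (catapult), and the iterates drift toward |u| ~ |v| (balancing).

def loss(u, v):
    return 0.5 * (u * v - 1.0) ** 2

u, v = 2.0, 0.4   # unbalanced initialization: u^2 + v^2 ~ 4.16
eta = 0.6         # large step size: eta * sharpness > 2 at initialization

for step in range(40):
    r = u * v - 1.0                            # residual
    u, v = u - eta * r * v, v - eta * r * u    # simultaneous GD update
    if step % 5 == 0:
        print(f"step {step:2d}  loss {loss(u, v):.4f}  "
              f"|u| {abs(u):.3f}  |v| {abs(v):.3f}")

# Expected qualitative behavior: the loss first rises above its initial
# value, then decays, and the final |u|, |v| end up far more balanced than
# the (2.0, 0.4) start, with u^2 + v^2 settling below 2/eta.
```

The step size is deliberately chosen so that eta times the initial sharpness exceeds 2, the divergence threshold for a quadratic loss; the nonlinearity is what turns that instability into a catapult rather than divergence.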

Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process with Uncertainty Quantification

no code implementations • 25 Oct 2023 • Zichong Li, Qunzhi Xu, Zhenghao Xu, Yajun Mei, Tuo Zhao, Hongyuan Zha

Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score-matching and offers uncertainty quantification for the predicted event time, location and mark by computing confidence regions over the generated samples.

Point Processes • Uncertainty Quantification
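
As a rough illustration of the uncertainty-quantification step, the sketch below forms a confidence interval for the predicted event time from generated samples via empirical quantiles. Everything here is an assumption for illustration: the `time_samples` array merely stands in for draws from a fitted model's predictive distribution, which the snippet does not construct.

```python
import numpy as np

# Illustrative sketch only: given samples of the next event time drawn from
# a fitted model's predictive distribution (here faked with a lognormal),
# form a (1 - alpha) confidence interval from empirical quantiles.
rng = np.random.default_rng(0)
time_samples = rng.lognormal(mean=1.0, sigma=0.5, size=2000)  # placeholder draws

alpha = 0.1
lo, hi = np.quantile(time_samples, [alpha / 2, 1 - alpha / 2])
print(f"point prediction (median): {np.median(time_samples):.3f}")
print(f"{100 * (1 - alpha):.0f}% confidence interval: [{lo:.3f}, {hi:.3f}]")

# For a 2-D event location, the same idea extends to a confidence region,
# e.g. taking coordinate-wise quantiles of the sampled locations to get a
# box covering the central (1 - alpha) mass.
```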

Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds

no code implementations • 25 Sep 2023 • Zhenghao Xu, Xiang Ji, Minshuo Chen, Mengdi Wang, Tuo Zhao

As a result, by properly choosing the network size and hyperparameters, NPMD can find an $\epsilon$-optimal policy with $\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})$ samples in expectation, where $\alpha\in(0, 1]$ indicates the smoothness of the environment.

Policy Gradient Methods • Reinforcement Learning (RL)
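
To make the rate concrete, the short sketch below evaluates $\epsilon^{-d/\alpha-2}$ while dropping constants and the polylogarithmic factors hidden by $\widetilde{O}$; the specific values of d, alpha, and epsilon are arbitrary illustrations, not numbers from the paper.

```python
# Illustrative only: evaluate the NPMD sample-complexity rate
# eps**(-(d / alpha + 2)), ignoring constants and the polylog factors
# hidden inside the O-tilde notation.

def npmd_rate(eps: float, d: int, alpha: float) -> float:
    """Order of samples to reach an eps-optimal policy when the state
    space sits on a d-dimensional manifold with smoothness alpha."""
    return eps ** (-(d / alpha + 2))

eps = 0.1
for d, alpha in [(2, 1.0), (5, 1.0), (5, 0.5)]:
    print(f"d={d}, alpha={alpha}: ~{npmd_rate(eps, d, alpha):.2e} samples")

# Lower intrinsic dimension d and smoother environments (larger alpha)
# both shrink the exponent, so the rate is governed by the manifold
# dimension rather than the ambient state dimension.
```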
