no code implementations • 9 Jul 2023 • Qijia Jiang
We consider Bayesian linear regression with a sparsity-inducing prior and design efficient sampling algorithms leveraging posterior contraction properties.
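A minimal illustrative sketch of the general setting, not the paper's algorithm: unadjusted Langevin sampling of a Bayesian linear regression posterior under a sparsity-inducing (here, Laplace) prior. All parameter names and choices below are assumptions for illustration.

```python
# Sketch only: Langevin sampling for Bayesian linear regression with a
# Laplace (sparsity-inducing) prior. Not the paper's method.
import numpy as np

def sample_posterior(X, y, lam=1.0, sigma2=1.0, step=1e-3, n_steps=5000):
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_steps):
        # gradient of the negative log posterior: Gaussian likelihood term
        # plus a subgradient of the Laplace prior lam * ||beta||_1
        grad = X.T @ (X @ beta - y) / sigma2 + lam * np.sign(beta)
        beta = beta - step * grad + np.sqrt(2 * step) * np.random.randn(d)
    return beta
```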
no code implementations • 15 Sep 2022 • Qijia Jiang
We report on what appears to be an intriguing connection between variable integration time and partial velocity refreshment in Ideal Hamiltonian Monte Carlo samplers, both of which can be used to reduce the dissipative behavior of the dynamics.
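A minimal sketch of the two mechanisms in question, assuming a generic HMC step: integrate the Hamiltonian dynamics for a (possibly variable) time T, then apply a partial velocity refreshment v ← a·v + √(1−a²)·ξ. The leapfrog discretization and parameter names are illustrative assumptions, not the paper's analysis.

```python
# Sketch only: one HMC step with variable integration time T and
# partial velocity refreshment parameter a.
import numpy as np

def hmc_step(x, v, grad_U, step=0.01, T=1.0, a=0.7):
    # leapfrog integration of Hamiltonian dynamics for time T
    for _ in range(int(T / step)):
        v = v - 0.5 * step * grad_U(x)
        x = x + step * v
        v = v - 0.5 * step * grad_U(x)
    # partial refreshment: a=0 is a full refresh, a=1 keeps the velocity
    v = a * v + np.sqrt(1 - a**2) * np.random.randn(*x.shape)
    return x, v
```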
no code implementations • NeurIPS 2021 • Qijia Jiang
Motivated by the connection between sampling and optimization, we study a mirror descent analogue of Langevin dynamics and analyze three different discretization schemes, giving nonasymptotic convergence rates under functional inequalities such as the log-Sobolev inequality in the corresponding metric.
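A minimal sketch of one possible mirror-Langevin discretization (one of several schemes, not necessarily any of the paper's three). It assumes the entropic mirror map ∇φ(x) = log(x) on the positive orthant as an illustration.

```python
# Sketch only: one mirror-Langevin step under the entropic mirror map.
# The update runs in the dual space, with noise scaled by the square
# root of the mirror map's Hessian, here hess_phi(x) = diag(1/x).
import numpy as np

def mirror_langevin_step(x, grad_f, step=1e-3):
    y = np.log(x) - step * grad_f(x) \
        + np.sqrt(2 * step) * np.random.randn(*x.shape) / np.sqrt(x)
    return np.exp(y)  # map back via the inverse mirror map
```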
no code implementations • 8 Jun 2020 • Heinrich Jiang, Qijia Jiang, Aldo Pacchiano
Learning under one-sided feedback (i.e., where we only observe the labels of examples on which we predicted positively) is a fundamental problem in machine learning; applications include lending and recommendation systems.
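A minimal sketch of the one-sided feedback setting, not the paper's algorithm: the learner only sees the true label when it predicts positively (e.g., a loan is granted), so negative predictions yield no feedback. The logistic model, threshold, and learning rate are illustrative assumptions.

```python
# Sketch only: online learning where labels are revealed only on
# positive predictions.
import numpy as np

def run_one_sided(stream, d, lr=0.1, threshold=0.5):
    w = np.zeros(d)
    for x, y in stream:  # y is observable only on positive predictions
        p = 1.0 / (1.0 + np.exp(-w @ x))
        if p >= threshold:            # predict positive -> label observed
            w += lr * (y - p) * x     # logistic-regression update
        # on a negative prediction the label stays hidden: no update
    return w
```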
no code implementations • ICML 2020 • Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta
We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.
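A minimal sketch of the core idea, assuming a nonnegative-weight linear combination as the monotonic function (the paper's actual procedure is more involved): fit the black-box metric as a monotone function of surrogate losses, then train against that combination. The `nnls` solver and array shapes below are illustrative assumptions.

```python
# Sketch only: approximate a black-box metric by a monotone
# (nonnegative-weight linear) function of easy-to-optimize surrogates.
import numpy as np
from scipy.optimize import nnls

def fit_monotone_combo(surrogates, metric_vals):
    # surrogates: (n_configs, k) surrogate values per model configuration
    # metric_vals: (n_configs,) black-box metric evaluations
    # nonnegative least squares => coordinatewise-monotone combination
    weights, _ = nnls(surrogates, metric_vals)
    return weights  # then train on sum_j weights[j] * surrogate_j
```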
no code implementations • NeurIPS 2019 • Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
Specifically, we consider optimization algorithms interacting with a highly parallel gradient oracle, that is, one that can answer $\mathrm{poly}(d)$ gradient queries in parallel.
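A minimal sketch of the oracle model, not any algorithm from the paper: each round the algorithm submits a batch of query points and receives all gradients simultaneously, and complexity is measured in rounds rather than total queries. The batched line-search heuristic below is purely illustrative.

```python
# Sketch only: the highly parallel gradient oracle model.
import numpy as np

def parallel_oracle(grad_f, points):
    # one round: answers every query in the batch simultaneously
    return [grad_f(x) for x in points]

def batched_descent(grad_f, x0, step=0.1, rounds=50, batch=8):
    x = x0
    for _ in range(rounds):
        (g,) = parallel_oracle(grad_f, [x])          # round 1: gradient at x
        candidates = [x - step * (2 ** -i) * g for i in range(batch)]
        grads = parallel_oracle(grad_f, candidates)  # round 2: parallel line search
        # keep the candidate with the smallest gradient norm (a heuristic)
        x = min(zip(candidates, grads),
                key=lambda cg: np.linalg.norm(cg[1]))[0]
    return x
```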
1 code implementation • ICLR 2019 • Yu Bai, Qijia Jiang, Ju Sun
This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem.
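A minimal sketch of the generic dictionary learning problem min over A, X of ‖Y − AX‖²_F + λ‖X‖₁, solved by alternating an ISTA sparse-coding step with a least-squares dictionary update. This standard scheme is illustrative; it is not the paper's algorithm, and all parameters are assumptions.

```python
# Sketch only: dictionary learning by alternating minimization.
import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def dict_learn(Y, k, lam=0.1, iters=50, ista_steps=20):
    d, n = Y.shape
    A = np.random.randn(d, k)
    A /= np.linalg.norm(A, axis=0)      # unit-norm dictionary atoms
    X = np.zeros((k, n))
    for _ in range(iters):
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of A^T A
        for _ in range(ista_steps):     # sparse coding via ISTA
            X = soft_threshold(X - A.T @ (A @ X - Y) / L, lam / L)
        A = Y @ np.linalg.pinv(X)       # dictionary update (least squares)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X
```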