Search Results for author: Ruqi Zhang

Found 22 papers, 16 papers with code

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World

1 code implementation • 29 Mar 2024 • Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick

Sparse training has emerged as a promising method for resource-efficient deep neural networks (DNNs) in real-world applications.

Gradient-based Discrete Sampling with Automatic Cyclical Scheduling

1 code implementation • 27 Feb 2024 • Patrick Pynadath, Riddhiman Bhattacharya, Arun Hariharan, Ruqi Zhang

Discrete distributions, particularly in high-dimensional deep models, are often highly multimodal due to inherent discontinuities.

Scheduling
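
As background for how a cyclical schedule helps a gradient-based sampler hop between modes, here is a minimal sketch of a cosine cyclical step-size schedule in the style of cyclical SG-MCMC; the function name and constants are illustrative, not the paper's exact schedule.

```python
import math

def cyclical_step_size(k, total_steps, num_cycles, base_step):
    """Cosine cyclical schedule: the step size restarts at `base_step`
    at the start of each cycle and decays toward 0 within the cycle.
    Large steps encourage jumps between modes; small steps refine
    samples within the current mode."""
    steps_per_cycle = math.ceil(total_steps / num_cycles)
    pos = (k % steps_per_cycle) / steps_per_cycle  # position in cycle, in [0, 1)
    return 0.5 * base_step * (math.cos(math.pi * pos) + 1.0)

# Example: 1000 total steps split into 4 cycles, base step size 0.5.
schedule = [cyclical_step_size(k, 1000, 4, 0.5) for k in range(1000)]
```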

Training Bayesian Neural Networks with Sparse Subspace Variational Inference

1 code implementation • 16 Feb 2024 • Junbo Li, Zichen Miao, Qiang Qiu, Ruqi Zhang

Bayesian neural networks (BNNs) offer uncertainty quantification but come with the downside of substantially increased training and inference costs.

Uncertainty Quantification • Variational Inference

Enhancing Low-Precision Sampling via Stochastic Gradient Hamiltonian Monte Carlo

no code implementations • 25 Oct 2023 • Ziyi Wang, Yujie Chen, Qifan Song, Ruqi Zhang

This paper investigates low-precision sampling via Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) with low-precision and full-precision gradient accumulators for both strongly log-concave and non-log-concave distributions.

Quantization • Uncertainty Quantification
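
As a sketch of the difference between the two accumulator types studied here: a full-precision accumulator keeps the running state in float and only exposes a quantized view, while a low-precision accumulator requantizes the state after every update, so small steps can be rounded away. The quantizer, bit-width, and step sizes below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def quantize(x, bits=8, scale=0.1):
    """Nearest-rounding fixed-point quantizer (illustrative)."""
    levels = 2 ** (bits - 1)
    return np.clip(np.round(x / scale), -levels, levels - 1) * scale

grad = 0.001 * np.ones(10)        # a persistently small stochastic gradient

# Full-precision accumulator: the buffer itself stays in float.
buf_full = np.zeros(10)
for _ in range(1000):
    buf_full += 0.1 * grad
theta = quantize(buf_full)        # quantized view: reaches 0.1

# Low-precision accumulator: the buffer is requantized every step,
# so each 1e-4 update is rounded back to the same grid point.
buf_low = np.zeros(10)
for _ in range(1000):
    buf_low = quantize(buf_low + 0.1 * grad)   # stays at 0
```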

Entropy-MCMC: Sampling from Flat Basins with Ease

1 code implementation • 9 Oct 2023 • Bolian Li, Ruqi Zhang

Bayesian deep learning counts on the quality of posterior distribution estimation.

Out-of-Distribution Detection

Rethinking Data Distillation: Do Not Overlook Calibration

1 code implementation • ICCV 2023 • Dongyao Zhu, Bowen Lei, Jie Zhang, Yanbo Fang, Ruqi Zhang, Yiqun Xie, Dongkuan Xu

Neural networks trained on distilled data often produce over-confident output and require correction by calibration methods.
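
A standard post-hoc fix, though not necessarily the one this paper uses, is temperature scaling: divide the logits by a scalar T fit on validation data, which softens over-confident predictions without changing the predicted class. A minimal sketch:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature minimizing validation NLL (simple grid search)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# T > 1 softens over-confident outputs; the argmax class is unchanged.
```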

DP-Fast MH: Private, Fast, and Accurate Metropolis-Hastings for Large-Scale Bayesian Inference

1 code implementation • 10 Mar 2023 • Wanrong Zhang, Ruqi Zhang

In this paper, we study Metropolis-Hastings (MH), one of the most fundamental MCMC methods, for large-scale Bayesian inference under differential privacy.

Bayesian Inference • Medical Diagnosis • +1

Long-tailed Classification from a Bayesian-decision-theory Perspective

no code implementations • 10 Mar 2023 • Bolian Li, Ruqi Zhang

Long-tailed classification poses a challenge due to its heavy imbalance in class probabilities and tail-sensitivity risks with asymmetric misprediction costs.

Classification
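
The decision-theory view makes asymmetric costs concrete: instead of predicting the most probable class, predict the class that minimizes expected misprediction cost. A minimal sketch with a made-up cost matrix that penalizes missing the rare tail class:

```python
import numpy as np

# C[i, j] = cost of predicting class i when the true class is j.
C = np.array([[0.0, 10.0],    # missing the tail class (j = 1) is expensive
              [1.0,  0.0]])

def bayes_decision(probs, C):
    """Predict argmin_i of the expected cost sum_j C[i, j] * p(j | x)."""
    return np.argmin(C @ probs)

p = np.array([0.7, 0.3])      # posterior over classes given x
print(np.argmax(p))           # plain argmax predicts the head class, 0
print(bayes_decision(p, C))   # the cost-sensitive rule predicts 1
```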

Efficient Informed Proposals for Discrete Distributions via Newton's Series Approximation

no code implementations • 27 Feb 2023 • Yue Xiang, Dongyao Zhu, Bowen Lei, Dongkuan Xu, Ruqi Zhang

Gradients have been exploited in proposal distributions to accelerate the convergence of Markov chain Monte Carlo algorithms on discrete distributions.

Efficient Exploration • Extractive Text Summarization • +2

Calibrating the Rigged Lottery: Making All Tickets Reliable

1 code implementation • 18 Feb 2023 • Bowen Lei, Ruqi Zhang, Dongkuan Xu, Bani Mallick

Previous research has shown that deep neural networks tend to be over-confident, and we find that sparse training exacerbates this problem.

Decision Making
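
Over-confidence is usually quantified with the expected calibration error (ECE); independent of this paper's specific remedy, here is a minimal sketch of the standard binned estimator:

```python
import numpy as np

def ece(confidences, correct, n_bins=15):
    """Expected calibration error: the gap between mean confidence and
    accuracy in each equal-width confidence bin, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            err += (mask.sum() / len(confidences)) * gap
    return err

conf = np.array([0.9, 0.8, 0.95, 0.6])     # max softmax probabilities
hit = np.array([1, 0, 1, 1], dtype=float)  # 1 where the prediction was right
print(ece(conf, hit))
```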

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction

1 code implementation • 9 Jan 2023 • Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani K. Mallick

To accelerate and stabilize the convergence of sparse training, we analyze the gradient changes and develop an adaptive gradient correction method.
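
The snippet does not spell out the correction rule itself, so the following is a purely illustrative sketch, not the paper's method: track an exponential moving average of past gradients and clip large deviations from it, damping the abrupt gradient changes that destabilize sparse training.

```python
import numpy as np

def corrected_gradient(grad, ema, beta=0.9, clip=3.0):
    """Illustrative correction: shrink gradient components that deviate
    wildly from an exponential moving average of past gradients."""
    ema = beta * ema + (1.0 - beta) * grad
    deviation = np.clip(grad - ema, -clip, clip)   # limit abrupt changes
    return ema + deviation, ema

rng = np.random.default_rng(0)
ema = np.zeros(5)
for _ in range(100):
    g = 5.0 * rng.standard_normal(5)               # noisy stochastic gradient
    g_corrected, ema = corrected_gradient(g, ema)
```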

Sampling in Constrained Domains with Orthogonal-Space Variational Gradient Descent

1 code implementation • 12 Oct 2022 • Ruqi Zhang, Qiang Liu, Xin T. Tong

Sampling methods, as important inference and learning techniques, are typically designed for unconstrained domains.

Fairness

Low-Precision Stochastic Gradient Langevin Dynamics

1 code implementation • 20 Jun 2022 • Ruqi Zhang, Andrew Gordon Wilson, Christopher De Sa

While low-precision optimization has been widely used to accelerate deep learning, low-precision sampling remains largely unexplored.

Quantization
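
SGLD updates θ ← θ − (ε/2)∇U(θ) + N(0, εI); a low-precision variant keeps θ on a quantized grid. Below is a minimal sketch on a toy Gaussian target with a stochastic-rounding quantizer; the grid spacing and target are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, scale=0.05):
    """Round to the grid, up or down at random in proportion to the
    distance, so that E[Q(x)] = x (unbiased, unlike nearest rounding)."""
    y = x / scale
    lo = np.floor(y)
    return (lo + (rng.random(x.shape) < (y - lo))) * scale

def grad_U(theta):
    return theta          # toy target N(0, I): U(theta) = ||theta||^2 / 2

theta, eps = np.zeros(3), 0.01
for _ in range(1000):
    step = -0.5 * eps * grad_U(theta) + np.sqrt(eps) * rng.standard_normal(3)
    theta = stochastic_round(theta + step)   # state stays on the grid
```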

A Langevin-like Sampler for Discrete Distributions

1 code implementation • 20 Jun 2022 • Ruqi Zhang, Xingchao Liu, Qiang Liu

We propose discrete Langevin proposal (DLP), a simple and scalable gradient-based proposal for sampling complex high-dimensional discrete distributions.

Efficient Exploration • Text Generation
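
The proposal factorizes over coordinates, each drawn from a categorical whose logits combine a first-order gradient term with a quadratic locality penalty. The sketch below, for binary variables, is reconstructed from that description, so treat the details as approximate; in practice the proposal can also be followed by a Metropolis-Hastings accept/reject test.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlp_step(x, grad, alpha=0.1):
    """One discrete Langevin proposal for x in {0, 1}^d: coordinate i
    flips independently with probability
        sigmoid(0.5 * grad_i * d_i - d_i**2 / (2 * alpha)),
    where d_i = 1 - 2 * x_i is the change to x_i if it flips."""
    d = 1.0 - 2.0 * x
    logit = 0.5 * grad * d - d ** 2 / (2.0 * alpha)
    p_flip = 1.0 / (1.0 + np.exp(-logit))    # staying put has logit 0
    flips = rng.random(x.shape) < p_flip
    return np.where(flips, 1.0 - x, x)

x = rng.integers(0, 2, size=8).astype(float)
grad = rng.standard_normal(8)                # gradient of log-prob at x
x_new = dlp_step(x, grad)
```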

Meta-Learning Divergences of Variational Inference

no code implementations • 6 Jul 2020 • Ruqi Zhang, Yingzhen Li, Christopher De Sa, Sam Devlin, Cheng Zhang

Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and broad applicability.

Bayesian Inference • Computational Efficiency • +4
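
VI fits q by maximizing the evidence lower bound, ELBO(q) = E_q[log p(x, z) − log q(z)]. A minimal Monte Carlo estimate on a toy Gaussian model (the model and variational family here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(log_joint, sample_q, log_q, n=10_000):
    """Monte Carlo ELBO: average log p(x, z) - log q(z) over z ~ q."""
    z = sample_q(n)
    return np.mean(log_joint(z) - log_q(z))

# Toy model: p(z) = N(0, 1), p(x | z) = N(z, 1), observed x = 2.0.
x_obs = 2.0
log_joint = lambda z: -0.5 * z ** 2 - 0.5 * (x_obs - z) ** 2      # up to constants
mu, sigma = 1.0, 0.7                        # variational q(z) = N(mu, sigma^2)
sample_q = lambda n: mu + sigma * rng.standard_normal(n)
log_q = lambda z: -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)  # up to constants
print(elbo_estimate(log_joint, sample_q, log_q))
```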

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

1 code implementation • NeurIPS 2020 • Ruqi Zhang, A. Feder Cooper, Christopher De Sa

Metropolis-Hastings (MH) is a commonly-used MCMC algorithm, but it can be intractable on large datasets due to requiring computations over the whole dataset.

Regression
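
The whole-dataset cost enters through the acceptance test: with p(θ | D) ∝ p(θ) ∏_n p(x_n | θ), every step sums log-likelihoods over all N points. A minimal sketch of that exact full-data step, the O(N)-per-iteration cost that minibatch MH methods aim to remove:

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(theta, log_prior, log_lik, data, step=0.1):
    """One Metropolis-Hastings step with a symmetric Gaussian proposal.
    The accept test touches every point in `data`."""
    prop = theta + step * rng.standard_normal()
    log_ratio = (log_prior(prop) + log_lik(prop, data).sum()
                 - log_prior(theta) - log_lik(theta, data).sum())
    return prop if np.log(rng.random()) < log_ratio else theta

data = rng.normal(1.5, 1.0, size=100_000)     # N = 100k observations
log_prior = lambda t: -0.5 * t ** 2
log_lik = lambda t, x: -0.5 * (x - t) ** 2    # Gaussian likelihood, unit variance
theta = 0.0
for _ in range(100):
    theta = mh_step(theta, log_prior, log_lik, data)
```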

AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC

1 code implementation • 29 Feb 2020 • Ruqi Zhang, A. Feder Cooper, Christopher De Sa

Replacing full-dataset gradients with minibatch estimates improves performance, but introduces bias that can cause SGHMC to converge to the wrong distribution.

Poisson-Minibatching for Gibbs Sampling with Convergence Rate Guarantees

1 code implementation • NeurIPS 2019 • Ruqi Zhang, Christopher De Sa

Gibbs sampling is a Markov chain Monte Carlo method that is often used for learning and inference on graphical models.
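
Gibbs sampling resamples one variable at a time from its conditional given all the others. A minimal sketch for a small Ising model with illustrative random couplings:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(x, J):
    """One Gibbs sweep over p(x) ∝ exp(sum_{i<j} J_ij x_i x_j), x_i in {-1, +1}.
    Each conditional is p(x_i = +1 | rest) = sigmoid(2 * field_i)."""
    for i in range(len(x)):
        field = J[i] @ x - J[i, i] * x[i]     # sum over j != i of J_ij x_j
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x

d = 10
J = rng.normal(0.0, 0.2, (d, d))
J = (J + J.T) / 2.0                           # symmetric couplings
x = rng.choice([-1.0, 1.0], size=d)
for _ in range(100):
    x = gibbs_sweep(x, J)
```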

Meta-Learning for Variational Inference

no code implementations • AABI Symposium 2019 • Ruqi Zhang, Yingzhen Li, Chris De Sa, Sam Devlin, Cheng Zhang

Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and general applicability.

Bayesian Inference • Computational Efficiency • +4
