Search Results for author: Zhaoqiang Liu

Found 19 papers, 7 papers with code

Accelerating Diffusion Sampling with Optimized Time Steps

no code implementations27 Feb 2024 Shuchen Xue, Zhaoqiang Liu, Fei Chen, Shifeng Zhang, Tianyang Hu, Enze Xie, Zhenguo Li

While this is a significant development, most sampling methods still employ uniform time steps, which is not optimal when using a small number of steps.

Image Generation
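
A minimal sketch of the two kinds of schedule, assuming an EDM-style noise range; the non-uniform rho-schedule of Karras et al. stands in here for the optimized steps the paper actually computes:

```python
import numpy as np

def uniform_steps(t_min=0.002, t_max=80.0, n=10):
    # Uniform spacing in t: the baseline most samplers use.
    return np.linspace(t_max, t_min, n)

def polynomial_steps(t_min=0.002, t_max=80.0, n=10, rho=7.0):
    # Non-uniform spacing that clusters steps near t_min, where the
    # sampling trajectory changes most rapidly.
    i = np.arange(n) / (n - 1)
    return (t_max**(1/rho) + i * (t_min**(1/rho) - t_max**(1/rho)))**rho

print(uniform_steps(n=5))     # ~ [80, 60, 40, 20, 0.002]
print(polynomial_steps(n=5))  # heavily skewed toward small t
```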

The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling

no code implementations23 Feb 2024 Jiajun Ma, Shuchen Xue, Tianyang Hu, Wenjia Wang, Zhaoqiang Liu, Zhenguo Li, Zhi-Ming Ma, Kenji Kawaguchi

Surprisingly, the improvement persists when we increase the number of sampling steps and can even surpass the best result from EDM-2 (1.58) with only 39 NFEs (1.57).

Image Generation

On the Expressive Power of a Variant of the Looped Transformer

no code implementations21 Feb 2024 Yihang Gao, Chuanyang Zheng, Enze Xie, Han Shi, Tianyang Hu, Yu Li, Michael K. Ng, Zhenguo Li, Zhaoqiang Liu

Previous works attempt to explain this from the perspectives of expressive power and capability, arguing that standard transformers are capable of performing some algorithms.

Solving Quadratic Systems with Full-Rank Matrices Using Sparse or Generative Priors

no code implementations16 Sep 2023 Junren Chen, Shuai Huang, Michael K. Ng, Zhaoqiang Liu

The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\ i=1,\ldots, m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in applications such as unassigned distance geometry and sub-wavelength imaging.
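
A small simulation of this measurement model (sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200                        # signal dimension, number of measurements

x = rng.standard_normal(n)            # ground-truth signal (known only in simulation)
A = rng.standard_normal((m, n, n))    # i.i.d. Gaussian A_i are full-rank almost surely
y = np.einsum('i,mij,j->m', x, A, x)  # y_i = x^T A_i x

# The model has a global sign ambiguity: x and -x are indistinguishable.
assert np.allclose(y, np.einsum('i,mij,j->m', -x, A, -x))
```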

DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning

1 code implementation ICCV 2023 Enze Xie, Lewei Yao, Han Shi, Zhili Liu, Daquan Zhou, Zhaoqiang Liu, Jiawei Li, Zhenguo Li

This paper proposes DiffFit, a parameter-efficient strategy to fine-tune large pre-trained diffusion models, enabling fast adaptation to new domains.

Efficient Diffusion Personalization
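
A rough PyTorch sketch of the parameter-selection idea; the name-matching rule below is a simplifying assumption, and the actual DiffFit recipe additionally inserts small learnable scale factors into selected blocks:

```python
import torch.nn as nn

def mark_difffit_trainable(model: nn.Module):
    # Freeze everything except bias and normalization parameters.
    for name, p in model.named_parameters():
        p.requires_grad = ('bias' in name) or ('norm' in name)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```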

Misspecified Phase Retrieval with Generative Priors

1 code implementation11 Oct 2022 Zhaoqiang Liu, Xinshao Wang, Jiulong Liu

In this paper, we study phase retrieval under model misspecification and generative priors.

Retrieval

Projected Gradient Descent Algorithms for Solving Nonlinear Inverse Problems with Generative Priors

no code implementations21 Sep 2022 Zhaoqiang Liu, Jun Han

We show that when there is no representation error and the sensing vectors are Gaussian, roughly $O(k \log L)$ samples suffice to ensure that a PGD algorithm converges linearly to a point achieving the optimal statistical rate using arbitrary initialization.
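
A generic sketch of such a PGD iteration for the linear case; `project` (a black-box projection onto the range of the generative model, in practice computed by an inner optimization over latent codes) is assumed to be supplied by the caller:

```python
import numpy as np

def pgd(A, y, project, x0, step, n_iter=100):
    # Projected gradient descent for y ~ A x with x constrained to the
    # range of a generative model G.
    x = x0
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of 0.5 * ||A x - y||^2
        x = project(x - step * grad)      # project back onto range(G)
    return x
```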

Non-Iterative Recovery from Nonlinear Observations using Generative Models

no code implementations CVPR 2022 Jiulong Liu, Zhaoqiang Liu

In this paper, we aim to estimate the direction of an underlying signal from its nonlinear observations following the semi-parametric single index model (SIM).
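
To make the setting concrete, here is a simulation of the classic one-step correlation estimator for SIMs with Gaussian designs; the paper's non-iterative estimator additionally handles generative priors, and the sign link below is just one choice of unknown nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 5000
x = rng.standard_normal(n); x /= np.linalg.norm(x)  # unknown unit-norm direction

A = rng.standard_normal((m, n))                     # Gaussian sensing vectors
y = np.sign(A @ x)                                  # one choice of unknown link f

x_hat = A.T @ y / m                                 # one-step correlation estimator
x_hat /= np.linalg.norm(x_hat)
print("cosine similarity:", x_hat @ x)              # close to 1 for large m
```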

Generative Principal Component Analysis

1 code implementation ICLR 2022 Zhaoqiang Liu, Jiulong Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

We perform experiments on various image datasets for spiked matrix and phase retrieval models, and illustrate the performance gains of our method over the classic power method and the truncated power method devised for sparse principal component analysis (the classic baseline is sketched below).

Retrieval
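
For reference, the classic power method baseline, run here on a spiked Wigner matrix; the paper's generative variant replaces the plain normalization step with a projection onto the range of a generative model:

```python
import numpy as np

def power_method(M, n_iter=200, seed=0):
    # Classic power method: v <- M v / ||M v|| approximates the
    # leading eigenvector of M.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    for _ in range(n_iter):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

# Spiked (Wigner) matrix model: M = lam * x x^T + noise.
rng = np.random.default_rng(2)
n, lam = 200, 5.0
x = rng.standard_normal(n); x /= np.linalg.norm(x)
W = rng.standard_normal((n, n)); W = (W + W.T) / np.sqrt(2 * n)
v = power_method(lam * np.outer(x, x) + W)
print("overlap:", abs(v @ x))          # close to 1 when lam is large
```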

Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors

no code implementations8 Aug 2021 Zhaoqiang Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

In 1-bit compressive sensing, each measurement is quantized to a single bit, namely the sign of a linear function of an unknown vector, and the goal is to accurately recover the vector.

Compressive Sensing
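
A simulation of the measurement model (magnitudes are lost under sign quantization, so only the direction of the vector is recoverable); the paper's robust setting also allows corruptions such as sign flips, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 128
x = rng.standard_normal(n); x /= np.linalg.norm(x)    # only the direction matters

c = rng.standard_normal(n)                            # Gaussian generator of the circulant
Cx = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real  # full circulant product C @ x
rows = rng.choice(n, size=m, replace=False)           # "partial": random subset of rows
y = np.sign(Cx[rows])                                 # 1-bit quantization per measurement
```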

Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors

1 code implementation NeurIPS 2021 Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett

We also adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional, matching an information-theoretic lower bound.

Compressive Sensing, Retrieval
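
A simulation of the sparse setting with $O(s \log n)$ magnitude-only measurements; the constant 4 below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n, s = 1000, 10
m = int(4 * s * np.log(n))                # m = O(s log n)

x = np.zeros(n)                           # s-sparse ground truth
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n))
y = np.abs(A @ x)                         # phaseless (magnitude-only) measurements
# Any recovery method must cope with the global sign ambiguity: |A x| = |A (-x)|.
```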

The Generalized Lasso with Nonlinear Observations and Generative Priors

no code implementations NeurIPS 2020 Zhaoqiang Liu, Jonathan Scarlett

We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models.
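
A minimal ISTA solver for the Lasso objective, standing in for the generalized Lasso of the paper (which constrains to the range of a generative model rather than an $\ell_1$ ball); the point is that `y` may come from a nonlinear model such as `y = np.sign(A @ x)`, yet the same quadratic fit still estimates the direction of `x`:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    # Iterative soft-thresholding for the Lasso objective
    #   0.5 * ||y - A x||_2^2 + lam * ||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / ||A||_op^2 ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)       # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```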

Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors

1 code implementation ICML 2020 Zhaoqiang Liu, Selwyn Gomes, Avtansh Tiwari, Jonathan Scarlett

The goal of standard 1-bit compressive sensing is to accurately recover an unknown sparse vector from binary-valued measurements, each indicating the sign of a linear function of the vector.

Compressive Sensing

Sample Complexity Lower Bounds for Compressive Sensing with Generative Models

no code implementations NeurIPS Workshop Deep_Invers 2019 Zhaoqiang Liu, Jonathan Scarlett

The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.

Compressive Sensing

Information-Theoretic Lower Bounds for Compressive Sensing with Generative Models

no code implementations28 Aug 2019 Zhaoqiang Liu, Jonathan Scarlett

It has recently been shown that for compressive sensing, significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably chosen generative model.

Compressive Sensing

Model Selection for Nonnegative Matrix Factorization by Support Union Recovery

no code implementations23 Oct 2018 Zhaoqiang Liu

Nonnegative matrix factorization (NMF) has been widely used in machine learning and signal processing because of its non-subtractive, parts-based property, which enhances interpretability (a minimal sketch of the factorization follows below).

Model Selection
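
As referenced above, a sketch of the factorization itself via the standard Lee-Seung multiplicative updates for the Frobenius objective; the paper's actual contribution, selecting the rank, is not shown:

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0, eps=1e-10):
    # Multiplicative updates for V ~ W H with W, H >= 0; nonnegativity
    # yields the non-subtractive, parts-based decomposition.
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```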

The Informativeness of k-Means for Learning Mixture Models

no code implementations30 Mar 2017 Zhaoqiang Liu, Vincent Y. F. Tan

These results provide intuition for the informativeness of $k$-means (with and without dimensionality reduction) as an algorithm for learning mixture models.

Clustering, Dimensionality Reduction +1
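
A toy version of the setting, assuming a two-component spherical Gaussian mixture and plain Lloyd iterations:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n_per = 50, 500
mu = 4.0 * rng.standard_normal((2, d))      # two well-separated component means
X = np.vstack([mu[0] + rng.standard_normal((n_per, d)),
               mu[1] + rng.standard_normal((n_per, d))])

centers = X[rng.choice(len(X), size=2, replace=False)]   # random init from the data
for _ in range(20):                                      # Lloyd iterations, k = 2
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(2)])

true = np.repeat([0, 1], n_per)
print("misclustering rate:", min((labels != true).mean(), (labels != 1 - true).mean()))
```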

Rank-One NMF-Based Initialization for NMF and Relative Error Bounds under a Geometric Assumption

1 code implementation27 Dec 2016 Zhaoqiang Liu, Vincent Y. F. Tan

We propose a geometric assumption on nonnegative data matrices such that under this assumption, we are able to provide upper bounds (both deterministic and probabilistic) on the relative error of nonnegative matrix factorization (NMF).

Clustering
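
The rank-one building block is exact and cheap, since for a nonnegative matrix the leading singular vectors can be taken entrywise nonnegative (Perron-Frobenius); the paper applies this to sub-matrices as an initialization, which is not reproduced here:

```python
import numpy as np

def rank_one_nmf(V):
    # For an entrywise-nonnegative V, the best rank-one approximation
    # (from the SVD) is itself a valid rank-one NMF.
    U, S, Vt = np.linalg.svd(V, full_matrices=False)
    return S[0] * np.abs(U[:, 0]), np.abs(Vt[0])    # w, h with V ~ w h^T

rng = np.random.default_rng(6)
V = rng.random((30, 20))
w, h = rank_one_nmf(V)
print("relative error:", np.linalg.norm(V - np.outer(w, h)) / np.linalg.norm(V))
```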
