no code implementations • 29 Nov 2023 • Bingzhi Zhang, Junyu Liu, Xiao-Chuan Wu, Liang Jiang, Quntao Zhuang
By mapping the Hessian of the training dynamics to a Hamiltonian in imaginary time, we reveal that the phase transition is second-order with exponent $\nu=1$, where scale invariance and a closing gap are observed at the critical point.
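For reference, $\nu$ here plays the role of the standard correlation-length exponent of a second-order transition; a minimal statement of the scaling forms (standard critical-scaling definitions, not formulas quoted from the paper) is:

```latex
% Correlation length diverges at the critical point g_c:
\xi \sim |g - g_c|^{-\nu}, \qquad \nu = 1,
% while the spectral gap of the effective Hamiltonian closes:
\Delta(g) \to 0 \quad \text{as } g \to g_c .
```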
no code implementations • 23 Sep 2023 • Senrui Chen, Changhun Oh, Sisi Zhou, Hsin-Yuan Huang, Liang Jiang
In this work, we consider learning algorithms without entanglement to be those that only utilize states, measurements, and operations that are separable between the main system of interest and an ancillary system.
no code implementations • 12 Sep 2023 • Junyu Liu, Liang Jiang
A quantum version of data centers might be significant in the quantum era.
no code implementations • 25 Jul 2023 • Yunfei Wang, Yuri Alexeev, Liang Jiang, Frederic T. Chong, Junyu Liu
Quantum random access memory (QRAM), a fundamental component of many essential quantum algorithms for tasks such as linear algebra, data search, and machine learning, is often proposed to offer $\mathcal{O}(\log N)$ circuit depth for $\mathcal{O}(N)$ data size, given $N$ qubits.
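The $\mathcal{O}(\log N)$-depth, $\mathcal{O}(N)$-size trade-off can be made concrete with a toy resource count for a bucket-brigade-style QRAM over a binary tree; the function below is an illustrative classical sketch (the function name and the exact counting convention are assumptions, not from the paper):

```python
import math

def qram_resources(n_data: int):
    """Toy resource count for a bucket-brigade-style QRAM over a binary
    tree with n_data leaves (n_data must be a power of 2).
    Returns (address_qubits, tree_nodes, query_depth_levels)."""
    address_qubits = int(math.log2(n_data))  # log N address bits
    tree_nodes = 2 * n_data - 1              # O(N) routing nodes in the tree
    query_depth = address_qubits             # one routing layer per tree level
    return address_qubits, tree_nodes, query_depth

print(qram_resources(1024))  # (10, 2047, 10)
```

A query touches one routing layer per address bit, so depth grows logarithmically even though the tree itself contains $\mathcal{O}(N)$ nodes.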
no code implementations • 17 Apr 2023 • Liang Jiang, Liyao Li, Ke Miao, Yichong Zhang
On the other hand, RAs can degrade estimation efficiency due to their estimation errors, which are not asymptotically negligible when the number of regressors is of the same order as the sample size.
no code implementations • 6 Mar 2023 • Junyu Liu, Minzhao Liu, Jin-Peng Liu, Ziyu Ye, Yunfei Wang, Yuri Alexeev, Jens Eisert, Liang Jiang
Large machine learning models are revolutionary technologies of artificial intelligence whose bottlenecks include the huge computational expense, power, and time consumed in both the pre-training and fine-tuning processes.
no code implementations • 9 Feb 2023 • Yuehao Bai, Liang Jiang, Joseph P. Romano, Azeem M. Shaikh, Yichong Zhang
This paper studies inference on the average treatment effect in experiments in which treatment status is determined according to "matched pairs" and it is additionally desired to adjust for observed, baseline covariates to gain further precision.
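As a baseline for the setting described, the unadjusted matched-pairs estimator of the average treatment effect simply averages within-pair outcome differences; the sketch below is a generic illustration (function name and interface are assumptions, and it omits the paper's covariate adjustment):

```python
def matched_pairs_ate(pairs):
    """Unadjusted ATE estimate under a matched-pairs design.
    pairs: list of (treated_outcome, control_outcome) tuples, one per pair.
    The estimator is the mean of within-pair differences."""
    diffs = [yt - yc for yt, yc in pairs]
    return sum(diffs) / len(diffs)

print(matched_pairs_ate([(5.0, 3.0), (4.0, 4.5), (6.0, 5.0)]))  # 0.8333...
```

The paper's contribution concerns how regression adjustment on baseline covariates can sharpen this baseline without sacrificing validity.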
no code implementations • 13 Oct 2022 • Junyu Liu, Frederik Wilde, Antonio Anna Mele, Liang Jiang, Jens Eisert
Saddle points constitute a crucial challenge for first-order gradient descent algorithms.
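The difficulty is easy to see on the textbook saddle $f(x, y) = x^2 - y^2$: a trajectory initialized on the stable manifold never escapes. This toy example (not taken from the paper) shows plain first-order gradient descent converging to the saddle point:

```python
def grad_descent(x, y, lr=0.1, steps=200):
    """Plain gradient descent on f(x, y) = x**2 - y**2,
    whose only critical point (0, 0) is a saddle."""
    for _ in range(steps):
        gx, gy = 2 * x, -2 * y          # gradient of f
        x, y = x - lr * gx, y - lr * gy
    return x, y

# Initialized exactly on the stable manifold (y = 0), first-order GD
# converges to the saddle instead of escaping along the -y**2 direction:
x, y = grad_descent(1.0, 0.0)
print(abs(x) < 1e-6, y == 0.0)  # True True
```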
no code implementations • 23 Sep 2022 • Liang Jiang, Zhenyu Huang, Jia Liu, Zujie Wen, Xi Peng
Such a process will inevitably introduce mismatched pairs (i.e., noisy correspondences) due to i) the unavailability of QA pairs in the target documents, and ii) the domain shift when applying the QA construction model to the target domain.
no code implementations • 28 Jul 2022 • Junyu Liu, Connor T. Hann, Liang Jiang
In this paper, we propose the Quantum Data Center (QDC), an architecture combining Quantum Random Access Memory (QRAM) and quantum networks.
no code implementations • 19 Jun 2022 • Junyu Liu, Zexi Lin, Liang Jiang
We discuss the difference between laziness and the \emph{barren plateau} phenomenon introduced by quantum physicists in \cite{mcclean2018barren} to describe the flatness of the loss-function landscape during gradient descent in quantum machine learning.
no code implementations • 19 May 2022 • Minzhao Liu, Junyu Liu, Yuri Alexeev, Liang Jiang
Random quantum circuits have been utilized in the contexts of quantum supremacy demonstrations, variational quantum algorithms for chemistry and machine learning, and black-hole information.
no code implementations • 30 Mar 2022 • Junyu Liu, Khadijeh Najafi, Kunal Sharma, Francesco Tacchino, Liang Jiang, Antonio Mezzacapo
We define wide quantum neural networks as parameterized quantum circuits in the limit of a large number of qubits and variational parameters.
no code implementations • 8 Mar 2022 • Ruijie Yan, Shuang Peng, Haitao Mi, Liang Jiang, Shihui Yang, Yuchi Zhang, Jiajun Li, Liangrui Peng, Yongliang Wang, Zujie Wen
Building robust and general dialogue models for spoken conversations is challenging due to the gap in distributions of spoken and written data.
no code implementations • 31 Jan 2022 • Liang Jiang, Oliver B. Linton, Haihan Tang, Yichong Zhang
We investigate how to improve efficiency using regression adjustments with covariates in covariate-adaptive randomizations (CARs) with imperfect subject compliance.
no code implementations • 8 Nov 2021 • Junyu Liu, Francesco Tacchino, Jennifer R. Glick, Liang Jiang, Antonio Mezzacapo
We analytically solve the dynamics in the frozen limit, or lazy training regime, where variational angles change slowly and a linear perturbation suffices.
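The lazy-training linearization referred to here can be written in the standard form (generic notation, not the paper's):

```latex
f_{\theta}(x) \;\approx\; f_{\theta_0}(x) \;+\; \nabla_\theta f_{\theta_0}(x)\cdot(\theta - \theta_0),
```

valid when the variational angles $\theta$ stay close to their initialization $\theta_0$ throughout training.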
no code implementations • 31 May 2021 • Liang Jiang, Peter C. B. Phillips, Yubo Tao, Yichong Zhang
We establish the consistency and limit distribution of the regression-adjusted QTE estimator and prove that the use of multiplier bootstrap inference is non-conservative under CARs.
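The multiplier bootstrap mentioned here reweights centered observations with i.i.d. random multipliers rather than resampling rows; the sketch below illustrates the generic idea for the standard error of a sample mean (function name and details are assumptions, not the paper's exact procedure for QTEs under CARs):

```python
import random

def multiplier_bootstrap_se(values, draws=2000, seed=0):
    """Generic multiplier bootstrap for the standard error of a sample mean:
    each replicate reweights centered observations by i.i.d. standard-normal
    multipliers instead of resampling the data."""
    rng = random.Random(seed)
    n = len(values)
    mean = sum(values) / n
    centered = [v - mean for v in values]
    reps = []
    for _ in range(draws):
        xi = [rng.gauss(0.0, 1.0) for _ in range(n)]
        reps.append(sum(w * c for w, c in zip(xi, centered)) / n)
    var = sum(r * r for r in reps) / draws
    return var ** 0.5

se = multiplier_bootstrap_se([float(v) for v in range(10)])
```

The paper's result is that inference of this multiplier type is non-conservative (correctly sized, not overly wide) for the regression-adjusted QTE estimator under CARs.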
no code implementations • 19 Feb 2021 • Changhun Oh, Youngrong Lim, Bill Fefferman, Liang Jiang
Sampling from the probability distributions of quantum circuits is a fundamentally and practically important task that can be used to demonstrate quantum supremacy with noisy intermediate-scale quantum devices.
no code implementations • 9 Oct 2020 • Jacob C. Curtis, Connor T. Hann, Salvatore S. Elder, Christopher S. Wang, Luigi Frunzio, Liang Jiang, Robert J. Schoelkopf
This detector functions by measuring a series of generalized parity operators which make up the bits in the binary decomposition of the photon number.
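Classically, the k-th generalized parity is just the k-th bit of the photon number's binary decomposition (k = 0 being ordinary parity), so the photon number can be recombined from the measured bits; the sketch below is an illustrative classical analogue, not the detector's actual measurement circuit:

```python
def generalized_parity_bit(n: int, k: int) -> int:
    """Classical stand-in for the k-th generalized parity measurement:
    returns the k-th bit of the binary decomposition of photon number n
    (k = 0 is ordinary photon-number parity)."""
    return (n >> k) & 1

def reconstruct_photon_number(bits):
    """Recombine measured parity bits (least-significant first)
    into the photon number."""
    return sum(b << k for k, b in enumerate(bits))

n = 13  # photon number, binary 1101
bits = [generalized_parity_bit(n, k) for k in range(4)]
print(bits, reconstruct_photon_number(bits))  # [1, 0, 1, 1] 13
```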
no code implementations • 25 May 2020 • Liang Jiang, Xiaobin Liu, Peter C. B. Phillips, Yichong Zhang
This paper examines methods of inference concerning quantile treatment effects (QTEs) in randomized experiments with matched-pairs designs (MPDs).
no code implementations • 2 Mar 2020 • Liang Jiang, Zujie Wen, Zhongping Liang, Yafang Wang, Gerard de Melo, Zhe Li, Liangzhuang Ma, Jiaxing Zhang, Xiaolong Li, Yuan Qi
The long-term teacher draws on snapshots from several epochs ago in order to provide steadfast guidance and to guarantee teacher-student differences, while the short-term one yields more up-to-date cues with the goal of enabling higher-quality updates.
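The snapshot-based long-/short-term teacher idea can be sketched as a pair of views over a bounded parameter history; the class below is a toy illustration (names, the lag value, and the update rule are assumptions, not the paper's exact scheme):

```python
from collections import deque

class DualTeachers:
    """Toy long-/short-term teachers kept as student-parameter snapshots.
    The short-term teacher is the most recent snapshot; the long-term
    teacher is the oldest retained one, several epochs behind."""

    def __init__(self, long_lag: int = 5):
        # Keep long_lag + 1 snapshots so the oldest is long_lag epochs old.
        self.history = deque(maxlen=long_lag + 1)

    def update(self, student_params: dict):
        """Record a snapshot of the student at the end of an epoch."""
        self.history.append(dict(student_params))

    def short_term(self) -> dict:
        return self.history[-1]   # most recent snapshot

    def long_term(self) -> dict:
        return self.history[0]    # snapshot from several epochs ago

teachers = DualTeachers(long_lag=5)
for epoch in range(7):
    teachers.update({"w": epoch})
print(teachers.long_term(), teachers.short_term())  # {'w': 1} {'w': 6}
```

Keeping the long-term teacher lagged guarantees it differs from the student, which is the property the snippet's bounded history enforces.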
no code implementations • 21 Dec 2018 • Chuan Qin, HengShu Zhu, Tong Xu, Chen Zhu, Liang Jiang, Enhong Chen, Hui Xiong
The widespread use of online recruitment services has led to an information explosion in the job market.
no code implementations • 8 Mar 2018 • Linli Xu, Liang Jiang, Chuan Qin, Zhe Wang, Dongfang Du
Generating poetry from images is much more challenging than generating it from text, since images contain very rich visual information that cannot be completely described by a few keywords, and a good poem should convey the image accurately.