no code implementations • 14 Dec 2023 • Haoyu Tang, Louis J. Durlofsky
Results are presented for a large set of test cases, in which five injection wells and five production wells are placed randomly throughout the model, with a random control variable (bottom-hole pressure) assigned to each well.
1 code implementation • 12 Dec 2023 • Haoyu Tang, Han Jiang, Mingzhu Xu, Yupeng Hu, Jihua Zhu, Liqiang Nie
Thereafter, we design two incremental instance learning strategies (constant- and variable-speed) for easy-to-hard model training, ensuring the reliability of these video pseudo-labels and further improving overall localization performance.
1 code implementation • 29 Aug 2023 • Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework.
no code implementations • 23 Mar 2023 • Haoyu Tang, Zhaoyi Liu, Chang Zeng, Xinfeng Li
To overcome the drawbacks of applying universal Transformer models to ASR on edge devices, we propose a solution that reuses blocks within the Transformer model, yielding a small-footprint ASR system that accommodates resource limitations without compromising recognition accuracy.
Ranked #11 on Speech Recognition on AISHELL-1
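The block reuse described above amounts to cross-layer parameter sharing: one Transformer block is applied repeatedly instead of stacking independently parameterized layers. Below is a minimal PyTorch sketch of that general idea, not the paper's actual architecture; all class and parameter names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedBlockEncoder(nn.Module):
    """Applies a single Transformer block repeatedly, so the
    parameter count is that of one layer rather than a full stack."""

    def __init__(self, d_model=256, nhead=4, num_passes=6):
        super().__init__()
        # One block whose weights are reused on every pass.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.num_passes = num_passes

    def forward(self, x):
        for _ in range(self.num_passes):
            x = self.block(x)  # same weights at every "layer"
        return x

enc = SharedBlockEncoder()
feats = torch.randn(2, 50, 256)  # (batch, frames, feature_dim)
out = enc(feats)
```

Compared with a conventional six-layer encoder, this sketch stores roughly one sixth of the encoder parameters, which is the property that makes such reuse attractive for on-device ASR.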
1 code implementation • CVPR 2023 • Qinghai Zheng, Jihua Zhu, Haoyu Tang
In this work, we focus on the challenging problem of Label Enhancement (LE), which aims to exactly recover label distributions from logical labels, and present a novel Label Information Bottleneck (LIB) method for LE.
no code implementations • 5 Feb 2022 • Haoyu Tang, Wennan Long
Reservoir simulations are computationally expensive in well control and well placement optimization.
no code implementations • 30 Oct 2021 • Haoyu Tang, Louis J. Durlofsky
In this work, we present an optimization framework in which these simulations are performed with low-fidelity (LF) models.
no code implementations • 19 Oct 2020 • Qinghai Zheng, Yu Zhang, Jihua Zhu, Zhongyu Li, Haoyu Tang, Shuangxun Ma
The view-specific information contained in different views is fully exploited by the rank-preserving decomposition, and the high-order correlations of multi-view data are also mined by the low-rank tensor constraint.
1 code implementation • 22 Sep 2020 • Haoyu Tang, Jihua Zhu, Meng Liu, Zan Gao, Zhiyong Cheng
Another contribution is an additional predictor that utilizes the internal frames during model training to improve localization accuracy.
no code implementations • 7 Apr 2020 • Qinghai Zheng, Jihua Zhu, Haoyu Tang, Xinyuan Liu, Zhongyu Li, Huimin Lu
Recently, label distribution learning (LDL) has drawn much attention in machine learning, where an LDL model is learned from labeled instances.