no code implementations • 28 Apr 2024 • Xue Cheng, Meng Wang, Ziyi Xu
The interactions between a large population of high-frequency traders (HFTs) and a large trader (LT) who executes a certain amount of assets at discrete time points are studied.
no code implementations • 13 Mar 2024 • Ziyi Xu, Xue Cheng
We investigate a market with a normal-speed informed trader (IT) who may employ a mixed strategy and multiple anticipatory high-frequency traders (HFTs) who are under different inventory pressures, in a three-period Kyle model.
no code implementations • 5 Sep 2023 • Ziyi Xu, Marvin Sach, Jan Pirklbauer, Tim Fingscheidt
It provides a reference-free perceptual loss that enables the use of real data during DNS training, maximizing PESQ scores.
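The idea of a reference-free perceptual loss can be illustrated with a minimal toy sketch: a frozen quality predictor stands in for a non-intrusive PESQ estimator, and the enhancement model is reduced to a single scalar gain trained to maximize the predicted score. Everything here (the quadratic `predicted_quality`, the gain parameterization, the step sizes) is an illustrative assumption, not the paper's actual PESQNet or DNS architecture.

```python
def predicted_quality(signal_gain: float) -> float:
    """Toy stand-in for a frozen quality-prediction network.
    Peaks at gain = 1.0 with a PESQ-like score of 4.5 (assumed shape)."""
    return 4.5 - 2.0 * (signal_gain - 1.0) ** 2

def reference_free_loss(gain: float) -> float:
    # No clean reference is needed: the loss is just the negated
    # predicted quality score of the enhanced output.
    return -predicted_quality(gain)

def train(gain: float, lr: float = 0.1, steps: int = 50) -> float:
    """Gradient descent on the reference-free loss via finite differences."""
    eps = 1e-6
    for _ in range(steps):
        grad = (reference_free_loss(gain + eps)
                - reference_free_loss(gain - eps)) / (2 * eps)
        gain -= lr * grad
    return gain

g0 = 0.2
g = train(g0)
print(predicted_quality(g0), predicted_quality(g))  # score rises toward 4.5
```

The point of the sketch is only the training signal: because the predictor itself supplies the loss, no paired clean reference is required, which is what lets real (unpaired) noisy data enter DNS training.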
no code implementations • 27 Apr 2023 • Ziyi Xu, Xue Cheng
In an extended Kyle's model, the interactions between a large informed trader and a high-frequency trader (HFT) who can anticipate the former's incoming order are studied.
1 code implementation • 18 Apr 2023 • Ziyi Xu, Ziyue Zhao, Tim Fingscheidt
We illustrate the potential of this model by predicting the PESQ scores of wideband-coded speech obtained from AMR-WB or EVS codecs operating at different bitrates in noisy, tandeming, and error-prone transmission conditions.
no code implementations • 11 Nov 2022 • Ziyi Xu, Xue Cheng
This paper studies the influence of a high-frequency trader (HFT) on a large trader whose future trading the HFT can predict.
no code implementations • 4 May 2022 • Ziyi Xu, Maximilian Strake, Tim Fingscheidt
Detailed analyses show that the DNS trained with the MF-intrusive PESQNet outperforms the Interspeech 2021 DNS Challenge baseline and the same DNS trained with an MSE loss by 0.23 and 0.12 PESQ points, respectively.
no code implementations • 6 Nov 2021 • Ziyi Xu, Maximilian Strake, Tim Fingscheidt
Perceptual evaluation of speech quality (PESQ) is a widely used metric for evaluating speech quality.
no code implementations • 31 Mar 2021 • Ziyi Xu, Maximilian Strake, Tim Fingscheidt
Most speech enhancement neural networks are trained in a fully supervised way, with losses requiring the noisy speech to be synthesized from clean speech and additive noise.
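The synthesis step that such fully supervised losses rely on is simply mixing clean speech with noise at a chosen signal-to-noise ratio. A minimal sketch of that mixing, using plain Python lists and assumed toy signals (a sine for "speech", uniform noise for "noise"):

```python
import math
import random

def mix_at_snr(clean, noise, snr_db):
    """Scale the noise so the clean-to-noise power ratio equals snr_db,
    then add it to the clean signal (the usual supervised data setup)."""
    p_clean = sum(s * s for s in clean) / len(clean)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Target noise power is p_clean / 10^(snr_db / 10).
    scale = math.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(clean, noise)]

# Toy stand-ins for clean speech and additive noise.
clean = [math.sin(0.05 * i) for i in range(1000)]
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(1000)]
noisy = mix_at_snr(clean, noise, snr_db=5.0)
```

Because both `clean` and `noisy` are available after this step, an MSE-style loss between the network output and `clean` can be computed; real recordings, where no clean reference exists, cannot be used this way.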