no code implementations • 21 Apr 2024 • Hongyu Zhu, Sichu Liang, Wentao Hu, Fangqi Li, Ju Jia, Shilin Wang
With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models is becoming paramount.
no code implementations • 10 Jan 2024 • Huafeng Qin, Hongyu Zhu, Xin Jin, Qun Song, Mounim A. El-Yacoubi, Xinbo Gao
To this end, we propose a mixed block consisting of three modules: a Transformer, an attention long short-term memory (attention LSTM) module, and a Fourier transformer.
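A minimal sketch of how such a three-branch block could be wired up, assuming an FNet-style FFT mixer for the Fourier transformer and a multi-head attention layer over the LSTM states; the layer choices and dimensions are illustrative assumptions, not the paper's exact architecture:

```python
# Minimal sketch of a "mixed block" combining a Transformer layer, an
# attention-augmented LSTM, and a Fourier (FFT-based) mixing module.
import torch
import torch.nn as nn

class MixedBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.transformer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, seq, dim)
        t = self.transformer(x)                # global self-attention branch
        h, _ = self.lstm(x)                    # recurrent branch
        h, _ = self.attn(h, h, h)              # attention over LSTM states
        f = torch.fft.fft(x, dim=1).real       # FNet-style Fourier token mixing
        return self.norm(t + h + f)            # fuse the three branches

x = torch.randn(2, 16, 64)
print(MixedBlock(64)(x).shape)                 # torch.Size([2, 16, 64])
```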
1 code implementation • 16 Sep 2023 • Hongyu Zhu, Sichu Liang, Wentao Hu, Fang-Qi Li, Yali Yuan, Shi-Lin Wang, Guang Cheng
As a modern ensemble technique, Deep Forest (DF) employs a cascading structure to construct deep models, providing stronger representational power than traditional decision forests.
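A minimal sketch of the cascading idea: each level's forests emit class-probability vectors that are concatenated onto the features fed to the next level. The real algorithm uses cross-validated (out-of-fold) probabilities to avoid overfitting; this sketch fits in-sample for brevity, and all hyperparameters are illustrative assumptions:

```python
# Deep Forest cascade sketch: stack levels of forests, augmenting the raw
# features with each level's predicted class probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

features = X
for level in range(3):                         # three cascade levels
    level_probs = []
    for Forest in (RandomForestClassifier, ExtraTreesClassifier):
        clf = Forest(n_estimators=50, random_state=level).fit(features, y)
        level_probs.append(clf.predict_proba(features))
    # next level sees raw features plus this level's probability vectors
    features = np.hstack([X] + level_probs)

print(features.shape)                          # (500, 20 + 2 * n_classes)
```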
no code implementations • 26 Feb 2022 • Hongyu Zhu, Stefan Klus, Tuhin Sahai
Our proposed method builds on the existing wave equation clustering algorithm, which propagates waves through the graph.
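A minimal sketch of that wave-propagation step, assuming the standard leapfrog discretization u(t+1) = 2u(t) − u(t−1) − c²Lu(t) with the normalized graph Laplacian (stable for c ≤ √2); the graph and parameters are illustrative assumptions, and the full algorithm then clusters nodes by the phases of their FFT coefficients at peak frequencies:

```python
# Simulate the discretized wave equation on a graph and compute the
# per-node spectra that wave-equation clustering operates on.
import numpy as np
import networkx as nx

G = nx.planted_partition_graph(2, 10, 0.9, 0.05, seed=0)  # two communities
L = nx.normalized_laplacian_matrix(G).toarray()
n, c, T = L.shape[0], 1.0, 512

rng = np.random.default_rng(0)
u = rng.standard_normal(n)                     # random initial displacement
u_prev = u.copy()                              # zero initial velocity
history = []
for _ in range(T):
    u, u_prev = 2 * u - u_prev - c**2 * (L @ u), u   # leapfrog update
    history.append(u.copy())

# Each node's time series mixes graph eigenmodes; low-frequency peaks
# correspond to small Laplacian eigenvalues, and the sign/phase of the
# FFT coefficient there encodes cluster membership.
F = np.fft.rfft(np.array(history), axis=0)     # per-node spectra, (T//2+1, n)
print(np.abs(F).shape)
```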
1 code implementation • 16 Dec 2021 • Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu, Haifeng Wang
For this purpose, we create a Chinese dataset, named DuQM, which contains natural questions with linguistic perturbations to evaluate the robustness of question matching models.
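A minimal sketch of the robustness evaluation such a dataset enables: compare a matching model's accuracy on standard question pairs against linguistically perturbed ones. The data examples and the toy `overlap_model` below are hypothetical placeholders, not the DuQM API:

```python
# Robustness as an accuracy gap between standard and perturbed pairs.
def accuracy(model, data):
    return sum(int(model(q1, q2) == y) for q1, q2, y in data) / len(data)

def overlap_model(q1, q2):                     # toy matcher: word-overlap rule
    w1, w2 = set(q1.split()), set(q2.split())
    return int(len(w1 & w2) / max(len(w1 | w2), 1) > 0.3)

standard = [("how tall is Mount Tai", "what is the height of Mount Tai", 1)]
perturbed = [("how tall is Mount Tai", "what is the height of Mount Hua", 0)]

gap = accuracy(overlap_model, standard) - accuracy(overlap_model, perturbed)
print(f"robustness gap: {gap:.2f}")            # larger gap = less robust
```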
no code implementations • 10 Oct 2021 • Dominic Flocco, Bryce Palmer-Toy, Ruixiao Wang, Hongyu Zhu, Rishi Sonthalia, Junyuan Lin, Andrea L. Bertozzi, P. Jeffrey Brantingham
The construction and application of knowledge graphs have seen a rapid increase across many disciplines in recent years.
no code implementations • 25 Feb 2021 • Jingjing Li, Zhuo Sun, Lei Zhang, Hongyu Zhu
The security constraints of this method are constructed only from the input and output signal samples of the legitimate and eavesdropper channels, with the benefit that training the encoder is completely independent of the decoder.
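A loose sketch of that stated idea: the encoder's objective is built only from channel input/output samples, so no decoder appears in the training loop. The channel models, power constraint, and pairwise-distance loss below are illustrative assumptions, not the paper's exact formulation:

```python
# Train an encoder from channel outputs alone: keep codewords separable
# after the legitimate channel, compressed after the eavesdropper channel.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
H_eaves = torch.randn(8, 8) * 0.5              # fixed eavesdropper fading
msgs = torch.eye(4)                            # one-hot messages

for step in range(500):
    code = encoder(msgs)
    code = code / code.norm(dim=1, keepdim=True)          # unit-power constraint
    y_legal = code + 0.1 * torch.randn_like(code)         # legitimate channel
    y_eaves = code @ H_eaves.T + 0.1 * torch.randn_like(code)  # eavesdropper
    # both terms use channel output samples only -- never a decoder
    loss = -torch.pdist(y_legal).mean() + torch.pdist(y_eaves).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```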
no code implementations • 5 Jun 2020 • Hongyu Zhu, Amar Phanishayee, Gennady Pekhimenko
Modern deep neural network (DNN) training jobs use complex and heterogeneous software/hardware stacks.
4 code implementations • CVPR 2019 • Yunbo Wang, Jianjin Zhang, Hongyu Zhu, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Natural spatiotemporal processes can be highly non-stationary in many ways: low-level non-stationarity, such as the spatial correlations or temporal dependencies of local pixel values, and high-level variations, such as the accumulation, deformation, or dissipation of radar echoes in precipitation forecasting.
Ranked #5 on Video Prediction on Human3.6M
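One classical way to see why differencing-style mechanisms help with such non-stationarity, a motivation this line of work builds on, is that a first-order difference removes a deterministic trend. A minimal numeric illustration, with a synthetic series assumed purely for demonstration:

```python
# First-order differencing turns a trended (non-stationary) series into
# an approximately stationary one.
import numpy as np

t = np.arange(200)
series = 0.05 * t + np.sin(0.3 * t)            # trend + periodic signal
diff = np.diff(series)                         # first-order difference

print(series[:50].mean(), series[-50:].mean()) # mean drifts with the trend
print(diff[:50].mean(), diff[-50:].mean())     # difference has stable mean
```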
no code implementations • 16 Mar 2018 • Hongyu Zhu, Mohamed Akrout, Bojian Zheng, Andrew Pelegris, Amar Phanishayee, Bianca Schroeder, Gennady Pekhimenko
Our primary goal in this work is to break this myopic view by (i) proposing a new benchmark for DNN training, called TBD (short for Training Benchmark for DNNs), which uses a representative set of DNN models covering a wide range of machine learning applications: image classification, machine translation, speech recognition, object detection, adversarial networks, and reinforcement learning; and (ii) performing an extensive performance analysis of training these different applications on three major deep learning frameworks (TensorFlow, MXNet, CNTK) across different hardware configurations (single-GPU, multi-GPU, and multi-machine).
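A minimal sketch of the core measurement behind such a benchmark: training throughput (examples/sec) for one model on one configuration. The model, batch size, and step count are illustrative assumptions, not TBD's actual workloads:

```python
# Measure training throughput for a toy model and fixed batch.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 512); y = torch.randint(0, 10, (64,))

start, steps = time.time(), 100
for _ in range(steps):
    opt.zero_grad()
    loss_fn(model(x), y).backward()            # forward + backward pass
    opt.step()
print(f"{steps * x.shape[0] / (time.time() - start):.0f} examples/sec")
```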