no code implementations • 8 Mar 2024 • Yitao Zhu, Sheng Wang, Mengjie Xu, Zixu Zhuang, Zhixin Wang, Kaidong Wang, Han Zhang, Qian Wang
Next, instead of simply averaging models across views, we train a network to determine the fusion weights of the individual views, based on the parameters estimated for the joints and hands of the human body, as well as the camera positions.
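The learned-weight fusion described above can be sketched as a softmax-weighted average over per-view parameter estimates. This is a minimal illustration, not the paper's network: the `view_scores` stand in for the logits a trained fusion network would produce, and the function names are hypothetical.

```python
import numpy as np

def fuse_views(view_params, view_scores):
    """Fuse per-view parameter estimates with learned weights.

    view_params : (V, D) array, one parameter vector per camera view
    view_scores : (V,) array, unnormalized weight logits (in the paper
                  these would come from a trained network; here they are
                  placeholders)
    """
    w = np.exp(view_scores - view_scores.max())
    w /= w.sum()                       # softmax -> convex combination
    return w @ view_params             # weighted average over views

params = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 views, 2 params
scores = np.array([0.0, 0.0, np.log(2.0)])               # favor view 3
fused = fuse_views(params, scores)                        # -> [3.5, 4.5]
```

Uniform scores reduce this to the simple cross-view averaging the paper replaces; unequal scores let the fusion trust better-placed cameras more.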
no code implementations • 4 Oct 2023 • Kaidong Wang, Yao Wang, Xiuwu Liao, Shaojie Tang, Can Yang, Deyu Meng
For the model, we establish a rigorous mathematical representation of the dynamic graph, from which we derive a new tensor-oriented graph smoothness regularizer.
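The paper's regularizer is tensor-oriented, but it builds on the standard matrix-case graph smoothness penalty, which can be written as the Laplacian quadratic form. A minimal sketch of that underlying quantity (not the paper's exact formulation):

```python
import numpy as np

def smoothness(X, W):
    """Graph smoothness penalty tr(X^T L X) with L = D - W.

    X : (n, d) array of node signals
    W : (n, n) symmetric nonnegative adjacency matrix
    Equals 0.5 * sum_{i,j} W[i, j] * ||X[i] - X[j]||^2, so it is small
    when signals vary slowly across strongly connected nodes.
    """
    L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian
    return np.trace(X.T @ L @ X)

W = np.array([[0.0, 1.0], [1.0, 0.0]])  # two nodes joined by one edge
X = np.array([[0.0], [3.0]])
penalty = smoothness(X, W)               # ||x1 - x2||^2 = 9
```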
1 code implementation • 12 Feb 2023 • Biao Xu, Yao Wang, Xiuwu Liao, Kaidong Wang
In this paper, we propose deep boosting decision trees (DBDT), a novel approach for fraud detection based on gradient boosting and neural networks.
no code implementations • 23 Aug 2021 • Qianxin Yi, Chenhao Wang, Kaidong Wang, Yao Wang
Low-tubal-rank tensor approximation has been proposed to analyze large-scale and multi-dimensional data.
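A low-tubal-rank approximation is conventionally computed via the t-SVD: take the FFT along the third mode, truncate the SVD of each frontal slice, and transform back. The sketch below shows that standard construction under those assumptions; it is not necessarily the paper's exact algorithm.

```python
import numpy as np

def low_tubal_rank_approx(T, r):
    """Tubal-rank-r approximation of a 3-way tensor via truncated t-SVD.

    T : (n1, n2, n3) real array
    r : target tubal rank
    Steps: FFT along mode 3, rank-r SVD truncation on every frontal
    slice in the Fourier domain, then inverse FFT back.
    """
    F = np.fft.fft(T, axis=2)                # to the Fourier domain
    out = np.empty_like(F)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        out[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]  # keep top r
    return np.real(np.fft.ifft(out, axis=2))

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 2))
T1 = low_tubal_rank_approx(T, 1)             # tubal-rank-1 approximation
```

Truncating to full rank recovers the tensor exactly, and smaller `r` trades accuracy for a much more compact representation, which is what makes this useful for large-scale multi-dimensional data.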
no code implementations • 23 Jun 2021 • Shao-Bo Lin, Kaidong Wang, Yao Wang, Ding-Xuan Zhou
Compared with the intense practical research activity on deep convolutional neural networks (DCNNs), the study of their theoretical behavior lags far behind.
no code implementations • 20 Jun 2017 • Kaidong Wang, Yao Wang, Qian Zhao, Deyu Meng, Zongben Xu
Specifically, the loss implicitly minimized by traditional AdaBoost is the exponential loss, which is known to be highly sensitive to random noise and outliers.
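The sensitivity claim can be seen numerically: for a confidently misclassified point (large negative margin), the exponential loss explodes while, for comparison, the logistic loss grows only about linearly. A minimal illustration:

```python
import numpy as np

# Exponential loss exp(-m) vs logistic loss log(1 + exp(-m)) at margin m.
# AdaBoost implicitly minimizes the exponential loss, so one badly
# mislabeled point with a large negative margin can dominate training.
margin = -10.0                          # confidently misclassified outlier
exp_loss = np.exp(-margin)              # ~2.2e4: the outlier dominates
log_loss = np.log1p(np.exp(-margin))    # ~10: roughly linear in |margin|
```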