1 code implementation • 28 Aug 2023 • Rui Zhang, Hongxia Wang, Mingshan Du, Hanqing Liu, Yang Zhou, Qiang Zeng
Our approach introduces a Temporal Feature Abnormal Attention (TFAA) module based on temporal feature reconstruction to enhance the detection of temporal differences.
Ranked #1 on Temporal Forgery Localization on LAV-DF
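As a rough illustration of the idea only (a sketch: aside from the module name, the autoencoder sizes and the softmax weighting below are assumptions, not the paper's design), per-frame reconstruction error over time can be turned into an attention map that emphasizes abnormal frames:

```python
import torch
import torch.nn as nn

class TFAASketch(nn.Module):
    """Reconstruct temporal features with a small autoencoder and turn the
    per-frame reconstruction error into attention weights, so frames whose
    features reconstruct poorly (candidate forgeries) are emphasized."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden)
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):                     # x: (batch, time, dim)
        recon = self.decoder(torch.relu(self.encoder(x)))
        err = (x - recon).pow(2).mean(dim=-1, keepdim=True)   # (B, T, 1)
        attn = torch.softmax(err, dim=1)      # high error -> high weight
        return x * attn, attn
```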
no code implementations • 16 Aug 2023 • Ziyang Yuan, Haoxing Yang, Ningyi Leng, Hongxia Wang
Furthermore, two methods called Background Douglas-Rachford (BDR) and Convex Background Douglas-Rachford (CBDR) are proposed.
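For context, both methods build on Douglas-Rachford splitting. Below is a minimal sketch of the classical iteration for phase retrieval, with the background constraint stood in by a simple support projector; the actual BDR/CBDR updates differ from this generic form:

```python
import numpy as np

def proj_magnitude(z, mag):
    """Project onto {z : |FFT(z)| = mag}: keep the Fourier phase, swap in mag."""
    Z = np.fft.fft2(z)
    return np.fft.ifft2(mag * np.exp(1j * np.angle(Z)))

def proj_support(z, support):
    """Zero everything off the known support (stand-in 'background' constraint)."""
    return np.where(support, z.real, 0.0)

def douglas_rachford(mag, support, iters=200):
    """Classical DR iteration: z <- z + P_B(2 P_A(z) - z) - P_A(z)."""
    z = np.random.rand(*mag.shape)
    for _ in range(iters):
        pa = proj_magnitude(z, mag)
        z = z + proj_support(2 * pa - z, support) - pa
    return proj_magnitude(z, mag)
```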
1 code implementation • 16 Jul 2023 • Liyuan Ma, Hongxia Wang, Ningyi Leng, Ziyang Yuan
FPR with few measurements is important for reducing time and hardware costs, but it suffers from serious ill-posedness.
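A toy numpy illustration of that ill-posedness, assuming FPR here denotes Fourier phase retrieval: with magnitude-only, subsampled measurements, many distinct signals are indistinguishable.

```python
import numpy as np

n, m = 128, 32                           # signal length vs. measurement count
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, m, replace=False)] = True

y = np.abs(np.fft.fft(x))[mask]          # few magnitude-only measurements

# Phase is discarded, so e.g. every circular shift of x yields the same y:
assert np.allclose(np.abs(np.fft.fft(np.roll(x, 5)))[mask], y)
```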
1 code implementation • CVPR 2023 • Zhemin Li, Hongxia Wang, Deyu Meng
The smoothness of the Laplacian matrix is further integrated by parameterizing DE with a tiny INR.
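A sketch of one reading of this sentence, with both pieces labeled as assumptions: DE is taken to be the Dirichlet energy tr(UᵀLU), and the tiny INR is taken to generate the adjacency behind the Laplacian, so the Laplacian inherits the network's smoothness. Neither detail is confirmed by the excerpt.

```python
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    """Tiny coordinate MLP generating a smooth adjacency matrix, so the
    resulting Laplacian inherits the network's smoothness."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, coords):               # coords: (n*n, 2) in [0, 1]^2
        return self.net(coords)

n = 32
ij = torch.cartesian_prod(torch.linspace(0, 1, n), torch.linspace(0, 1, n))
A = TinyINR()(ij).reshape(n, n).sigmoid()    # INR-parameterized adjacency
A = 0.5 * (A + A.T)                          # symmetrize
L = torch.diag(A.sum(1)) - A                 # graph Laplacian

U = torch.randn(n, 8, requires_grad=True)    # representation being regularized
de = torch.trace(U.T @ L @ U)                # Dirichlet energy regularizer
```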
no code implementations • 23 Oct 2022 • Liyuan Ma, Hongxia Wang, Ningyi Leng, Ziyang Yuan
An untrained generative network is then embedded in the iterative process of ADMM to project the estimated signal into the generative space, and the projected signal is fed into the next ADMM iteration.
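A minimal sketch of that generative-projection step; the untrained network, step counts, and the surrounding ADMM splitting shown in comments are generic stand-ins, not the paper's exact formulation:

```python
import torch

def project_to_generative(x_est, net, z, steps=50, lr=1e-3):
    """Fit an untrained network to x_est for a few steps and return its
    output: the 'projection' of x_est onto the generative space."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (net(z) - x_est).pow(2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()

# Inside a standard ADMM loop (sketch; A, y, rho are the problem data):
#   x <- argmin_x ||Ax - y||^2 + (rho/2)||x - v + u||^2   (data step)
#   v <- project_to_generative(x + u, net, z)             (generative step)
#   u <- u + x - v                                        (dual update)
```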
1 code implementation • 11 Aug 2022 • Zhemin Li, Tao Sun, Hongxia Wang, Bao Wang
Theoretically, we show that the adaptive regularization of AIR enhances the implicit regularization and vanishes at the end of training.
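A toy numerical illustration of the stated vanishing behavior only: the adaptive rule below, which ties the penalty weight to the current data misfit, is invented for illustration and is not AIR's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))        # underdetermined, so Ax = y is solvable
y = rng.standard_normal(10)
x = np.zeros(20)
for _ in range(3000):
    r = A @ x - y
    lam = np.linalg.norm(r) ** 2         # adaptive weight tied to the misfit
    x -= 1e-3 * (A.T @ r + lam * x)      # ridge-like penalty, weight decaying
print(np.linalg.norm(A @ x - y), lam)    # both shrink toward 0 as training ends
```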
no code implementations • 9 Aug 2022 • Ke Chen, Dandan Jiang, Bo Wang, Hongxia Wang
First, the fault detection matrix is constructed and the event detection problem is reformulated as a two-sample test on covariance matrices.
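As a sketch of that second step, a two-sample covariance statistic can be as simple as a Frobenius distance between sample covariances; the paper's statistic and its null distribution may differ.

```python
import numpy as np

def cov_two_sample_stat(X1, X2):
    """Toy two-sample covariance statistic: squared Frobenius distance
    between the two sample covariance matrices."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    return np.linalg.norm(S1 - S2, "fro") ** 2

rng = np.random.default_rng(0)
normal = rng.standard_normal((500, 5))             # reference window
faulty = rng.standard_normal((500, 5)) * 1.5       # inflated variance = "event"
print(cov_two_sample_stat(normal, rng.standard_normal((500, 5))))  # small
print(cov_two_sample_stat(normal, faulty))                         # large
```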
2 code implementations • 12 Oct 2021 • Zhemin Li, Tao Sun, Hongxia Wang, Bao Wang
Theoretically, we show that the adaptive regularization of AIR enhances the implicit regularization and vanishes at the end of training.
1 code implementation • 28 Sep 2020 • Xianchen Zhou, Yaoyun Zeng, Hongxia Wang
Unlike the original GAT, which applies the attention mechanism to different edges but remains sensitive to perturbations, RoGAT progressively adds an extra dynamic attention score to improve robustness.
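A dense-graph PyTorch sketch of the idea: standard GAT attention scores plus an extra learnable per-edge term. All layer details below are assumptions, and `adj` is assumed to include self-loops so every softmax row has at least one finite entry.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoGATLayerSketch(nn.Module):
    """GAT-style attention plus an extra learnable per-edge 'dynamic' score
    that training can adjust to down-weight perturbed edges."""
    def __init__(self, in_dim, out_dim, n_nodes):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)
        self.dyn = nn.Parameter(torch.zeros(n_nodes, n_nodes))  # extra score

    def forward(self, x, adj):        # x: (n, in_dim); adj: (n, n), self-loops
        h = self.W(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1)) + self.dyn  # dynamic score
        alpha = torch.softmax(e.masked_fill(adj == 0, float("-inf")), dim=1)
        return alpha @ h
```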
2 code implementations • 29 Jul 2020 • Zhemin Li, Zhi-Qin John Xu, Tao Luo, Hongxia Wang
In this work, we propose a Regularized Deep Matrix Factorized (RDMF) model for image restoration, which combines the implicit low-rank bias of deep neural networks with the explicit bias of total variation.
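A minimal sketch combining the two biases; the sizes, depth, and regularization weight are arbitrary, and the sampling operator here is a simple observation mask.

```python
import torch

def tv(X):
    """Anisotropic total variation of a matrix."""
    return ((X[1:, :] - X[:-1, :]).abs().sum()
            + (X[:, 1:] - X[:, :-1]).abs().sum())

torch.manual_seed(0)
n, lam = 32, 0.01
target = torch.rand(n, n)
mask = torch.rand(n, n) < 0.5                 # observed entries
Ws = [(0.1 * torch.randn(n, n)).requires_grad_() for _ in range(3)]
opt = torch.optim.Adam(Ws, lr=1e-2)
for _ in range(500):
    X = Ws[0] @ Ws[1] @ Ws[2]                 # depth: implicit low-rank bias
    loss = ((X - target)[mask] ** 2).mean() + lam * tv(X)  # explicit TV bias
    opt.zero_grad()
    loss.backward()
    opt.step()
```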