no code implementations • 9 Apr 2024 • ChenGuang Liu, Guangshuai Gao, Ziyue Huang, Zhenghui Hu, Qingjie Liu, Yunhong Wang
2) Small object size leads to insufficient information for effective detection.
no code implementations • 7 Apr 2024 • ChenGuang Liu, Chisheng Wang, Feifei Dong, Xin Su, Chuanhua Zhu, Dejin Zhang, Qingquan Li
In this work, we study the performance that can be achieved by state-of-the-art deep learning-based edge detectors on publicly available datasets when they are trained from scratch, and devise a new network architecture, the multi-stream and multi-scale fusion net (msmsfnet), for edge detection.
no code implementations • 16 Dec 2023 • ChenGuang Liu, Jianjun Chen, Yunfei Chen, Ryan Payton, Michael Riley, Shuang-Hua Yang
The performance of cooperative perception is investigated in different system settings.
no code implementations • 23 Nov 2023 • ChenGuang Liu, Yuxin Zhou, Yunfei Chen, Shuang-Hua Yang
In this paper, we consider the semantic communication (SemCom) system with multiple users, where there is a limited number of training samples and unexpected interference.
no code implementations • 17 Nov 2023 • ChenGuang Liu, Yunfei Chen, Jianjun Chen, Ryan Payton, Michael Riley, Shuang-Hua Yang
A new late fusion scheme is proposed to leverage the robustness of intermediate features.
no code implementations • 23 May 2023 • Kexin Jin, ChenGuang Liu, Jonas Latz
Stochastic Gradient Langevin Dynamics (SGLD) is widely used to approximate Bayesian posterior distributions in statistical learning procedures with large-scale data.
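The SGLD update combines a mini-batch estimate of the gradient of the log posterior with injected Gaussian noise of matched scale. A minimal sketch (the Gaussian-mean model, prior, step size, and all variable names here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Hypothetical toy problem: sample the posterior of a 1-D Gaussian mean
# (unit-variance likelihood, N(0, 10^2) prior) from synthetic data.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)
n, batch_size, eta, n_steps = len(data), 64, 1e-3, 5000

theta = 0.0   # parameter (the mean) being sampled
samples = []
for t in range(n_steps):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Unbiased mini-batch estimate of the gradient of the log posterior:
    # the likelihood term is rescaled by n / batch_size.
    grad_log_prior = -theta / 100.0
    grad_log_lik = (n / batch_size) * np.sum(batch - theta)
    grad = grad_log_prior + grad_log_lik
    # SGLD step: half-step gradient ascent plus sqrt(eta)-scaled Gaussian noise.
    theta += 0.5 * eta * grad + np.sqrt(eta) * rng.normal()
    if t > n_steps // 2:   # discard the first half as burn-in
        samples.append(theta)

print(np.mean(samples))  # should be close to the empirical data mean
```

With the mini-batch gradient replaced by the full-data gradient, this reduces to the unadjusted Langevin algorithm; the subsampling is what makes each step cheap at large n.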
no code implementations • 8 Sep 2022 • Kexin Jin, Jonas Latz, ChenGuang Liu, Alessandro Scagliotti
This model is a piecewise-deterministic Markov process that represents the particle movement by an underdamped dynamical system and the data subsampling through a stochastic switching of the dynamical system.
no code implementations • 7 Dec 2021 • Kexin Jin, Jonas Latz, ChenGuang Liu, Carola-Bibiane Schönlieb
Optimization problems with continuous data appear in, e.g., robust machine learning, functional data analysis, and variational inference.
no code implementations • 15 Oct 2020 • Ling Wang, Cheng Zhang, Zejian Luo, ChenGuang Liu, Jie Liu, Xi Zheng, Athanasios Vasilakos
To reduce the computational cost without loss of generality, we present a defense strategy called progressive defense against adversarial attacks (PDAAA). It efficiently and effectively filters out the adversarial pixel mutations that could mislead the neural network toward erroneous outputs, without a priori knowledge of the attack type.