Search Results for author: ChenGuang Liu

Found 9 papers, 0 papers with code

Msmsfnet: a multi-stream and multi-scale fusion net for edge detection

no code implementations7 Apr 2024 ChenGuang Liu, Chisheng Wang, Feifei Dong, Xin Su, Chuanhua Zhu, Dejin Zhang, Qingquan Li

In this work, we study the performance achieved by state-of-the-art deep-learning-based edge detectors on publicly available datasets when they are trained from scratch, and devise a new network architecture, the multi-stream and multi-scale fusion net (msmsfnet), for edge detection.

Edge Detection Fairness
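
The snippet names the architecture but gives no details, so the following is only an illustrative sketch of what a multi-stream, multi-scale fusion block could look like in PyTorch; the kernel sizes, channel counts, and fusion rule are assumptions, not the actual msmsfnet design.

```python
# Hypothetical multi-stream, multi-scale fusion block for edge detection;
# this is NOT the msmsfnet architecture, only an illustration of the idea.
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Parallel streams with different receptive fields (illustrative choice).
        self.streams = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(s(x)) for s in self.streams]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# Example: map a single RGB image to a 1-channel edge logit map.
block = MultiScaleFusionBlock(3, 16)
head = nn.Conv2d(16, 1, kernel_size=1)
edge_logits = head(block(torch.randn(1, 3, 64, 64)))  # shape (1, 1, 64, 64)
```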

Knowledge Distillation Based Semantic Communications For Multiple Users

no code implementations23 Nov 2023 ChenGuang Liu, Yuxin Zhou, Yunfei Chen, Shuang-Hua Yang

In this paper, we consider the semantic communication (SemCom) system with multiple users, where there is a limited number of training samples and unexpected interference.

Knowledge Distillation Model Compression +1
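
As a point of reference for the knowledge-distillation component, here is a minimal, generic distillation loss: a student model is fit to both the hard labels and the teacher's softened outputs. The temperature, weighting, and the way teacher and student map onto the multi-user semantic-communication setup are assumptions; the paper's exact formulation is not given in the snippet.

```python
# Generic knowledge-distillation loss (illustrative; not the paper's exact loss).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Soft targets from the teacher, softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Ordinary supervised loss on the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example with random tensors standing in for a user's receiver outputs.
s = torch.randn(8, 10)          # student logits
t = torch.randn(8, 10)          # teacher logits
y = torch.randint(0, 10, (8,))  # labels
loss = distillation_loss(s, t, y)
```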

Subsampling Error in Stochastic Gradient Langevin Diffusions

no code implementations23 May 2023 Kexin Jin, ChenGuang Liu, Jonas Latz

Stochastic Gradient Langevin Dynamics (SGLD) is widely used to approximate Bayesian posterior distributions in statistical learning procedures with large-scale data.
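
For context, a minimal SGLD sketch on a toy Gaussian model is given below: the log-posterior gradient is estimated from a minibatch (the subsampling that the paper analyses) and Gaussian noise of matching scale is injected at each step. The model, prior, and step size here are illustrative assumptions, not the paper's setting.

```python
# Minimal SGLD sketch with subsampled gradients (illustrative toy example).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=10_000)

def grad_log_post(theta, batch, n_total):
    # Gaussian prior N(0, 10), unit-variance Gaussian likelihood;
    # the minibatch gradient is rescaled by n_total / len(batch).
    grad_prior = -theta / 10.0
    grad_lik = (n_total / len(batch)) * np.sum(batch - theta)
    return grad_prior + grad_lik

theta, eps, n_steps, batch_size = 0.0, 1e-4, 5_000, 100
samples = []
for _ in range(n_steps):
    batch = rng.choice(data, size=batch_size, replace=False)
    theta += 0.5 * eps * grad_log_post(theta, batch, len(data)) \
             + np.sqrt(eps) * rng.normal()
    samples.append(theta)
# Late samples approximate the posterior over the data mean (close to 1.0).
```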

Losing momentum in continuous-time stochastic optimisation

no code implementations8 Sep 2022 Kexin Jin, Jonas Latz, ChenGuang Liu, Alessandro Scagliotti

The proposed model is a piecewise-deterministic Markov process that represents the particle movement by an underdamped dynamical system and the data subsampling through a stochastic switching of the dynamical system.

Image Classification
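
A rough numerical sketch of such a process is given below: between exponentially distributed switching times the state follows damped, momentum-driven dynamics on the current data subsample, and at each switch a new subsample is drawn. The objective, switching rate, and Euler discretisation are assumptions for illustration, not the paper's exact model or analysis.

```python
# Illustrative Euler simulation of underdamped dynamics with stochastic
# data-subsample switching (piecewise-deterministic between switches).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=1_000)

def grad_f(x, idx):
    # Subsampled least-squares gradient over the current data subset.
    return np.mean(x - data[idx])

x, v, gamma, dt, rate, T = 0.0, 0.0, 1.0, 1e-3, 5.0, 20.0
t, next_switch = 0.0, rng.exponential(1.0 / rate)
idx = rng.choice(len(data), size=32, replace=False)
while t < T:
    if t >= next_switch:                       # resample the data subset
        idx = rng.choice(len(data), size=32, replace=False)
        next_switch += rng.exponential(1.0 / rate)
    x += v * dt
    v += -(gamma * v + grad_f(x, idx)) * dt    # damped, momentum-driven update
    t += dt
# x ends up near the data mean (about 2.0).
```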

A Continuous-time Stochastic Gradient Descent Method for Continuous Data

no code implementations7 Dec 2021 Kexin Jin, Jonas Latz, ChenGuang Liu, Carola-Bibiane Schönlieb

Optimization problems with continuous data appear in, e.g., robust machine learning, functional data analysis, and variational inference.

Stochastic Optimization Variational Inference
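
To make the continuous-data idea concrete, here is a toy sketch in which the "data point" evolves as a continuous-time process while the parameter follows a gradient flow driven by the current sample. The objective and the Ornstein-Uhlenbeck data process are illustrative assumptions; the concrete processes studied in the paper are not given in the snippet.

```python
# Toy continuous-time sketch: a data process x(t) and a parameter process
# theta(t) driven by the gradient on the current sample (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

def grad_theta(theta, x):
    # Toy objective f(theta, x) = 0.5 * (theta - x) ** 2.
    return theta - x

theta, x, dt, T = 0.0, 0.0, 1e-3, 10.0
for _ in range(int(T / dt)):
    # Assumed data process: Ornstein-Uhlenbeck reverting to 1.0.
    x += (1.0 - x) * dt + 0.5 * np.sqrt(dt) * rng.normal()
    # Parameter process: gradient flow on the current data sample.
    theta += -grad_theta(theta, x) * dt
# theta drifts towards the mean of the data process (about 1.0).
```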

Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things

no code implementations15 Oct 2020 Ling Wang, Cheng Zhang, Zejian Luo, ChenGuang Liu, Jie Liu, Xi Zheng, Athanasios Vasilakos

To reduce the computational cost without loss of generality, we present a defense strategy called progressive defense against adversarial attacks (PDAAA) that efficiently and effectively filters out adversarial pixel mutations, which could mislead the neural network towards erroneous outputs, without a priori knowledge of the attack type.
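
The snippet does not describe how PDAAA performs the filtering, so the following is only a highly simplified, hypothetical illustration of attack-agnostic pixel filtering: pixels that deviate strongly from their local median are treated as possible adversarial mutations and replaced. The filter choice and threshold are assumptions, not the paper's progressive strategy.

```python
# Hypothetical attack-agnostic pixel filtering (NOT the PDAAA method).
import numpy as np
from scipy.ndimage import median_filter

def filter_pixel_mutations(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    smoothed = median_filter(image, size=3)
    mutated = np.abs(image - smoothed) > threshold   # suspicious pixels
    cleaned = image.copy()
    cleaned[mutated] = smoothed[mutated]              # replace flagged pixels
    return cleaned

# Example on a random grayscale image in [0, 1] with an injected outlier pixel.
rng = np.random.default_rng(3)
img = rng.random((32, 32))
img[5, 5] = 5.0                                       # adversarial-style spike
clean = filter_pixel_mutations(img)
```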
