Search Results for author: Yang Sui

Found 13 papers, 1 paper with code

DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models

no code implementations · 5 Feb 2024 · Yang Sui, Huy Phan, Jinqi Xiao, Tianfang Zhang, Zijie Tang, Cong Shi, Yan Wang, Yingying Chen, Bo Yuan

In this paper, for the first time, we systematically explore the detectability of the poisoned noise input for backdoored diffusion models, an important performance metric that has received little attention in existing work.

Backdoor Attack

ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks

no code implementations · 18 Jan 2024 · Yang Sui, Miao Yin, Yu Gong, Jinqi Xiao, Huy Phan, Bo Yuan

Low-rank compression, a popular model compression technique that produces compact convolutional neural networks (CNNs) with low rankness, has been well-studied in the literature.

Low-rank compression · Model Compression
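The core idea behind low-rank compression, as the snippet above describes it, can be sketched with a truncated SVD of a single weight matrix. This is a generic illustration, not the ELRT training method itself; the matrix shape and target rank are arbitrary choices for the example.

```python
import numpy as np

# Hypothetical illustration of low-rank compression (not the ELRT method):
# replace a dense weight matrix W with two thin factors A @ B of rank r.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))  # a dense fully-connected weight matrix

r = 32  # target rank, the compression knob
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]   # 256 x r factor (singular values folded in)
B = Vt[:r, :]          # r x 512 factor

params_before = W.size
params_after = A.size + B.size
print(f"compression ratio: {params_before / params_after:.2f}x")

# A @ B is the best rank-r approximation of W; the error shrinks as r grows.
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

At inference time the layer computes `x @ A @ B` instead of `x @ W`, trading a small approximation error for fewer parameters and multiply-accumulates.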

Transferable Learned Image Compression-Resistant Adversarial Perturbations

no code implementations · 6 Jan 2024 · Yang Sui, Zhuohang Li, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Zhenzhong Chen

Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.

Adversarial Attack · Autonomous Driving · +4

In-Sensor Radio Frequency Computing for Energy-Efficient Intelligent Radar

no code implementations · 16 Dec 2023 · Yang Sui, Minning Zhu, Lingyi Huang, Chung-Tse Michael Wu, Bo Yuan

Radio Frequency Neural Networks (RFNNs) have demonstrated advantages in realizing intelligent applications across various domains.

Corner-to-Center Long-range Context Model for Efficient Learned Image Compression

no code implementations · 29 Nov 2023 · Yang Sui, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Bo Yuan, Zhenzhong Chen

To tackle this issue, we conduct an in-depth analysis of the performance degradation observed in existing parallel context models, focusing on two aspects: the quantity and the quality of the information used for context prediction and decoding.

Image Compression

Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations

no code implementations · 1 Jun 2023 · Yang Sui, Zhuohang Li, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Zhenzhong Chen

Learned Image Compression (LIC) has recently become the trending technique for image transmission due to its notable performance.

Image Compression · Image Reconstruction

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks

no code implementations · 20 Jan 2023 · Jinqi Xiao, Chengming Zhang, Yu Gong, Miao Yin, Yang Sui, Lizhi Xiang, Dingwen Tao, Bo Yuan

By interpreting automatic rank selection from an architecture search perspective, we develop an end-to-end solution to determine the suitable layer-wise ranks in a differentiable and hardware-aware way.

Low-rank compression · Model Compression

Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition

no code implementations · 5 Dec 2022 · Yu Gong, Miao Yin, Lingyi Huang, Chunhua Deng, Yang Sui, Bo Yuan

Meanwhile, compared with TIE, the state-of-the-art hardware for tensor-decomposed models, our proposed FDHT-LSTM architecture achieves higher throughput, area efficiency, and energy efficiency on the LSTM-Youtube workload.

Tensor Decomposition · Video Recognition

CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness

no code implementations · 4 Dec 2022 · Huy Phan, Miao Yin, Yang Sui, Bo Yuan, Saman Zonouz

Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored improving the adversarial robustness of sparse neural networks.

Adversarial Robustness · Model Compression

CHIP: CHannel Independence-based Pruning for Compact Neural Networks

1 code implementation · NeurIPS 2021 · Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Zonouz, Bo Yuan

Filter pruning has been widely used for neural network compression because it enables practical acceleration.

Neural Network Compression
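Filter pruning, as mentioned in the snippet above, removes entire convolutional filters so the compressed network runs faster without sparse kernels. The sketch below is hedged: CHIP ranks filters by a channel-independence metric, but for simplicity this toy uses the common L1-norm criterion instead, just to show the mechanics of structured filter removal.

```python
import numpy as np

# Toy filter-pruning sketch (NOT CHIP's channel-independence criterion):
# score each output filter of a conv layer and keep only the top fraction.
rng = np.random.default_rng(1)
conv_w = rng.standard_normal((64, 32, 3, 3))  # (out_filters, in_ch, kH, kW)

keep_ratio = 0.5
scores = np.abs(conv_w).sum(axis=(1, 2, 3))       # one L1 score per filter
n_keep = int(len(scores) * keep_ratio)
keep_idx = np.sort(np.argsort(scores)[-n_keep:])  # indices of top filters

pruned_w = conv_w[keep_idx]  # whole filters removed, layout stays dense
print(pruned_w.shape)        # (32, 32, 3, 3)
```

Because whole filters are dropped, the next layer's input channels shrink correspondingly, which is what makes this form of pruning hardware-friendly.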

SPARK: co-exploring model SPArsity and low-RanKness for compact neural networks

no code implementations · 29 Sep 2021 · Wanzhao Yang, Miao Yin, Yang Sui, Bo Yuan

Based on the observations and outcomes from our analysis, we then propose SPARK, a unified DNN compression framework that can simultaneously capture model SPArsity and low-RanKness in an efficient way.
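The two model properties SPARK co-explores, sparsity and low-rankness, can be illustrated with a simple "low-rank plus sparse" decomposition of one weight matrix. To be clear, this is not the SPARK framework, only a minimal numpy sketch of how the two structures can coexist: a truncated-SVD low-rank part plus a sparse residual.

```python
import numpy as np

# Illustrative W ~= L + S decomposition (NOT the SPARK algorithm):
# L captures low-rank structure, S keeps a few large residual entries.
rng = np.random.default_rng(2)
W = rng.standard_normal((128, 128))

# Low-rank part: best rank-r approximation via truncated SVD.
r = 16
U, s, Vt = np.linalg.svd(W, full_matrices=False)
L = (U[:, :r] * s[:r]) @ Vt[:r, :]

# Sparse part: keep only the largest-magnitude 5% of the residual.
R = W - L
thresh = np.quantile(np.abs(R), 0.95)
S = np.where(np.abs(R) >= thresh, R, 0.0)

err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
print(f"relative error of low-rank + sparse approximation: {err:.3f}")
```

Storing the factors of L plus the nonzeros of S costs far less than the dense W, while the combined approximation is strictly more accurate than the low-rank part alone.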

Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework

no code implementations · CVPR 2021 · Miao Yin, Yang Sui, Siyu Liao, Bo Yuan

Notably, on CIFAR-100, with 2.3× and 2.4× compression ratios, our models have 1.96% and 2.21% higher top-1 accuracy than the original ResNet-20 and ResNet-32, respectively.

Image Classification · Model Compression · +2
