no code implementations • 5 Feb 2024 • Yang Sui, Huy Phan, Jinqi Xiao, Tianfang Zhang, Zijie Tang, Cong Shi, Yan Wang, Yingying Chen, Bo Yuan
In this paper, for the first time, we systematically explore the detectability of poisoned noise inputs for backdoored diffusion models, an important performance metric that has received little attention in existing work.
no code implementations • 18 Jan 2024 • Yang Sui, Miao Yin, Yu Gong, Jinqi Xiao, Huy Phan, Bo Yuan
Low-rank compression, a popular model compression technique that produces compact, low-rank convolutional neural networks (CNNs), has been well studied in the literature.
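To make the idea concrete, here is a minimal sketch of generic low-rank compression, factorizing a weight matrix with truncated SVD; this illustrates the general technique only, not the specific method developed in the paper.

```python
# Minimal sketch of low-rank compression via truncated SVD (generic
# illustration only; not the specific method proposed in the paper).
# A weight matrix W (out x in) is replaced by two thin rank-r factors.
import torch

def low_rank_factorize(W: torch.Tensor, r: int):
    """Return A (out x r) and B (r x in) with W ~= A @ B."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]   # absorb singular values into the left factor
    B = Vh[:r, :]
    return A, B

W = torch.randn(512, 1024)            # toy weight matrix
A, B = low_rank_factorize(W, r=64)
print("params:", W.numel(), "->", A.numel() + B.numel())
print("rel. error:", (torch.linalg.norm(W - A @ B) / torch.linalg.norm(W)).item())
```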
no code implementations • 6 Jan 2024 • Yang Sui, Zhuohang Li, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Zhenzhong Chen
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
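As a hedged illustration of this vulnerability, the classic one-step FGSM attack (Goodfellow et al.) perturbs an input along the sign of the loss gradient; this is a textbook example, not the attack studied in the paper above.

```python
# Classic FGSM attack (Goodfellow et al.), shown only to illustrate how a
# small perturbation can flip a classifier's prediction; not the method
# studied in the paper above.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step L-infinity attack: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```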
no code implementations • 16 Dec 2023 • Yang Sui, Minning Zhu, Lingyi Huang, Chung-Tse Michael Wu, Bo Yuan
Radio Frequency Neural Networks (RFNNs) have demonstrated advantages in realizing intelligent applications across various domains.
no code implementations • 29 Nov 2023 • Yang Sui, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Bo Yuan, Zhenzhong Chen
To tackle this issue, we conduct an in-depth analysis of the performance degradation observed in existing parallel context models, focusing on two aspects: the Quantity and Quality of information utilized for context prediction and decoding.
no code implementations • 1 Jun 2023 • Yang Sui, Zhuohang Li, Ding Ding, Xiang Pan, Xiaozhong Xu, Shan Liu, Zhenzhong Chen
Learned Image Compression (LIC) has recently become a leading technique for image transmission due to its notable performance.
no code implementations • 20 Jan 2023 • Jinqi Xiao, Chengming Zhang, Yu Gong, Miao Yin, Yang Sui, Lizhi Xiang, Dingwen Tao, Bo Yuan
By interpreting automatic rank selection from an architecture search perspective, we develop an end-to-end solution to determine the suitable layer-wise ranks in a differentiable and hardware-aware way.
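A hypothetical sketch of the general idea behind differentiable rank selection follows: candidate low-rank branches are mixed by softmax-normalized architecture parameters so that gradients can guide the rank choice. All names and details here are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of differentiable rank selection: the outputs of
# several candidate low-rank branches are mixed by softmax weights
# ("architecture parameters"), so gradient descent can guide the rank
# choice. General idea only; not the paper's actual algorithm.
import torch
import torch.nn as nn

class SoftRankLinear(nn.Module):
    def __init__(self, in_f, out_f, candidate_ranks=(8, 16, 32)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(in_f, r, bias=False), nn.Linear(r, out_f))
            for r in candidate_ranks
        )
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ranks)))  # arch params

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * b(x) for wi, b in zip(w, self.branches))

layer = SoftRankLinear(256, 256)
y = layer(torch.randn(4, 256))   # after training, keep the argmax(alpha) branch
```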
no code implementations • 5 Dec 2022 • Yu Gong, Miao Yin, Lingyi Huang, Chunhua Deng, Yang Sui, Bo Yuan
Meanwhile, compared with TIE, the state-of-the-art hardware design for tensor-decomposed models, our proposed FDHT-LSTM architecture achieves higher throughput, area efficiency, and energy efficiency on the LSTM-Youtube workload.
no code implementations • 4 Dec 2022 • Huy Phan, Miao Yin, Yang Sui, Bo Yuan, Saman Zonouz
Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored improving the adversarial robustness of sparse neural networks.
no code implementations • CVPR 2022 • Miao Yin, Yang Sui, Wanzhao Yang, Xiao Zang, Yu Gong, Bo Yuan
High-order decomposition is a widely used model compression approach for obtaining compact convolutional neural networks (CNNs).
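For a generic illustration of high-order decomposition (not the paper's specific scheme), a 4-D convolution kernel can be compressed with a Tucker decomposition, e.g., via the tensorly library:

```python
# Minimal sketch of compressing a 4-D conv kernel with Tucker decomposition
# using tensorly; a generic illustration of high-order decomposition, not
# the paper's specific method.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

K = np.random.randn(64, 32, 3, 3)            # (out, in, kH, kW) conv kernel
core, factors = tucker(tl.tensor(K), rank=[16, 8, 3, 3])
approx = tl.tucker_to_tensor((core, factors))
orig = K.size
comp = core.size + sum(f.size for f in factors)
print(f"params {orig} -> {comp}, rel. error "
      f"{np.linalg.norm(K - approx) / np.linalg.norm(K):.3f}")
```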
1 code implementation • NeurIPS 2021 • Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Zonouz, Bo Yuan
Filter pruning has been widely used for neural network compression because of the practical acceleration it enables.
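A minimal sketch of generic filter pruning, ranking filters by L1 norm and keeping the top-k, is below; the paper above uses its own, different importance criterion.

```python
# Generic filter pruning sketch: rank conv filters by L1 norm and keep the
# top-k. Illustrates filter pruning in general; the paper above uses a
# different importance criterion.
import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, keep_ratio=0.5) -> nn.Conv2d:
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    keep = scores.topk(n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned
```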
no code implementations • 29 Sep 2021 • Wanzhao Yang, Miao Yin, Yang Sui, Bo Yuan
Based on the findings of our analysis, we then propose SPARK, a unified DNN compression framework that can simultaneously capture model SPArsity and low-RanKness in an efficient way.
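As a loose, hypothetical illustration of combining sparsity with low-rankness (not SPARK's actual formulation), one generic pattern splits a weight matrix into a low-rank part plus a sparse residual:

```python
# Hypothetical sketch of jointly exploiting sparsity and low-rankness by
# splitting a weight matrix into a rank-r part plus a sparse residual
# (W ~= A @ B + S). Generic illustration only; not SPARK's formulation.
import torch

def low_rank_plus_sparse(W, r=32, sparsity=0.95):
    U, Sv, Vh = torch.linalg.svd(W, full_matrices=False)
    A, B = U[:, :r] * Sv[:r], Vh[:r, :]
    resid = W - A @ B
    thresh = resid.abs().flatten().quantile(sparsity)  # keep largest 5% of entries
    S = torch.where(resid.abs() > thresh, resid, torch.zeros_like(resid))
    return A, B, S  # dense thin factors + sparse residual
```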
no code implementations • CVPR 2021 • Miao Yin, Yang Sui, Siyu Liao, Bo Yuan
Notably, on CIFAR-100, with 2.3X and 2.4X compression ratios, our models have 1.96% and 2.21% higher top-1 accuracy than the original ResNet-20 and ResNet-32, respectively.