no code implementations • 8 Mar 2024 • Jian Zhu, YuPing Ruan, Jingfei Chang, Cheng Luo
To address the problem, we propose a novel Deep Prompt Multi-task Network (DPMN) for abusive language detection.
no code implementations • 22 Aug 2023 • Jian Zhu, Mingkai Sheng, Mingda Ke, Zhangmin Huang, Jingfei Chang
In this way, it can greatly improve the retrieval performance of multi-modal hashing methods.
1 code implementation • 6 Oct 2022 • Ping Xue, Yang Lu, Jingfei Chang, Xing Wei, Zhen Wei
In contrast, considering the limited learning ability of BNNs and the information loss caused by their limited representational capability, we propose IR$^2$Net to stimulate the potential of BNNs and improve network accuracy by restricting the input information and recovering the feature information: 1) information restriction: for a BNN, we evaluate its learning ability on the input information, discard the information it cannot focus on, and limit the amount of input information to match that ability; 2) information recovery: because information is lost in forward propagation, the output feature information of the network alone is insufficient for accurate classification, so the lost feature information needs to be recovered.
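A minimal sketch of the information-restriction idea, under loose assumptions: here a per-feature `attention` score stands in for whatever measure of learning ability IR$^2$Net actually uses, and restriction simply zeros the features the network attends to least, so the retained input matches a given budget. The function name and interface are hypothetical.

```python
def restrict_information(features, attention, keep_ratio):
    """Hypothetical information-restriction step: keep only the input
    features with the highest attention scores (a stand-in for the
    network's learning ability), zeroing the rest so the amount of
    input information matches a fixed budget."""
    k = max(1, int(len(features) * keep_ratio))
    # Indices of the k features the network focuses on most.
    keep = sorted(range(len(attention)),
                  key=lambda i: attention[i], reverse=True)[:k]
    return [f if i in keep else 0.0 for i, f in enumerate(features)]
```

With `keep_ratio=0.5`, half of the features survive and the rest are discarded before binarization, which is the "limit input information to match learning ability" step in miniature.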
no code implementations • 31 Aug 2021 • Jingfei Chang, Yang Lu, Ping Xue, Yiqun Xu, Zhen Wei
We propose a novel adversarial iterative pruning method (AIP) for CNNs based on knowledge transfer.
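The abstract only names knowledge transfer, so as a hedged illustration (not necessarily AIP's exact formulation) here is the standard distillation term such a scheme could use: the KL divergence between the unpruned teacher's and the pruned student's softened output distributions, with temperature `T`.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Knowledge-transfer term (a generic distillation sketch): KL
    divergence KL(teacher || student) between softened distributions.
    Zero when the pruned student matches the teacher exactly."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

During iterative pruning, this term would be minimized alongside the task loss so the pruned network keeps tracking the original network's outputs.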
1 code implementation • 3 Mar 2021 • Ping Xue, Yang Lu, Jingfei Chang, Xing Wei, Zhen Wei
In this work, we study binary neural networks (BNNs), in which both the weights and activations are binary (i.e., 1-bit representation).
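The 1-bit representation is usually realized with sign binarization in the forward pass and a straight-through estimator (STE) in the backward pass. A minimal sketch of that standard recipe (not this paper's specific contribution):

```python
def binarize(x):
    """1-bit quantization: map each value to -1.0 or +1.0
    (sign function, with the common convention that 0 maps to +1)."""
    return [1.0 if v >= 0 else -1.0 for v in x]

def ste_grad(x, upstream):
    """Straight-through estimator: since sign() has zero gradient
    almost everywhere, pass the upstream gradient through unchanged
    where |x| <= 1 and clip it to zero elsewhere (hard-tanh backward)."""
    return [g if abs(v) <= 1.0 else 0.0 for v, g in zip(x, upstream)]
```

Both weights and activations go through `binarize` at inference time, so multiply-accumulates reduce to XNOR and popcount operations, which is the efficiency appeal of BNNs.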
1 code implementation • 16 Jan 2021 • Jingfei Chang, Yang Lu, Ping Xue, Yiqun Xu, Zhen Wei
While pruning based on structure sensitivity incurs only a slight accuracy loss, the process is time-consuming and the algorithm complexity is notable.
no code implementations • 13 Oct 2020 • Jingfei Chang
Existing convolutional neural network pruning algorithms can be divided into two categories: coarse-grained (structured) pruning and fine-grained (unstructured) pruning.
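The two categories can be contrasted in a few lines: fine-grained pruning zeros individual small-magnitude weights anywhere in a layer, while coarse-grained pruning removes whole structures (here, entire rows standing in for filters), ranked by L1 norm. A generic sketch of both, not any one paper's method:

```python
def fine_grained_prune(weights, ratio):
    """Fine-grained (unstructured) pruning: zero the `ratio` fraction of
    individual weights with the smallest magnitudes; the layer's shape
    is unchanged, the tensor just becomes sparse."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * ratio)
    thresh = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= thresh else w for w in row] for row in weights]

def coarse_grained_prune(weights, ratio):
    """Coarse-grained (structured) pruning: drop whole filters (rows),
    ranked by L1 norm; the layer actually shrinks, so no sparse-kernel
    support is needed at inference time."""
    order = sorted(range(len(weights)),
                   key=lambda i: sum(abs(w) for w in weights[i]))
    drop = set(order[: int(len(weights) * ratio)])
    return [row for i, row in enumerate(weights) if i not in drop]
```

The trade-off mirrors the text: fine-grained pruning preserves more accuracy at a given sparsity but yields irregular sparsity, whereas coarse-grained pruning gives dense, directly smaller layers.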
no code implementations • 3 Oct 2020 • Jingfei Chang, Yang Lu, Ping Xue, Xing Wei, Zhen Wei
For ResNet with bottlenecks, we apply the pruning method used for traditional CNNs to trim the 3x3 convolutional layer in the middle of each block.
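The middle 3x3 conv is the safe place to prune in a bottleneck because it only determines the block's internal width: the surrounding 1x1 convs can absorb the change, and the shortcut dimensions stay untouched. A hedged sketch of the filter-selection step, assuming L1-norm ranking (a common criterion; the paper's exact criterion may differ):

```python
def select_filters(filter_l1_norms, keep_ratio):
    """Rank the middle 3x3 conv's filters by L1 norm and return the
    (sorted) indices of the top `keep_ratio` fraction to keep. Only the
    bottleneck's internal width changes, so the block's input/output
    dimensions and the shortcut connection are unaffected."""
    k = max(1, int(len(filter_l1_norms) * keep_ratio))
    ranked = sorted(range(len(filter_l1_norms)),
                    key=lambda i: filter_l1_norms[i], reverse=True)
    return sorted(ranked[:k])
```

The kept indices would then slice the 3x3 layer's output channels and, correspondingly, the following 1x1 layer's input channels.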