1 code implementation • 19 Nov 2023 • Rafi Ibn Sultan, Chengyin Li, Hui Zhu, Prashant Khanduri, Marco Brocanelli, Dongxiao Zhu
The Segment Anything Model (SAM) has shown impressive performance when applied to natural image segmentation.
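A minimal sketch of point-prompted inference with the public `segment-anything` package; the checkpoint path, image file, and click coordinates below are placeholders, not values from the paper:

```python
# Point-prompted inference with Meta's segment-anything package.
# Checkpoint path, image file, and prompt coordinates are placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click at (x, y); label 1 marks foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[scores.argmax()]  # boolean HxW mask
```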
1 code implementation • 27 Oct 2022 • Hui Zhu, Shi Shu, Jianping Zhang
Based on variational theory and the FAS algorithm, we first design a feature-extraction sub-network (FAS-Solution module) to solve the model-driven nonlinear systems, where a skip connection is employed to fuse multi-scale features.
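A hypothetical sketch of the skip-connection fusion idea described above: two convolutional scales whose outputs are fused and added back to the input. The layer sizes and layout are illustrative assumptions, not the paper's FAS-Solution design:

```python
# Illustrative multi-scale block with a skip connection fusing two scales.
# Channel counts and the fusion layout are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.fine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.down = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fine = F.relu(self.fine(x))                        # full-resolution features
        coarse = F.relu(self.down(x))                      # half-resolution features
        coarse = F.interpolate(coarse, size=x.shape[-2:])  # upsample back
        fused = self.fuse(torch.cat([fine, coarse], dim=1))
        return x + fused                                   # skip connection

feats = MultiScaleFusionBlock()(torch.randn(1, 32, 64, 64))
```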
no code implementations • 7 Aug 2022 • Feixiang Zhou, Xinyu Yang, Fang Chen, Long Chen, Zheheng Jiang, Hui Zhu, Reiko Heckel, Haikuan Wang, Minrui Fei, Huiyu Zhou
Furthermore, we design a novel Interaction-Aware Transformer (IAT) to dynamically learn the graph-level representation of social behaviours and update the node-level representation, guided by our proposed interaction-aware self-attention mechanism.
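A hypothetical sketch of the pattern the sentence describes: self-attention over per-subject node features, followed by a graph-level readout. The dimensions and mean-pooling readout are assumptions, not the IAT's actual design:

```python
# Attention over node features plus graph-level pooling, in the spirit of the
# described interaction-aware Transformer. Readout and sizes are assumptions.
import torch
import torch.nn as nn

class NodeAttentionPool(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, nodes: torch.Tensor):
        # nodes: (batch, num_subjects, dim) per-subject behaviour embeddings
        updated, weights = self.attn(nodes, nodes, nodes)  # pairwise interactions
        nodes = self.norm(nodes + updated)                 # updated node-level repr.
        graph = nodes.mean(dim=1)                          # graph-level readout
        return nodes, graph, weights

nodes, graph, w = NodeAttentionPool()(torch.randn(2, 5, 64))
```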
no code implementations • 11 Dec 2020 • Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang
In this paper, an automatic lane-level road map generation system is proposed.
no code implementations • 21 Oct 2020 • Hui Zhu, Xiaofang Zhao
Dropout regularization has been widely used in deep learning but is less effective for convolutional neural networks, since spatially correlated features allow dropped information to still flow through the network.
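To illustrate the motivation, here is a minimal DropBlock-style structured dropout that zeroes contiguous spatial blocks so correlated neighbours cannot leak the dropped information; it illustrates the general idea and is not the paper's specific method (assumes an odd `block` size smaller than the feature map):

```python
# DropBlock-style structured dropout (general idea, not the paper's method).
# Assumes `block` is odd and smaller than the feature map's spatial size.
import torch
import torch.nn.functional as F

def drop_block(x: torch.Tensor, drop_prob: float = 0.1, block: int = 5) -> torch.Tensor:
    if drop_prob == 0.0:
        return x
    _, _, h, w = x.shape
    # Seed probability so the expected dropped fraction is roughly drop_prob.
    gamma = drop_prob / block ** 2 * (h * w) / ((h - block + 1) * (w - block + 1))
    seeds = (torch.rand_like(x) < gamma).float()
    # Expand each seed into a block x block dropped region.
    dropped = F.max_pool2d(seeds, block, stride=1, padding=block // 2)
    mask = (dropped == 0).float()
    # Rescale the kept activations, as standard dropout does.
    return x * mask * mask.numel() / mask.sum().clamp(min=1.0)
```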
no code implementations • 6 Feb 2020 • Xiaoguang Li, Hui Li, Haonan Yan, Zelei Cheng, Wenhai Sun, Hui Zhu
Public intelligent services enabled by machine learning algorithms are vulnerable to model extraction attacks that can steal confidential information of the learning models through public queries.
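A toy sketch of the generic query-and-distill extraction loop the sentence describes; the victim stand-in, surrogate architecture, and query budget are all hypothetical, not components from the paper:

```python
# Generic model-extraction loop: query a black-box victim on attacker-chosen
# inputs and fit a surrogate to its outputs. Everything here is a toy stand-in.
import torch
import torch.nn as nn

W = torch.randn(20, 10)  # the victim's private parameters (toy stand-in)

def victim_predict(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for a remote prediction API returning class probabilities."""
    with torch.no_grad():
        return torch.softmax(x @ W, dim=1)

surrogate = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for _ in range(200):                      # extraction budget: 200 query batches
    queries = torch.randn(32, 20)         # attacker-chosen public queries
    targets = victim_predict(queries)     # confidential knowledge leaks here
    loss = nn.functional.cross_entropy(surrogate(queries), targets)  # soft labels
    opt.zero_grad(); loss.backward(); opt.step()
```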
no code implementations • 31 Jan 2020 • Chuanguang Yang, Zhulin An, Xiaolong Hu, Hui Zhu, Yongjun Xu
Deep convolutional neural networks (CNNs) always depend on a wider receptive field (RF) and more complex non-linearity to achieve state-of-the-art performance, while suffering increased difficulty in interpreting how relevant patches contribute to the final prediction.
no code implementations • 19 Jan 2020 • Hui Zhu, Zhulin An, Kaiqiang Xu, Xiaolong Hu, Yongjun Xu
Existing approaches that improve the performance of convolutional neural networks by optimizing local architectures or deepening the networks tend to increase model size significantly.
no code implementations • 20 Nov 2019 • Xiaolong Hu, Zhulin An, Chuanguang Yang, Hui Zhu, Kaiqiang Xu, Yongjun Xu
For VGG16 pre-trained on ImageNet, our method gains an average accuracy improvement of 14.29% on two-class sub-tasks.
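As context for the two-class sub-task setting, a generic setup sketch that takes ImageNet-pretrained VGG16 and swaps its 1000-way head for a 2-way classifier; this is a baseline illustration, not the paper's proposed method:

```python
# Baseline setup for a two-class sub-task from ImageNet-pretrained VGG16.
# This is generic fine-tuning context, not the paper's method.
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False              # keep the pretrained feature extractor

vgg.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-way head with 2-way
```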
no code implementations • 4 Sep 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Xiaolong Hu, Kaiqiang Xu, Yongjun Xu
In this paper, we propose an efficient automatic architecture search method that is specialized for network widths rather than for the connections of the neural architecture.
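A toy sketch of what searching over widths (as opposed to connections) can look like: candidate channel counts are scored with a short training proxy. The candidate grid and naive proxy below are stand-ins, not the paper's search strategy:

```python
# Toy width search: score candidate channel widths with a short training proxy.
# The grid and proxy are naive stand-ins for the paper's strategy.
import torch
import torch.nn as nn

def make_net(width: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 10),
    )

def proxy_score(net: nn.Module) -> float:
    x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for _ in range(20):                          # very short training proxy
        loss = nn.functional.cross_entropy(net(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return -loss.item()                          # higher is better

best_width = max([16, 32, 64, 128], key=lambda w: proxy_score(make_net(w)))
```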
1 code implementation • 26 Aug 2019 • Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu
We propose a simple yet effective method to reduce the redundancy of DenseNet: we substantially decrease the number of stacked modules by replacing the original bottleneck with our SMG module, which is augmented with a local residual.
Ranked #60 on Image Classification on CIFAR-10
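A hypothetical sketch of a bottleneck replacement with a local residual, in the spirit of the SMG module described above; the internal layout is illustrative and not the paper's actual design:

```python
# Bottleneck-style block with a local residual feeding dense connectivity.
# The internal layout is an illustrative assumption, not the SMG module itself.
import torch
import torch.nn as nn

class LocalResidualBlock(nn.Module):
    def __init__(self, in_ch: int, growth: int):
        super().__init__()
        self.squeeze = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth, kernel_size=1),
        )
        self.spatial = nn.Sequential(
            nn.BatchNorm2d(growth), nn.ReLU(inplace=True),
            nn.Conv2d(growth, growth, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.squeeze(x)
        out = s + self.spatial(s)          # local residual inside the block
        return torch.cat([x, out], dim=1)  # dense connectivity to later layers
```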
1 code implementation • 10 May 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Kaiqiang Xu, Erhu Zhao, Yongjun Xu
The latest algorithms for automatic neural architecture search perform remarkably well but are largely directionless in their search space and computationally expensive, since every intermediate architecture must be trained.
no code implementations • 22 Oct 2018 • Hui Zhu
To obtain these results, we generalize the paradifferential calculus of Bony to weighted Sobolev spaces and develop a semiclassical paradifferential calculus.
Analysis of PDEs