no code implementations • 27 Apr 2024 • Zheng Cheng, Guodong Fan, Jingchun Zhou, Min Gan, C. L. Philip Chen
The FDCE-Net consists of two main structures: (1) the Frequency-Spatial Network (FS-Net), which achieves an initial enhancement by using our designed Frequency-Spatial Residual Block (FSRB) to decouple image degradation factors in the frequency domain and enhance the different attributes separately.
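The paper's code is not released, but the general idea behind frequency-domain decoupling can be sketched: a 2-D FFT separates an image into amplitude and phase components, which can then be processed by separate branches before recombination. The function names below are our own, not the paper's.

```python
import numpy as np

def split_frequency_attributes(img):
    # The FFT factors an image into amplitude (often tied to illumination
    # and degradation) and phase (often tied to structure), so each
    # attribute can be enhanced by a separate branch.
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def recombine(amp, phase):
    # Rebuild the (enhanced) spectrum and return to the spatial domain.
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
```

With no enhancement applied, the round trip reconstructs the input exactly, which is what makes the decomposition a safe place to insert learned processing.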
no code implementations • 26 Apr 2024 • Zishu Yao, Guodong Fan, Jinfu Fan, Min Gan, C. L. Philip Chen
Therefore, we propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement.
no code implementations • 25 May 2023 • Jian-Nan Su, Min Gan, Guang-Yong Chen, Wenzhong Guo, C. L. Philip Chen
Based on these findings, we introduce a concise yet effective soft-thresholding operation to obtain high-similarity-pass attention (HSPA), which yields a more compact and interpretable attention distribution.
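A minimal sketch of the idea, under our own assumptions (dot-product similarity, a one-sided soft threshold): scores below a threshold are zeroed out, so, unlike softmax, only high-similarity entries receive non-zero weight. The paper's exact operator may differ.

```python
import numpy as np

def hspa(query, keys, values, tau=0.5):
    # High-similarity-pass attention (illustrative sketch): a soft
    # threshold shrinks scores and zeroes out low-similarity ones,
    # producing a sparse, compact weight distribution.
    scores = keys @ query                    # dot-product similarity
    passed = np.maximum(scores - tau, 0.0)   # one-sided soft threshold
    total = passed.sum()
    if total == 0.0:
        return np.zeros(values.shape[1])     # nothing passes the threshold
    weights = passed / total                 # sparse, normalized weights
    return weights @ values
```

Softmax, by contrast, assigns every key a strictly positive weight, which is why the resulting distribution is less compact and harder to interpret.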
no code implementations • 12 May 2023 • Min Gan, Xiang-xiang Su, Guang-Yong Chen, Jing Chen
In one routine of the proposed algorithm, the linear parameters are updated by the recursive least squares (RLS) algorithm, which is equivalent to a stochastic Newton method; then, based on the updated linear parameters, the nonlinear parameters are updated by stochastic gradient descent (SGD).
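The alternating scheme can be sketched on a toy separable model, y = a·exp(b·x), where a enters linearly and b nonlinearly. The model, parameter names, and hyperparameters below are our own illustration, not the paper's setup: per sample, RLS refreshes the linear parameter, then one clipped gradient step updates the nonlinear one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 200)
y = 2.0 * np.exp(-0.5 * x) + 0.01 * rng.standard_normal(200)

a, b = 1.0, 0.0       # initial linear / nonlinear parameters
P, lam = 100.0, 0.99  # RLS inverse covariance (scalar) and forgetting factor
lr = 0.05             # SGD step size for the nonlinear parameter

for _ in range(20):                  # a few passes over the data
    for xi, yi in zip(x, y):
        phi = np.exp(b * xi)         # regressor at the current nonlinear param
        # RLS update of the linear parameter (stochastic Newton step)
        k = P * phi / (lam + phi * P * phi)
        a += k * (yi - a * phi)
        P = (P - k * phi * P) / lam
        # SGD update of the nonlinear parameter, using the refreshed 'a';
        # clipping is our own safeguard against early large gradients
        resid = yi - a * np.exp(b * xi)
        grad = -2.0 * resid * a * xi * np.exp(b * xi)
        b -= lr * np.clip(grad, -1.0, 1.0)
```

Because the linear subproblem is solved (recursively) in closed form at every step, the gradient search only has to handle the nonlinear parameter, which is the usual motivation for such variable-projection-style hybrids.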
no code implementations • 3 Apr 2023 • Guang-Yong Chen, Yong-Hang Yu, Min Gan, C. L. Philip Chen, Wenzhong Guo
Random functional-linked types of neural networks (RFLNNs), e.g., the extreme learning machine (ELM) and the broad learning system (BLS), avoid a time-consuming training process and offer an alternative way of learning to deep structures.
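Why RFLNNs avoid iterative training can be seen in a minimal ELM sketch: hidden weights are drawn randomly and frozen, so only the output weights are learned, via a single least-squares solve. This is the textbook ELM recipe, not the paper's specific construction.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    # Hidden layer is random and fixed; only the output weights 'beta'
    # are fitted, by one closed-form least-squares solve (no epochs).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                          # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The entire "training" cost is one `lstsq` call, which is the speed advantage these architectures trade against the representational depth of end-to-end trained networks.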
no code implementations • 14 Jan 2023 • Jinyang Wang, Tao Wang, Min Gan, George Hadjichristofi
Deep convolutional neural networks have been widely used in scene classification of remotely sensed images.
1 code implementation • 2 Dec 2022 • Jian-Nan Su, Min Gan, Guang-Yong Chen, Jia-Li Yin, C. L. Philip Chen
Utilizing this finding, we propose a Global Learnable Attention (GLA) that adaptively modifies the similarity scores of non-local textures during training, instead of relying only on a fixed similarity scoring function such as the dot product.
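The core idea, sketched under our own assumptions (GLA's actual parameterization is defined in the paper): keep the fixed dot-product score, but add a trainable correction computed from each key-query pair, so the effective similarity function is learned rather than fixed.

```python
import numpy as np

def learnable_similarity(q, K, W, c):
    # Fixed dot-product similarity plus a trainable per-pair adjustment.
    # W and c stand in for parameters learned end-to-end; with W = 0 and
    # c = 0 this reduces exactly to the fixed dot product.
    base = K @ q                                        # fixed scores
    pair = np.concatenate([K, np.tile(q, (K.shape[0], 1))], axis=1)
    return base + pair @ W + c                          # learned offset
```

Because the offset can be negative, training can down-weight misleading high-dot-product matches and up-weight useful low-similarity non-local textures, which a fixed scoring function cannot do.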
no code implementations • 23 Jul 2021 • Bowen Hu, Baiying Lei, Shuqiang Wang, Yong Liu, BingChuan Wang, Min Gan, Yanyan Shen
A branching predictor and several hierarchical attention pipelines are constructed to generate point clouds that accurately describe the incomplete images and then complete these point clouds with high quality.