1 code implementation • 3 Mar 2024 • Chenying Liu, Conrad M Albrecht, Yi Wang, Qingyu Li, Xiao Xiang Zhu
AIO2 utilizes a mean teacher model to enhance training robustness against noisy labels: it both stabilizes the training accuracy curve used for fitting in ACT and provides pseudo labels for correction in O2C.
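A mean teacher maintains a second copy of the network whose weights are an exponential moving average (EMA) of the student's, which smooths out the effect of noisy labels. A minimal sketch of the standard EMA update (the smoothing rate `alpha` and the toy weights are illustrative, not the paper's values):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.9):
    """EMA update: teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

# Toy parameter tensors standing in for network weights.
student = [np.array([1.0, 2.0]), np.array([3.0])]
teacher = [np.zeros(2), np.zeros(1)]

# One training step: the teacher drifts slowly toward the student.
teacher = ema_update(teacher, student, alpha=0.9)
```

Because the teacher averages over many student states, its predictions change slowly, which is what makes its accuracy curve stable enough to monitor (as in ACT) and its outputs usable as pseudo labels (as in O2C).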
no code implementations • 25 Feb 2024 • Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu
In this work, we propose to explore the under-exploited potential of noisy labels for segmentation-task-specific pretraining, and examine its robustness when confronted with mismatched categories and different decoders during fine-tuning.
2 code implementations • 11 Sep 2023 • Yi Wang, Conrad M Albrecht, Nassim Ait Ali Braham, Chenying Liu, Zhitong Xiong, Xiao Xiang Zhu
We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning.
1 code implementation • 4 Aug 2023 • Yi Wang, Chenying Liu, Arti Tiwari, Micha Silver, Arnon Karnieli, Xiao Xiang Zhu, Conrad M Albrecht
Discovering ancient agricultural terraces in desert regions is important for monitoring long-term climate change on the Earth's surface.
no code implementations • 9 Jun 2023 • Wenlu Sun, Yao Sun, Chenying Liu, Conrad M Albrecht
Urban land use structures impact local climate conditions of metropolitan areas.
3 code implementations • 13 Nov 2022 • Yi Wang, Nassim Ait Ali Braham, Zhitong Xiong, Chenying Liu, Conrad M Albrecht, Xiao Xiang Zhu
Self-supervised pre-training has the potential to generate expressive representations without human annotation.
Ranked #1 on Multi-Label Image Classification on BigEarthNet (official test set) (using extra training data)
no code implementations • 14 Jun 2022 • Conrad M Albrecht, Chenying Liu, Yi Wang, Levente Klein, Xiao Xiang Zhu
We present and evaluate a weakly-supervised methodology to quantify the spatio-temporal distribution of urban forests based on remotely sensed data with close-to-zero human interaction.
no code implementations • 9 Dec 2019 • Chenying Liu, Jun Li, Lin He, Antonio J. Plaza, Shutao Li, Bo Li
Specifically, we develop an innovative phase-induced Gabor kernel, which is carefully designed to perform Gabor feature learning via a linear combination of local low-frequency and high-frequency components of the data, controlled by the kernel phase.
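The "linear combination controlled by the kernel phase" follows from the angle-addition identity cos(ωx + φ) = cos φ · cos(ωx) − sin φ · sin(ωx): the phase φ linearly mixes the even (cosine) and odd (sine) Gabor parts, which play the roles of the low- and high-frequency components above. A minimal 1-D sketch of this decomposition (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def gabor_1d(x, sigma, omega, phi):
    """1-D Gabor kernel: Gaussian envelope times a phase-shifted cosine."""
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(omega * x + phi)

x = np.linspace(-3.0, 3.0, 61)
even = np.exp(-x**2 / 2.0) * np.cos(2.0 * x)  # even (cosine) Gabor part
odd  = np.exp(-x**2 / 2.0) * np.sin(2.0 * x)  # odd (sine) Gabor part

phi = 0.7
k = gabor_1d(x, sigma=1.0, omega=2.0, phi=phi)

# The phase linearly mixes the two parts:
# cos(ωx + φ) = cosφ·cos(ωx) − sinφ·sin(ωx)
assert np.allclose(k, np.cos(phi) * even - np.sin(phi) * odd)
```

Making φ a learnable parameter thus lets training adjust the mixing weights (cos φ, −sin φ) of the two frequency components with a single scalar per kernel.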