no code implementations • 25 Apr 2024 • Xu Zheng, Pengyuan Zhou, Athanasios V. Vasilakos, Lin Wang
However, as the distinct projections make it difficult to transfer knowledge directly between domains, we then propose the Reliable Panoramic Prototype Adaptation Module (RP2AM) to transfer knowledge at both the prediction and prototype levels.
no code implementations • 25 Mar 2024 • Weiming Zhang, Yexin Liu, Xu Zheng, Lin Wang
To this end, we propose a novel framework, called GoodSAM, that introduces a teacher assistant (TA) to provide semantic information, integrated with SAM to generate ensemble logits to achieve knowledge transfer.
no code implementations • 21 Mar 2024 • Xu Zheng, Lin Wang
To this end, we propose a novel framework, dubbed EventDance for this unsupervised source-free cross-modal adaptation problem.
no code implementations • 19 Mar 2024 • Jiazhou Zhou, Xu Zheng, Yuanhuiyi Lyu, Lin Wang
Then, we propose a conceptual reasoning-based uncertainty estimation module, which simulates the recognition process to enrich the semantic representation.
no code implementations • 19 Mar 2024 • Yuanhuiyi Lyu, Xu Zheng, Jiazhou Zhou, Lin Wang
To make this possible, we 1) construct a knowledge base of text embeddings with the help of LLMs and multi-modal LLMs; 2) adaptively build LLM-augmented class-wise embedding center on top of the knowledge base and encoded visual embeddings; 3) align all the embeddings to the LLM-augmented embedding center via contrastive learning to achieve a unified and balanced representation space.
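The third step — aligning embeddings to a class-wise center via contrastive learning — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the center matrix stands in for the LLM-augmented class-wise embedding centers, and the temperature value is an assumed default.

```python
import numpy as np

def normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def center_contrastive_loss(visual, centers, labels, tau=0.07):
    """InfoNCE-style loss pulling each visual embedding toward its
    class's (hypothetical) LLM-augmented embedding center, pushing it
    away from the centers of all other classes."""
    v = normalize(visual)                 # (N, D) encoded visual embeddings
    c = normalize(centers)                # (C, D) class-wise embedding centers
    logits = v @ c.T / tau                # cosine similarity to every center
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

Embeddings near their own class center yield a low loss; mismatched assignments yield a high one, which is what drives the representation space toward balance.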
no code implementations • 19 Mar 2024 • Xu Zheng, Pengyuan Zhou, Athanasios V. Vasilakos, Lin Wang
However, the distinct projection discrepancies between source and target domains impede the direct knowledge transfer; thus, we propose a panoramic prototype adaptation module (PPAM) to integrate panoramic prototypes from the extracted knowledge for adaptation.
1 code implementation • 16 Feb 2024 • Xu Zheng, Tianchun Wang, Wei Cheng, Aitian Ma, Haifeng Chen, Mo Sha, Dongsheng Luo
In this study, we address this gap by analyzing time series data augmentation using information theory and summarizing the most commonly adopted augmentations in a unified format.
no code implementations • 7 Feb 2024 • Xu Zheng, Farhad Shirani, Tianchun Wang, Shouwei Gao, Wenqian Dong, Wei Cheng, Dongsheng Luo
It is shown that the sample complexity of explanation-assisted learning can be arbitrarily smaller than explanation-agnostic learning.
no code implementations • 31 Jan 2024 • Yuanhuiyi Lyu, Xu Zheng, Lin Wang
It extracts entity features from the multi-modal representations powered by our specially constructed entity knowledge graph; 2) Attribute Fusion Branch adeptly preserves and processes the attributes.
no code implementations • 11 Oct 2023 • Xu Zheng, Yunhao Luo, Pengyuan Zhou, Lin Wang
Because ViT and CNN have fundamentally different characteristics, and a long-standing capacity gap exists between teacher and student models in Knowledge Distillation (KD), directly transferring cross-model knowledge is non-trivial.
1 code implementation • 3 Oct 2023 • Xu Zheng, Farhad Shirani, Tianchun Wang, Wei Cheng, Zhuomin Chen, Haifeng Chen, Hua Wei, Dongsheng Luo
An explanation function for GNNs takes a pre-trained GNN along with a graph as input, to produce a "sufficient statistic" subgraph with respect to the graph label.
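The notion of a "sufficient statistic" subgraph can be illustrated with a toy sketch: a stand-in one-layer GNN and a brute-force search for the smallest edge subset whose induced subgraph preserves the model's prediction. Both the toy model and the exhaustive search are illustrative assumptions, not the paper's method.

```python
import numpy as np
from itertools import combinations

def toy_gnn(adj, feats):
    """Stand-in for a pre-trained GNN: one propagation step plus a
    residual connection, followed by a mean readout to a binary label."""
    h = adj @ feats + feats
    return int(h.mean() > 0)

def explain(adj, feats, gnn):
    """Return the smallest edge subset whose subgraph keeps the GNN's
    prediction -- a brute-force 'sufficient statistic' subgraph."""
    label = gnn(adj, feats)
    n = len(adj)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j]]
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            sub = np.zeros_like(adj)
            for i, j in subset:
                sub[i, j] = sub[j, i] = 1
            if gnn(sub, feats) == label:
                return subset
    return tuple(edges)
```

Real explainers replace the exhaustive search with learned edge masks, but the sufficiency criterion — the subgraph alone reproduces the label — is the same.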
no code implementations • 3 Oct 2023 • Jialei Chen, Daisuke Deguchi, Chenkai Zhang, Xu Zheng, Hiroshi Murase
Moreover, to enhance the ability to discriminate unseen categories, we design PLM, which consists of pseudo-label and weight generation.
no code implementations • 17 Sep 2023 • Jiahang Cao, Xu Zheng, Yuanhuiyi Lyu, Jiaxu Wang, Renjing Xu, Lin Wang
The ability to detect objects in all lighting (i.e., normal-, over-, and under-exposed) conditions is crucial for real-world applications, such as self-driving. Traditional RGB-based detectors often fail under such varying lighting conditions. Therefore, recent works utilize novel event cameras to supplement or guide the RGB modality; however, these methods typically adopt asymmetric network structures that rely predominantly on the RGB modality, resulting in limited robustness for all-day detection.
no code implementations • ICCV 2023 • Xu Zheng, Tianbo Pan, Yunhao Luo, Lin Wang
The aim is to tackle the domain gaps caused by the style disparities and distortion problem from the non-uniformly distributed pixels of equirectangular projection (ERP).
no code implementations • 6 Aug 2023 • Jiazhou Zhou, Xu Zheng, Yuanhuiyi Lyu, Lin Wang
Accordingly, we first introduce a novel event encoder that subtly models the temporal information from events while also generating event prompts for modality bridging.
no code implementations • ICCV 2023 • Jinjing Zhu, Yunhao Luo, Xu Zheng, Hao Wang, Lin Wang
In this paper, we strive to answer the question "how to collaboratively learn convolutional neural network (CNN)-based and vision transformer (ViT)-based models by selecting and exchanging the reliable knowledge between them for semantic segmentation?"
no code implementations • CVPR 2023 • Xu Zheng, Jinjing Zhu, Yexin Liu, Zidong Cao, Chong Fu, Lin Wang
Moreover, adversarial intra-projection training is proposed to reduce the inherent gap between the features of the pinhole images and those of the ERP and TP images, respectively.
1 code implementation • 17 Feb 2023 • Xu Zheng, Yexin Liu, Yunfan Lu, Tongyan Hua, Tianbo Pan, Weiming Zhang, DaCheng Tao, Lin Wang
Event cameras are bio-inspired sensors that capture the per-pixel intensity changes asynchronously and produce event streams encoding the time, pixel position, and polarity (sign) of the intensity changes.
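The (time, pixel position, polarity) encoding described above can be made concrete with a small sketch; the structured dtype and the signed-accumulation frame are illustrative conventions, not any particular camera SDK's format.

```python
import numpy as np

# Each event records when and where an intensity change occurred,
# and its sign: timestamp, x/y pixel position, polarity (+1 or -1).
event_dtype = np.dtype([("t", "f8"), ("x", "u2"), ("y", "u2"), ("p", "i1")])

def events_to_frame(events, height, width):
    """Accumulate an asynchronous event stream into a signed 2-D frame:
    +1 per positive-polarity event, -1 per negative one, per pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame
```

Such accumulated frames are one common way to feed the sparse, asynchronous stream into dense network architectures.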
no code implementations • 16 Oct 2022 • Emily Muller, Xu Zheng, Jer Hayes
Generative models have been found effective for data synthesis due to their ability to capture complex underlying data distributions.
no code implementations • 6 Sep 2022 • Xu Zheng, Yunhao Luo, Chong Fu, Kangcheng Liu, Lin Wang
To this end, we propose class-aware feature consistency distillation (CFCD) that first leverages the outputs of each student as the pseudo labels and generates class-aware feature (CF) maps for knowledge transfer between the two students.
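The two ingredients named above — pseudo labels from a student's own outputs and class-aware feature (CF) maps — can be sketched as follows. The per-class mean pooling used here is an assumed aggregation choice for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def class_aware_feature_maps(logits, feats):
    """From one student's outputs: take the argmax over classes as the
    pseudo-label map, then pool the student's features per predicted
    class into class-aware (CF) centroids for knowledge transfer."""
    pseudo = logits.argmax(axis=0)            # (H, W) pseudo-label map
    num_classes = logits.shape[0]
    cf = np.zeros((num_classes, feats.shape[0]))
    for c in range(num_classes):
        mask = pseudo == c
        if mask.any():
            cf[c] = feats[:, mask].mean(axis=1)  # per-class feature centroid
    return pseudo, cf
```

Distillation between the two students could then, for instance, penalize the distance between their CF centroids for each class.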
1 code implementation • 4 Jun 2022 • Yunfan Lu, Yiqi Lin, Hao Wu, Yunhao Luo, Xu Zheng, Hui Xiong, Lin Wang
Image restoration and enhancement is a process of improving image quality by removing degradations such as noise, blur, and low resolution.
no code implementations • 14 Jan 2022 • Emily Muller, Xu Zheng, Jer Hayes
Deep generative models are effective data synthesisers due to their ability to capture complex underlying distributions.
no code implementations • 23 Nov 2021 • Xu Zheng, Chong Fu, Haoyu Xie, Jialei Chen, Xingwei Wang, Chiu-Wing Sham
However, due to the scarcity of labeled data, the features the models learn through supervised training are limited, and the quality of their predictions on unlabeled data cannot be guaranteed.
no code implementations • 17 Nov 2021 • Ramon Vinas, Xu Zheng, Jer Hayes
Our work can facilitate the diagnosis of novel diseases based on the clinical history of past events, with the potential to increase our understanding of the landscape of comorbidities.
no code implementations • 17 Nov 2021 • Xu Zheng, Nicholas McCarthy, Jer Hayes
Differential privacy is a gold standard for data privacy, and the introduction of the differentially private stochastic gradient descent (DP-SGD) algorithm has facilitated the training of private neural models in a number of domains.
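The core DP-SGD recipe — clip each example's gradient to a fixed norm, average, then add Gaussian noise calibrated to that clipping bound — can be sketched in a few lines. This is a minimal single-step illustration; production use would track the privacy budget with an accountant.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_mult=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip every per-example gradient to `clip_norm`,
    average the clipped gradients, then add Gaussian noise whose scale is
    proportional to the clipping bound (the sensitivity of the average)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean.shape)
    return params - lr * (mean + noise)
```

Clipping bounds any single example's influence on the update, which is what makes the added noise sufficient for a differential-privacy guarantee.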
no code implementations • 19 Jan 2021 • Xu Zheng, Baowen Li
We propose that the optomechanical systems can be potential platforms to implement the Fröhlich condensate of phonons.
Quantum Physics • Mesoscale and Nanoscale Physics
no code implementations • 3 Sep 2019 • Xu Zheng, Tejo Chalasani, Koustav Ghosal, Sebastian Lutz, Aljosa Smolic
The success of training deep Convolutional Neural Networks (CNNs) heavily depends on a significant amount of labelled data.
no code implementations • 29 Dec 2017 • Su Yan, Wei Lin, Tianshu Wu, Daorui Xiao, Xu Zheng, Bo Wu, Kaipeng Liu
Given a search request, the ad retrieval module rewrites the query into bidding keywords and uses these keywords as keys to select the top-N ads through inverted indexes.
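The keyword-keyed inverted-index lookup described above can be sketched as follows. The class name, the bid-based ranking, and the max-over-keywords scoring are illustrative assumptions, not the production system's design.

```python
from collections import defaultdict

class AdIndex:
    """Minimal inverted index: bidding keyword -> postings of (ad_id, bid)."""

    def __init__(self):
        self.postings = defaultdict(list)

    def add(self, ad_id, keywords, bid):
        # Index the ad under each of its bidding keywords.
        for kw in keywords:
            self.postings[kw].append((ad_id, bid))

    def retrieve(self, rewritten_keywords, n):
        """Union the posting lists for the rewritten query's keywords and
        keep the top-N ads, here ranked by bid as a stand-in score."""
        candidates = {}
        for kw in rewritten_keywords:
            for ad_id, bid in self.postings.get(kw, []):
                candidates[ad_id] = max(candidates.get(ad_id, 0.0), bid)
        return sorted(candidates, key=candidates.get, reverse=True)[:n]
```

The query-rewriting step that produces `rewritten_keywords` is the learned part of such a system; the index lookup itself stays simple and fast.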