Search Results for author: Changwen Zheng

Found 34 papers, 15 papers with code

Meta-Auxiliary Learning for Micro-Expression Recognition

no code implementations18 Apr 2024 Jingyao Wang, Yunhan Tian, Yuxuan Yang, Xiaoxin Chen, Changwen Zheng, Wenwen Qiang

Micro-expressions (MEs) are involuntary movements that reveal people's hidden feelings; they have attracted considerable interest because of their objectivity in emotion detection.

Auxiliary Learning Micro Expression Recognition +1

Intriguing Properties of Positional Encoding in Time Series Forecasting

1 code implementation16 Apr 2024 Jianqi Zhang, Jingyao Wang, Wenwen Qiang, Fanjiang Xu, Changwen Zheng, Fuchun Sun, Hui Xiong

Motivated by these findings, we introduce two new PEs: Temporal Position Encoding (T-PE) for temporal tokens and Variable Positional Encoding (V-PE) for variable tokens.

Time Series Time Series Forecasting
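
The excerpt names the two encodings but not their construction; below is a minimal sketch, assuming learnable embeddings, of how separate temporal (T-PE) and variable (V-PE) positional encodings could be added to the token grid of a multivariate series. Shapes, initialization, and module names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualPositionalEncoding(nn.Module):
    """Sketch only: learnable encodings per time step (T-PE) and per variable (V-PE)."""

    def __init__(self, num_steps: int, num_vars: int, d_model: int):
        super().__init__()
        self.t_pe = nn.Parameter(torch.zeros(1, num_steps, 1, d_model))  # shared across variables
        self.v_pe = nn.Parameter(torch.zeros(1, 1, num_vars, d_model))   # shared across time steps
        nn.init.normal_(self.t_pe, std=0.02)
        nn.init.normal_(self.v_pe, std=0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_steps, num_vars, d_model)
        return tokens + self.t_pe + self.v_pe

# toy usage: 96 time steps, 7 variables, 64-dim tokens
pe = DualPositionalEncoding(num_steps=96, num_vars=7, d_model=64)
x = torch.randn(8, 96, 7, 64)
print(pe(x).shape)  # torch.Size([8, 96, 7, 64])
```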

Graph Partial Label Learning with Potential Cause Discovering

no code implementations18 Mar 2024 Hang Gao, Jiaguo Yuan, Jiangmeng Li, Chengyu Yao, Fengge Wu, Junsuo Zhao, Changwen Zheng

PLL is a critical weakly supervised learning problem, where each training instance is associated with a set of candidate labels, including both the true label and additional noisy labels.

Graph Representation Learning Partial Label Learning +1
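
For context on the PLL setting described above, a common generic baseline treats every candidate label as equally plausible and averages the classification loss over the candidate set. The snippet below sketches that baseline only; it is not the method proposed in this paper, and the function name and tensor layout are assumptions.

```python
import torch
import torch.nn.functional as F

def candidate_average_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """Generic partial-label loss: average NLL over each instance's candidate set.

    logits:         (batch, num_classes) raw model outputs
    candidate_mask: (batch, num_classes), 1.0 where a class is a candidate label
    """
    log_probs = F.log_softmax(logits, dim=-1)
    per_instance = -(candidate_mask * log_probs).sum(-1) / candidate_mask.sum(-1)
    return per_instance.mean()

# toy usage: 3 classes; instance 0 has candidates {0, 2}, instance 1 has {1}
logits = torch.randn(2, 3)
mask = torch.tensor([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
print(candidate_average_loss(logits, mask))
```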

BayesPrompt: Prompting Large-Scale Pre-Trained Language Models on Few-shot Inference via Debiased Domain Abstraction

1 code implementation25 Jan 2024 Jiangmeng Li, Fei Song, Yifan Jin, Wenwen Qiang, Changwen Zheng, Fuchun Sun, Hui Xiong

From the perspective of distribution analyses, we show that the intrinsic issues behind this phenomenon are the over-abundant conceptual knowledge contained in PLMs and the limited knowledge of target downstream domains, which jointly cause PLMs to mis-locate the knowledge distributions corresponding to the target domains in the universal knowledge embedding space.

Domain Adaptation

T2MAC: Targeted and Trusted Multi-Agent Communication through Selective Engagement and Evidence-Driven Integration

no code implementations19 Jan 2024 Chuxiong Sun, Zehua Zang, Jiabao Li, Jiangmeng Li, Xiao Xu, Rui Wang, Changwen Zheng

This process enables agents to collectively use evidence garnered from multiple perspectives, fostering trusted and cooperative behaviors.

SMAC+

Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning

1 code implementation21 Dec 2023 Jiangmeng Li, Yifan Jin, Hang Gao, Wenwen Qiang, Changwen Zheng, Fuchun Sun

To this end, we propose a novel hierarchical topology isomorphism expertise embedded graph contrastive learning framework, which introduces knowledge distillation to empower GCL models to learn hierarchical topology isomorphism expertise at both the graph tier and the subgraph tier.

Contrastive Learning Graph Representation Learning +1

Rethinking Dimensional Rationale in Graph Contrastive Learning from Causal Perspective

1 code implementation16 Dec 2023 Qirui Ji, Jiangmeng Li, Jie Hu, Rui Wang, Changwen Zheng, Fanjiang Xu

To this end, with the purpose of exploring the intrinsic rationale of graphs, we accordingly propose to capture the dimensional rationale from graphs, which has not received sufficient attention in the literature.

Contrastive Learning Meta-Learning

Rethinking Causal Relationships Learning in Graph Neural Networks

1 code implementation15 Dec 2023 Hang Gao, Chengyu Yao, Jiangmeng Li, Lingyu Si, Yifan Jin, Fengge Wu, Changwen Zheng, Huaping Liu

In order to comprehensively analyze various GNN models from a causal learning perspective, we constructed an artificially synthesized dataset with known and controllable causal relationships between data and labels.
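
The controllable-causality idea can be illustrated with a toy, non-graph generator in which one feature truly determines the label and a second feature is only spuriously correlated with it; the distributions and the correlation knob below are arbitrary assumptions for illustration, not the dataset constructed in the paper.

```python
import numpy as np

def make_toy_causal_data(n: int = 1000, spurious_corr: float = 0.9, seed: int = 0):
    """Toy data with a known causal feature and a tunable spurious feature (illustration only)."""
    rng = np.random.default_rng(seed)
    causal = rng.normal(size=n)                    # truly determines the label
    label = (causal > 0).astype(int)
    # the spurious feature agrees with the label only `spurious_corr` of the time
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, label, 1 - label) + rng.normal(scale=0.1, size=n)
    X = np.stack([causal, spurious], axis=1)
    return X, label

X, y = make_toy_causal_data(spurious_corr=0.95)
print(X.shape, y.mean())
```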

Hacking Task Confounder in Meta-Learning

1 code implementation10 Dec 2023 Jingyao Wang, Yi Ren, Zeen Song, Jianqi Zhang, Changwen Zheng, Wenwen Qiang

However, our experiments reveal an unexpected result: there is negative knowledge transfer between tasks, affecting generalization performance.

Meta-Learning Transfer Learning

Unleash Model Potential: Bootstrapped Meta Self-supervised Learning

no code implementations28 Aug 2023 Jingyao Wang, Zeen Song, Wenwen Qiang, Changwen Zheng

The long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision, mimicking three advantages of human cognition: i) no need for labels, ii) robustness to data scarcity, and iii) learning from experience.

Meta-Learning Self-Supervised Learning

Information Theory-Guided Heuristic Progressive Multi-View Coding

no code implementations21 Aug 2023 Jiangmeng Li, Hang Gao, Wenwen Qiang, Changwen Zheng

To this end, we rethink the existing multi-view learning paradigm from the perspective of information theory and then propose a novel information theoretical framework for generalized multi-view learning.

Contrastive Learning MULTI-VIEW LEARNING +1

CSSL-RHA: Contrastive Self-Supervised Learning for Robust Handwriting Authentication

no code implementations18 Jul 2023 Jingyao Wang, Luntian Mou, Changwen Zheng, Wen Gao

In this paper, we propose a novel Contrastive Self-Supervised Learning framework for Robust Handwriting Authentication (CSSL-RHA) to address these issues.

Self-Supervised Learning

Towards the Sparseness of Projection Head in Self-Supervised Learning

no code implementations18 Jul 2023 Zeen Song, Xingzhe Su, Jingyao Wang, Wenwen Qiang, Changwen Zheng, Fuchun Sun

In recent years, self-supervised learning (SSL) has emerged as a promising approach for extracting valuable representations from unlabeled data.

Contrastive Learning Self-Supervised Learning

Towards Task Sampler Learning for Meta-Learning

1 code implementation18 Jul 2023 Jingyao Wang, Wenwen Qiang, Xingzhe Su, Changwen Zheng, Fuchun Sun, Hui Xiong

We obtain three conclusions: (i) there is no universal task sampling strategy that can guarantee the optimal performance of meta-learning models; (ii) over-constraining task diversity may incur the risk of under-fitting or over-fitting during training; and (iii) the generalization performance of meta-learning models is affected by task diversity, task entropy, and task difficulty.

Few-Shot Learning General Knowledge

Unbiased Image Synthesis via Manifold Guidance in Diffusion Models

no code implementations17 Jul 2023 Xingzhe Su, Daixi Jia, Fengge Wu, Junsuo Zhao, Changwen Zheng, Wenwen Qiang

In response, we propose a plug-and-play method named Manifold Guidance Sampling, which is also the first unsupervised method to mitigate the bias issue in DDPMs.

Image Generation

A Dimensional Structure based Knowledge Distillation Method for Cross-Modal Learning

no code implementations28 Jun 2023 Lingyu Si, Hongwei Dong, Wenwen Qiang, Junzhi Yu, Wenlong Zhai, Changwen Zheng, Fanjiang Xu, Fuchun Sun

To address this issue, in this paper, we discover the correlation between feature discriminability and dimensional structure (DS) by analyzing and observing features extracted from simple and hard tasks.

Knowledge Distillation

Manifold Constraint Regularization for Remote Sensing Image Generation

no code implementations31 May 2023 Xingzhe Su, Changwen Zheng, Wenwen Qiang, Fengge Wu, Junsuo Zhao, Fuchun Sun, Hui Xiong

This study identifies a previously overlooked issue: GANs exhibit a heightened susceptibility to overfitting on remote sensing images. To address this challenge, this paper analyzes the characteristics of remote sensing images and proposes manifold constraint regularization, a novel approach that tackles overfitting of GANs on remote sensing images for the first time.

Image Generation

Intriguing Property and Counterfactual Explanation of GAN for Remote Sensing Image Generation

no code implementations9 Mar 2023 Xingzhe Su, Wenwen Qiang, Jie Hu, Fengge Wu, Changwen Zheng, Fuchun Sun

Based on this SCM, we theoretically prove that the quality of generated images is positively correlated with the amount of feature information.

counterfactual Counterfactual Explanation +1

Introducing Expertise Logic into Graph Representation Learning from A Causal Perspective

no code implementations20 Jan 2023 Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Xingzhe Su, Fengge Wu, Changwen Zheng, Fuchun Sun

By further observing the ramifications of introducing expertise logic into graph representation learning, we conclude that leading the GNNs to learn human expertise can improve the model performance.

Graph Representation Learning Knowledge Graphs

Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity

2 code implementations16 Sep 2022 Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Farid Razzak, Ji-Rong Wen, Hui Xiong

To this end, we propose a methodology, specifically the consistency and complementarity network (CoCoNet), which leverages strict global inter-view consistency and local cross-view complementarity-preserving regularization to comprehensively learn representations from multiple views.

Representation Learning Self-Supervised Learning

MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning

2 code implementations16 Sep 2022 Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong

As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample.

Contrastive Learning Meta-Learning +1

Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective

1 code implementation26 Aug 2022 Jiangmeng Li, Yanan Zhang, Wenwen Qiang, Lingyu Si, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Fuchun Sun

To understand the reasons behind this phenomenon, we revisit the learning paradigm of knowledge distillation on the few-shot object detection task from the causal theoretic standpoint, and accordingly, develop a Structural Causal Model.

Few-Shot Learning Few-Shot Object Detection +4

Robust Causal Graph Representation Learning against Confounding Effects

1 code implementation18 Aug 2022 Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen Zheng, Fuchun Sun

This observation reveals that there exist confounders in graphs, which may interfere with the model learning semantic information, and current graph representation learning methods have not eliminated their influence.

Graph Representation Learning

Interventional Contrastive Learning with Meta Semantic Regularizer

no code implementations29 Jun 2022 Wenwen Qiang, Jiangmeng Li, Changwen Zheng, Bing Su, Hui Xiong

Contrastive learning (CL)-based self-supervised learning models learn visual representations in a pairwise manner.

Contrastive Learning Representation Learning +1

SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders

1 code implementation21 Jun 2022 Gang Li, Heliang Zheng, Daqing Liu, Chaoyue Wang, Bing Su, Changwen Zheng

In this paper, we explore a potential visual analogue of words, i.e., semantic parts, and we integrate semantic information into the training process of MAE by proposing a Semantic-Guided Masking strategy.

Language Modelling Masked Language Modeling +1
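
The excerpt mentions a Semantic-Guided Masking strategy but not its exact form; one plausible reading, sketched below, is to mask a fixed fraction of patches within each semantic part so that every part stays partially visible. The part map, masking ratio, and per-part policy are assumptions, not the paper's algorithm.

```python
import torch

def semantic_guided_mask(part_ids: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
    """Sketch: mask `mask_ratio` of the patches inside every semantic part.

    part_ids: (num_patches,) integer part assignment per patch
    returns:  (num_patches,) boolean mask, True = masked
    """
    mask = torch.zeros_like(part_ids, dtype=torch.bool)
    for part in part_ids.unique():
        idx = (part_ids == part).nonzero(as_tuple=True)[0]
        n_mask = int(mask_ratio * idx.numel())
        chosen = idx[torch.randperm(idx.numel())[:n_mask]]
        mask[chosen] = True
    return mask

# toy usage: 16 patches assigned to 4 semantic parts
part_ids = torch.randint(0, 4, (16,))
print(semantic_guided_mask(part_ids, mask_ratio=0.5))
```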

Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt

no code implementations23 May 2022 Jiangmeng Li, Wenyi Mo, Wenwen Qiang, Bing Su, Changwen Zheng

Vision-language models are pre-trained by aligning image-text pairs in a common space so that the models can deal with open-set visual concepts by learning semantic information from textual labels.

Domain Generalization Language Modelling

MetAug: Contrastive Learning via Meta Feature Augmentation

2 code implementations10 Mar 2022 Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Hui Xiong

We employ a meta-learning technique to build an augmentation generator that updates its network parameters based on the performance of the encoder.

Contrastive Learning Informativeness +1
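
As a rough illustration of an augmentation generator trained against the encoder's objective, the first-order sketch below feeds encoder features through a small augmenter and lets a contrastive loss update both modules. The architecture, loss, and single-step update are simplified assumptions and do not reproduce the meta-learning scheme of MetAug.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
augmenter = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
opt = torch.optim.Adam(list(encoder.parameters()) + list(augmenter.parameters()), lr=1e-3)

def info_nce(z1, z2, temp=0.2):
    """Standard InfoNCE with in-batch negatives; positives lie on the diagonal."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temp
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

x = torch.randn(16, 3, 32, 32)         # toy image batch
z = encoder(x)
z_aug = augmenter(z)                   # learned feature-level augmentation
loss = info_nce(z, z_aug)
opt.zero_grad()
loss.backward()                        # the augmenter is updated by the encoder's loss
opt.step()
print(float(loss))
```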

Robust Local Preserving and Global Aligning Network for Adversarial Domain Adaptation

no code implementations8 Mar 2022 Wenwen Qiang, Jiangmeng Li, Changwen Zheng, Bing Su, Hui Xiong

We conduct a theoretical analysis of the robustness of the proposed RLPGA and prove that the robust information-theoretic loss and the local preserving module are beneficial for reducing the empirical risk of the target domain.

Unsupervised Domain Adaptation

Bootstrapping Informative Graph Augmentation via A Meta Learning Approach

1 code implementation11 Jan 2022 Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Fuchun Sun, Changwen Zheng

To this end, we propose a novel approach to learning a graph augmenter that can generate an augmentation with uniformity and informativeness.

Contrastive Learning Informativeness +2

SimViT: Exploring a Simple Vision Transformer with sliding windows

2 code implementations24 Dec 2021 Gang Li, Di Xu, Xing Cheng, Lingyu Si, Changwen Zheng

Although vision Transformers have achieved excellent performance as backbone models in many vision tasks, most of them aim to capture global relations among all tokens in an image or a window, which disrupts the inherent spatial and local correlations between patches in the 2D structure.
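
The sliding-window idea can be sketched as single-head attention in which every token attends only to a k x k spatial neighbourhood; the layout, zero-padding at borders, and lack of multi-head handling below are simplifications assumed for illustration rather than the SimViT code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlidingWindowAttention(nn.Module):
    """Sketch: each token attends to its k x k neighbourhood (zero-padded at borders)."""

    def __init__(self, dim: int, window: int = 3):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.window = window
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map of patch tokens
        B, H, W, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def neighbourhoods(t):
            t = t.permute(0, 3, 1, 2)                                         # (B, C, H, W)
            t = F.unfold(t, self.window, padding=self.window // 2)            # (B, C*k*k, H*W)
            return t.view(B, C, self.window ** 2, H * W).permute(0, 3, 2, 1)  # (B, HW, k*k, C)

        k_n, v_n = neighbourhoods(k), neighbourhoods(v)
        q = q.view(B, H * W, 1, C)
        attn = ((q * k_n).sum(-1) * self.scale).softmax(dim=-1)               # (B, HW, k*k)
        out = (attn.unsqueeze(-1) * v_n).sum(dim=2)                           # (B, HW, C)
        return self.proj(out).view(B, H, W, C)

# toy usage: 8 x 8 grid of 32-dim tokens
attn = SlidingWindowAttention(dim=32, window=3)
print(attn(torch.randn(2, 8, 8, 32)).shape)  # torch.Size([2, 8, 8, 32])
```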

Domain-Invariant Representation Learning with Global and Local Consistency

no code implementations29 Sep 2021 Wenwen Qiang, Jiangmeng Li, Jie Hu, Bing Su, Changwen Zheng, Hui Xiong

In this paper, we analyze the existing representation learning framework for unsupervised domain adaptation and show that the learned feature representations of the source-domain samples exhibit discriminability, compressibility, and transferability.

Representation Learning Unsupervised Domain Adaptation

Information Theory-Guided Heuristic Progressive Multi-View Coding

no code implementations6 Sep 2021 Jiangmeng Li, Wenwen Qiang, Hang Gao, Bing Su, Farid Razzak, Jie Hu, Changwen Zheng, Hui Xiong

To this end, we rethink the existing multi-view learning paradigm from the information theoretical perspective and then propose a novel information theoretical framework for generalized multi-view learning.

Contrastive Learning MULTI-VIEW LEARNING +1
